Should patients with gout avoid thiazides for hypertension?
The decision should be individualized, taking into consideration the degree to which the thiazide increases the serum urate level, whether this increase can be managed without overly complicating the patient’s hypouricemic therapy, and, most importantly, what effect switching to another drug will have on the control of the patient’s hypertension. No study has directly addressed this issue.
My practice in most patients, for reasons I explain below, is to use a thiazide if it helps to control the blood pressure and to adjust the dose of the hypouricemic therapy as needed to reduce the serum urate to the desired level.
THIAZIDES REMAIN IMPORTANT IN ANTIHYPERTENSIVE THERAPY
Many patients with gout also have hypertension, perhaps due in part to the same hyperuricemia that caused their gouty arthritis. It is well documented that thiazide diuretics can raise the serum urate level.1 In some studies2 (but not all3), patients using thiazides had a higher incidence of gouty arthritis. Thus, it is reasonable to ask if we should avoid thiazides in patients with coexistent gout and hypertension.
Many hypertensive patients fail to reach their target blood pressures (although with the “looser” recommendations in the latest guidelines,4 we may appear to be doing a better job). The reasons for failing to reach target pressures are complex and many: physicians may simply not be aggressive enough in pursuing a target blood pressure; patients cannot tolerate the drugs or cannot afford the drugs; and many patients need two or more antihypertensive drugs to achieve adequate control. Thiazides are cheap and effective5 and work synergistically with angiotensin-converting enzyme inhibitors and angiotensin receptor blockers.6
Thus, in many patients, avoiding or discontinuing a thiazide may inhibit our ability to control their hypertension, which is a key contributor to cardiovascular events and chronic kidney injury in patients with gout. Since other diuretics (eg, loop diuretics, which can lower blood pressure but often require split doses) also raise the serum urate level, switching to one of them will not eliminate concern over hyperuricemia.
Thiazides and serum urate
Thiazides increase the serum urate level slightly and in a dose-dependent manner. At the doses commonly used in treating hypertension (12.5 or 25 mg once a day), hydrochlorothiazide increases the serum urate level by 0.8 mg/dL or less in patients with normal renal function, as shown in a number of older hypertension treatment trials and in a recent prospective study.1 The effect of chlorthalidone is similar.
In patients with chronic gout treated with a xanthine oxidase inhibitor (allopurinol or febuxostat) to lower the serum urate to the American College of Rheumatology’s recommended target level7 of less than 6.0 mg/dL (or less than 5.0 mg/dL in the British Society for Rheumatology guidelines), this small elevation in serum urate is unlikely to negate the clinical efficacy of these drugs when dosing is optimized. Small studies have demonstrated a clinically insignificant pharmacodynamic interaction between thiazides and xanthine oxidase inhibitors.8,9 When I add a thiazide to a patient’s regimen, I do not usually need to increase the dose of allopurinol significantly to keep the serum urate level below the desired target.
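To make the arithmetic concrete, consider a hypothetical worked example. The baseline urate value is an illustrative assumption; the 0.8 mg/dL rise and the 6.0 mg/dL target come from the figures above:

```latex
% Hypothetical patient, illustrative numbers only
% Baseline serum urate on a xanthine oxidase inhibitor: 5.5 mg/dL (at the ACR target)
% Maximal expected rise from hydrochlorothiazide 12.5--25 mg/day: 0.8 mg/dL
\begin{align*}
\text{urate after adding the thiazide} &\le 5.5 + 0.8 = 6.3 \text{ mg/dL} \\
6.3 \text{ mg/dL} &> 6.0 \text{ mg/dL (ACR target)}
\end{align*}
```

Even in this worst case, the excess over target is small, which is why a modest uptitration of the xanthine oxidase inhibitor, rather than abandoning the thiazide, usually suffices.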
Switch antihypertensive therapy
Occasionally, in a patient with chronic gout and mild hypertension who has a serum urate level marginally above the estimated precipitation threshold of 6.7 mg/dL, it is reasonable to simply switch the thiazide to another antihypertensive, such as losartan. Losartan is a weak uricosuric and can lower the serum urate level slightly, possibly making the addition of another hypouricemic agent unnecessary, while still controlling the blood pressure with a single pill. This decision must be individualized, taking into consideration the efficacy and cost of the alternative antihypertensive drug, as well as the potential but as yet unproven cardiovascular and renal benefits of lowering the serum urate with a more potent hypouricemic to a degree not likely to be attained with losartan alone.
Continue thiazide, adjust gout therapy
Discontinuing a thiazide or switching to another antihypertensive drug may increase the cost and decrease the efficacy of antihypertensive therapy. Continuing thiazide therapy and, if necessary, adjusting hypouricemic therapy will not worsen the control of the serum urate level or gouty arthritis, and in most patients will not complicate the management of gout.
ASPIRIN AND HYPERURICEMIA
In answer to a separate but related question, aspirin in low doses for cardioprotection (81 mg daily) also need not be stopped in patients with hyperuricemia or gout in an effort to better control the serum urate level. Low-dose aspirin increases the serum urate level by about 0.3 mg/dL. Since patients with gout are at higher risk of cardiovascular disease, metabolic syndrome, and chronic kidney disease, many will benefit from low-dose aspirin therapy.
- McAdams DeMarco MA, Maynard JW, Baer AN, et al. Diuretic use, increased serum urate levels, and risk of incident gout in a population-based study of adults with hypertension: the Atherosclerosis Risk in Communities cohort study. Arthritis Rheum 2012; 64:121–129.
- Choi HK, Soriano LC, Zhang Y, Rodríguez LA. Antihypertensive drugs and risk of incident gout among patients with hypertension: population based case-control study. BMJ 2012; 344:d8190.
- Hueskes BA, Roovers EA, Mantel-Teeuwisse AK, Janssens HJ, van de Lisdonk EH, Janssen M. Use of diuretics and the risk of gouty arthritis: a systematic review. Semin Arthritis Rheum 2012; 41:879–889.
- James PA, Oparil S, Carter BL, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014; 311:507–520. doi:10.1001/jama.2013.284427
- Fuchs FD. Diuretics: still essential drugs for the management of hypertension. Expert Rev Cardiovasc Ther 2009; 7:591–598.
- Sood N, Reinhart KM, Baker WL. Combination therapy for the management of hypertension: a review of the evidence. Am J Health Syst Pharm 2010; 67:885–894.
- Khanna D, Fitzgerald JD, Khanna PP, et al; American College of Rheumatology. 2012 American College of Rheumatology guidelines for management of gout. Part 1: systematic nonpharmacologic and pharmacologic therapeutic approaches to hyperuricemia. Arthritis Care Res (Hoboken) 2012; 64:1431–1446.
- Löffler W, Landthaler R, de Vries JX, et al. Interaction of allopurinol and hydrochlorothiazide during prolonged oral administration of both drugs in normal subjects. I. Uric acid kinetics. Clin Investig 1994; 72:1071–1075.
- Grabowski B, Khosravan R, Wu JT, Vernillet L, Lademacher C. Effect of hydrochlorothiazide on the pharmacokinetics and pharmacodynamics of febuxostat, a non-purine selective inhibitor of xanthine oxidase. Br J Clin Pharmacol 2010; 70:57–64.
The impact of anti-TNF therapy on the nonspecialist
About 15 years ago, the first anti-tumor necrosis factor (anti-TNF) drugs received approval for treating Crohn disease and rheumatoid arthritis, and a new era of pharmacotherapy was born. A few years before that, I was at a meeting discussing the potential benefits and pitfalls of these new biologic therapies, and I opined that no one would pay for them on an ongoing basis unless they were amazingly effective—which was unlikely, as the drugs only affected a single cytokine. And if they were effective, they would undoubtedly be associated with a host of opportunistic infections. Given my predictive skills, it is no surprise that Warren Buffett rarely calls to ask my opinion.
Clearly, anti-TNF drugs are effective and have raised the bar for how we define successful response to therapy. But recent studies in early rheumatoid arthritis indicate that they may not be much better than traditional combination therapy or monotherapy with methotrexate if the methotrexate and the other drugs are given and tolerated at full dose. This is clearly not the case for other inflammatory diseases.
Anti-TNF drugs and other biologics are now part of the arsenal of most medical specialists, so outpatient internists and hospitalists increasingly encounter patients taking these drugs. Since patients with systemic inflammatory disease have an increased prevalence of cardiovascular disease, cardiologists are also seeing more patients taking these drugs. Thus, the overview by Hadam et al in this issue of the Journal on the risks of biologic therapies is relevant to many readers.
Almost all prescriptions and requests for insurance approval for these drugs are written by subspecialists familiar with their risks. But patients may ask their primary care physicians about the tests and vaccines recommended for those about to start anti-TNF therapy. Before starting anti-TNF therapy, all patients should be tested for previous exposure to tuberculosis and should be treated for latent tuberculosis if appropriate. Blocking TNF leads to a breakdown of the protective granulomatous inflammatory response that contains the mycobacteria and, as with corticosteroid treatment, results in reactivation of the disease. Interestingly, the reactivation is quite often not in the lungs. And since anti-TNF therapy dramatically blunts the inflammatory response, as does corticosteroid therapy, reactivation may appear as nonspecific malaise or may be misinterpreted as a flare in the underlying disease, and thus it may go undiagnosed. Patients should also be screened for exposure to hepatitis B virus. Vaccines, particularly live vaccines, are generally given if possible before starting anti-TNF therapy, and all patients on chronic therapy should get annual influenza vaccines.
Despite initial concerns about a dramatically increased risk of routine and opportunistic infections in patients on anti-TNF therapy, this has not been observed. Even in the perioperative setting, the increased risk of infection is modest. What has struck me, however, is the way these drugs, like steroids, blunt and mask the signs of infection. I have seen deep soft-tissue, intra-abdominal, and native and prosthetic joint infections go unsuspected for days or even weeks in the absence of significant fever, elevation in acute-phase markers, or dramatic local findings. We must be extra vigilant.
There is a fear of malignancy arising or recurring in patients on anti-TNF therapy. This fear is certainly promoted by the required black-box warning about the risk of lymphoma and other malignancies that these drugs carry. The evidence of a significant increase in risk of malignancies other than hepatosplenic T-cell lymphoma in children and nonmelanoma skin cancers is not strong and is likely slanted by an increased risk of certain malignancies associated with the underlying rheumatic disease and other previous therapies. Nonetheless, I am reluctant to use these drugs in patients with a history of melanoma.
We still have much to learn about these drugs. Why are specific agents more effective in some diseases than others? For example, etanercept treats rheumatoid arthritis but not Crohn disease. Also, we still do not know how they can elicit reversible demyelinating disorders or autoantibodies with or without associated drug-induced lupus syndromes. Even odder is the occurrence of psoriasis induced by anti-TNF drugs, despite their being used to treat psoriasis.
My initial skepticism regarding anti-TNF drugs was unjustified. They are being tested and used successfully in an increasing number of diseases. But we all need to increase our familiarity with their unique risks and somehow find a way to deal with their unique cost.
Return of the 'pisse-mongers,' this time with data
Great effort has been spent on identifying easily measured biomarkers to predict the progression of coronary disease and chronic kidney disease (CKD). Interestingly, these two disease processes seem to share some biomarkers and perhaps some pathogenic mechanisms. An ultimate hope is that some of these markers will be found to also contribute directly to organ dysfunction and be amenable to therapy. Blood pressure and (in many people’s minds) low-density lipoprotein cholesterol fulfill this hope. The jury remains out on C-reactive protein and serum urate. There are others.
In this issue of the Journal, Stephen et al review the data indicating that albuminuria helps predict the progression of CKD, coronary disease, ventricular remodeling, and, in some studies, all-cause mortality. Proteinuria has generally been assumed to be a marker of renal injury, but, the authors point out, albumin can under some circumstances initiate inflammatory mechanisms and stimulate mediators of fibrosis.
Although not mentioned by Stephen et al, albumin (like hemoglobin) is susceptible to nonenzymatic glycosylation in patients with diabetes. There is a hint in the literature that glycosylated albumin may be preferentially excreted. Its effects on various tissues are incompletely studied, but it strikes me that perhaps this molecule plays a unique pathogenic role in diabetic renal and vascular disease, even more than native albumin. Further evaluation of this specific marker may lead to even stronger associations (although in a select population of patients with poorly controlled diabetes).
The focus on urine as a fluid with diagnostic and predictive characteristics is certainly not new. Both Hippocrates and Galen recognized the value of inspecting urine. Uroscopy (now urinalysis) may be the oldest surviving laboratory test. Recently, my friend Joe Nally, a coauthor with Stephen et al, shared with me a paper detailing the romantic yet checkered history of urinalysis.1
Gilles de Corbeil in the 12th century wrote a poem on judging urine, intending it as an aid for remembering the supposed 20 different diagnostic colors of urine and describing in detail the use of the urine flask, a bladder-shaped container for studying the partitioning of the urine colors and substance as representative of the diseased parts of the body. A urine flask was even illustrated in a version of Chaucer’s Canterbury Tales as a recognized accoutrement of the stylish physician (Figure 1). The “art” of uroscopy grew so successful over the centuries as a component of rampant medical charlatanry (casting no aspersions, of course, on current nephrologists) that the Royal College of Physicians in 1601 felt pressed to attack the “pisse-mongers” by stating, “It is ridiculous and foolish to divine the…course of disease…from the inspection of urine.”1 This dictate was apparently ignored then, but seemingly is too frequently followed by clinicians today, contributing to the oft-delayed diagnosis of glomerulonephritis and other renal diseases.
In 1637, Thomas Brian published The Pisse-Prophet or Certaine Pisse Pot Lectures, in which he railed against the witchcraft of uroscopy, arguing that it should be performed only by university-trained physicians. Jump forward to 1827, when Richard Bright elegantly described acute glomerulonephritis, although not the microscopic findings that would be illustrated in accurate detail by Golding Bird in his 1844 treatise, Urinary Deposits. Sitting on the bookshelf behind my desk is a copy of Richard W. Lippman’s Urine and Urinary Sediment: A Practical Manual and Atlas (1957). I have no urine flask—rheumatologists know their limitations.
As we enter 2014, all of us at the Journal offer you our sincere wishes for a personally healthy and universally peaceful new year.
- Haber MH. Pisse prophecy: a brief history of urinalysis. Clin Lab Med 1988; 8:415–430.
To Phil, adieu with many thanks and much gratitude
Phil Canuto, the executive editor of the Cleveland Clinic Journal of Medicine for almost 20 years, is retiring. Known to relatively few of our authors and peer reviewers, Phil has been the invisible force behind the current print and digital face and body of CCJM.
Few medical journals have a persona that connects with their readers, relating in ways that lead to a bonding between reader and journal that extends beyond the content of the monthly articles. We have strived to attain such a relationship with you, our readers, and I will take the liberty of assuming we have to some extent succeeded. I devote my space this month to talking with you about Phil and his relationship with the Journal.
I have frequently described our journalistic mission as publishing articles for our readers—not for our authors. Phil helped translate this concept into reality by insisting that articles be readable and understandable and always have a clearly stated “bottom-line” message.
Phil joined the CCJM in 1995. He came with genuine journalistic and writing skills and a conviction that medical writing for and by physicians could and should have the same clarity that provides effective information transfer in other venues. He had previously worked as a reporter and medical writer at The Akron Beacon Journal newspaper, and prior to that had been the public information officer for the USDA Food and Nutrition Service. He holds a master’s degree from Medill School of Journalism at Northwestern University.
Phil incorporated basic and sound principles of writing into CCJM, something still not uniformly done in medical journals. He pushed for each article to tell a story and clearly communicate a message to the practicing clinician that could translate into improved patient care. Bright and experienced expert clinicians were coaxed to translate their complex topics and opinions into educational messages that were accurate, relevant, and accessible. Based on unsolicited feedback from our readers and the results of standard media industry surveys, he was right on target: clarity is not the antithesis of erudition (although not all authors have shared this perspective).
He was no publishing Luddite. Phil was the driver behind continuously upgrading our open-access CCJM website—enhancing CME options, creating apps for other media, incorporating an online manuscript-tracking system, and tracking and cataloguing patterns of reader use in order to link growth to the needs of our readers. He enabled CCJM to become an early routine user of plagiarism-detection software. With all of this forward positioning, he also found time to champion the electronic archiving of all 81 years of the Journal (which you can now freely access on the Journal’s website).
These very tangible and significant contributions pale in comparison with his impact on the internal operations of the Journal and on my own maturation as editor in chief (and I speak here as well on behalf of previous physician editors). He has been a constant voice of reason, somehow able to recognize potential controversies and develop strategies to ameliorate the personal conflict while not minimizing valid intellectual differences.
A product of the publication pressures of daily newspapers, he would overlook no opportunity to remind me to move manuscripts along and think of potential topics that we should discuss—his admonition to “feed the beast” is stenciled indelibly in my brain. And he never excused himself from equal responsibility for the feeding. He regularly perused subspecialty journals looking for advances in treatment and diagnosis, and through many (fortunately well-weathered) medical adventures of his own, few of his physicians have escaped his probing question, “What’s coming that internists should know about, and who can write about it?”
We will miss his equipoise in dealing with the multiple challenges that frequently arise in the running of a monthly journal. We will miss his many skills, and his enthusiasm and commitment to the Journal’s success in achieving our mission. And I will miss his advice, his creativity, his balanced counsel and support, and his willingness to edit and provide honest feedback on whatever writings I sent his way.
From all of us at CCJM, thank you, Phil, for being you, and for a job very well done. Sleep late and read the newspaper.
PS: Phil—Please note that although too wordy, I at least introduced the “story line” in the first sentence.
When people with diabetes go to surgery
Over the past decade, recommendations about the ideal glucose target in hospitalized diabetic patients have fluctuated. The controversy has extended to diabetic patients in various types of intensive care units and to those headed to the operating room. Although proposals exist on how to manage diabetes in the operating room, including intraoperative insulin infusions, outcomes probably depend more on how glucose is managed during the patient’s postoperative stay in the hospital. For patients who are less critically ill and less medically complex, continuous insulin infusions are used infrequently, and insulin is often prescribed by algorithm or, archaically, by some form of “catch-up” sliding scale. Studies indicate that even the fairly loose glucose target of 70 to 180 mg/dL is achieved consistently in only a few patients.1
In view of a number of observations, including the link between hyperglycemia and postoperative wound infections, studies were designed to test the hypothesis that aggressively keeping glucose levels quite low in critically ill and postoperative diabetic patients would be beneficial. Instead, most of these studies found that overly tight glucose control in these settings led to untoward outcomes—and not only as the result of hypoglycemic episodes. Aiming for a modest serum glucose target of 150 to 200 mg/dL can significantly reduce the postoperative death rate, but the beneficial reduction is no greater if the target is less than 150 mg/dL.
With a looser glucose target, pre- and perioperative management of insulin-dependent diabetic patients can be simplified. Dobri and Lansang discuss the key practical principles of managing insulin before the patient goes to the operating suite. They emphasize relevant pearls of insulin physiology and discuss several scenarios we often encounter.
In fact, the principles they review are equally useful to remember when we admit diabetic patients to the hospital with orders to keep them “npo” while planning and awaiting tests or other procedures. A key take-home point is that severely insulinopenic patients require some exogenous basal insulin, even when not eating.
- Lopes R, Albrecht A, Williams J, et al. Postoperative glucose control following coronary artery bypass graft surgery: predictors and clinical outcomes. J Am Coll Cardiol 2013; 61:e1601.
An uncommon syndrome makes us reflect on our approach to diagnosis
In this issue of the Journal, Dr. Soumya Chatterjee and colleagues discuss the antisynthetase syndrome. Although uncommon, this syndrome is important for internists and subspecialists to be aware of. Patients present in several different ways, and potentially life-threatening organ involvement may initially not be recognized or may not be linked with other components of the syndrome, such as fever and involvement of the lungs, muscles, heart, and esophagus.
I am currently on our inpatient rheumatology consultation service, and so I am reminded daily of the challenges hospitalists and subspecialists confront in ordering tests while trying to balance limiting length of stay with cost-efficiency and the desire to obtain a correct diagnosis. And I am repeatedly sensitized to several common test-ordering pitfalls intrinsic to the evaluation of patients with multisystem disease, including myositis. Most have a shared theme—limited time is spent in thoughtful reflection before ordering.
Patients with myositis rarely present with the textbook description of proximal muscle weakness. They describe fatigue, malaise, and sometimes a generalized sense of weakness. It is the probing questioning of their functional capacity and focused examination that reveal that the weakness is characterized by difficulty getting up off the floor, out of a low chair, or off the toilet. Then, with further questioning, some patients note that their fatigue and tiredness may also include getting winded easily with exertion, such as when climbing stairs, thus raising the question of cardiac dysfunction, pulmonary hypertension, or interstitial lung disease.
The responses to those probing questions and the subsequent examination should transform the interpretation of elevated aminotransferase levels (“liver tests”: AST and ALT) from liver disease into suspicion of muscle disease and the appropriate ordering of the creatine kinase level (avoiding liver imaging and hepatology consultation). The carefully repeated and now focused neurologic examination distinguishes the initial “poor cooperation” from the proximal weakness of myopathy. The probing interview leads to the performance of a focused physical examination that frames the appropriate interpretation of the routinely obtained “admission lab studies”!
The thoughtful history and examination are the basic stuff of clinical medicine that can easily be pushed aside by any of us as we deal with the tensions of high-volume, “high-throughput” medical care. It is a low-resistance path from hearing the symptom of fatigue with elevated “liver enzymes” to immediately checking ferritin, ceruloplasmin, and a hepatitis screen in preparation for getting a liver biopsy. It is easy to go through the motions without reflection. Easy, but sometimes wrong. And it is just as easy (but likely to be costly and unhelpful) to identify a patient prematurely with “possible autoimmune disease” and to immediately order a panoply of antinuclear and autoimmune serologies, including the Jo-1 autoantibody test.
As Dr. Chatterjee et al point out, we must continuously reflect on our diagnoses, for even after we navigate the pitfalls and avoid missing the diagnosis of myositis, if we don’t continuously assess all the patient’s symptoms, repeat the examination in a directed manner, and then look for circulating Jo-1 antibody when appropriate, we may well miss the opportunity to recognize that our patient’s ongoing fatigue with exertion is a reflection of the well-described association of myositis with interstitial lung disease (which may warrant a change in therapy), and not steroid myopathy or just poor conditioning.
Alternatively, in evaluating a patient who describes a year of feeling tired, suffering generalized muscle pains with low-grade fevers with temperatures of 99.8°F, and total exhaustion for 3 days after cleaning the oven, testing for antinuclear antibodies, extractable nuclear antigen antibodies, and a “vasculitis panel” in anticipation of a rheumatology consultation is not likely to be useful therapeutically or diagnostically.
Despite the daily pressures, we need to keep ourselves grounded in the fundamentals of clinical care: careful listening, purposeful examination, and directed use of laboratory tests and imaging. The downstream consequences of ordering tests for the sake of efficient throughput are quite real, and thoughtful test ordering is one step toward quality care, as well as cost-effective care.
In future months, the Journal will delve more deeply into test ordering when, in a joint effort with the American College of Physicians, we will be discussing the use and misuse of specific tests.
It is not a ‘mini’-stroke, it is a call to action
When a patient tells a physician about a sudden episode of weakness, loss of vision, or loss of sensation that occurred but then quickly resolved, both the patient and the physician may feel a sense of relief. In many cases, the patient may not even seek medical evaluation. These events, when vascular in origin and not seizures or migraines, have been termed transient ischemic attacks (TIAs) by physicians, and are often called “mini-strokes” by patients. But as discussed by Drs. Shruti Sonni and David Thaler in this issue of the Journal, there is nothing “mini” about their significance.
In some ways, the perception of TIA (as opposed to stroke) has paralleled our understanding and initial misperception of non-ST-segment elevation myocardial infarction (NSTEMI). This type of acute coronary event was thought to be less severe than acute ST-elevation MI (STEMI), and patients with NSTEMI and unstable angina have historically not received the aggressive acute and preventive therapy received by patients with STEMI. But with the advent of more sensitive markers of myocardial necrosis, we now know that NSTEMI and unstable angina can be associated with significant tissue injury, and that the outcome after a year or so can be the same as or worse than if the initial injury was associated with ST-segment elevation.
A similar story has evolved with TIA. With sensitive diffusion-weighted magnetic resonance imaging, brain injury can often be detected even when it is not seen on computed tomography. Yet patients with TIA are often not evaluated as completely for reversible vascular lesions as patients with stroke, and they may not receive aggressive secondary prevention. This matters because, shortly after suffering a TIA, a patient is even more likely to have another neurologic event than if the initial event had been a small stroke, and that next event will more likely be a stroke with a residual neurologic deficit.
All are reasons to educate our older patients—particularly those with diabetes, atrial fibrillation, peripheral vascular disease, and hypertension—about the significance of even apparently self-limited neurologic events. A TIA is a major warning signal.
The pipe and the plug: Is unblocking arteries enough?
It seems anachronistic that we still debate how best to fix the plumbing of clogged arteries. Our understanding of the pathogenesis of acute coronary syndromes has evolved in leaps and bounds since the first attempts at coronary revascularization. And yet, as Aggarwal et al discuss in their analysis of the FREEDOM trial,1 practical and technical questions about how best to open coronary blockages remain clinically relevant, even as we develop strategies to reverse the atherosclerotic processes that created those blockages.
Many acute coronary events arise not from coronary stenoses but from unstable, vulnerable plaques, which may lie some distance away from the stable stenoses and thus escape detection. These unstable plaques, embedded within the remodeled arterial wall and lacking a protective fibrous cap, may rupture and cause an acute thrombotic occlusion. Statins, aspirin, and perhaps anti-inflammatory drugs (now including colchicine) decrease acute coronary events, likely by interfering with the chain of events initiated by plaque rupture.
So why should coronary artery bypass grafting (CABG) be superior to drug-eluting stents (with antiplatelet therapy) in some diabetic patients, as the FREEDOM trial1 found?
Stenting and balloon dilation repair discrete areas of critical narrowing presumed to be contributing to downstream myocardial ischemia. But areas of vulnerable, non-calcified plaque (with outward remodeling of the vessel wall but generally preserved lumen integrity) may be geographically separated from the identified stenosis and thus be left untreated by stenting. On the other hand, CABG may circumvent “silent” areas of nascent vulnerable plaque that, if left in place, might later rupture and cause acute syndromes or death.
This explanation is clearly hypothetical and one of many possibilities. But paying attention to the new biology of the atherosclerotic process should lead us all to be more aggressive in using treatments shown to reduce the progression of coronary artery disease and the occurrence of acute coronary syndromes. This is especially true in patients with diabetes, who are known to have diffuse coronary involvement. So even as we more fully recognize the value of CABG in these patients, perhaps if we intervene earlier—with statins, hypertension control, improved diet, smoking cessation, prevention of chronic kidney disease, antiplatelet therapy, and anti-inflammatory therapy—we will not need it.
- Farkouh ME, Domanski M, Sleeper LA, et al. Strategies for multivessel revascularization in patients with diabetes. N Engl J Med 2012; 367:2375–2384.
The electronic health record: Getting more bang for the click
The promise of the electronic health record (EHR) has not yet been realized. I find it extremely beneficial to have access to shared, accurate information during each patient encounter, but my expectations are still far ahead of reality. We should demand more-flexible software with more clinician-tailored utilities—more bang for the click. However, we users also need to improve.
Benefits and challenges of computers in the examination room
With the EHR, the monitor and keyboard have been interposed between the physician and patient. Physicians now must type or dictate their office notes, enter electronic orders and prescriptions, and remember to use specific phrases to fulfill compliance regulations. Many physicians have to see more patients in less time while incorporating the EHR into each visit. Under these new pressures, some have chosen to retire early or to drastically change the scope of their practice.
I too experience these challenges. I have more electronic tasks to do during each visit and wonder if this is really the best use of my time. I run even further behind than I used to, and I almost uniformly have to apologize to my patients for being late. I am not the world’s best typist. Patients note my clerical challenges, and some of them offer to type in their information for me—a bonding experience I could do without.
Lest the computer become the primary object of my attention, I push back from the keyboard intermittently, with my hands in my lap, or make physical contact with my (human) patient. I try to make eye contact as we converse, and patients leave with a legible—albeit possibly misspelled—summary. During visits, I can share graphs of my patient’s lab tests or vital signs over time, and I hope that more sophisticated EHRs will correlate this information with medication changes and other events. I have less work to do at the end of the day than I used to, since during my clinic time, multitasking as I go, I send prescriptions to pharmacies, review test results, and send letters to my patients and their referring physicians about their test results and my suggestions. I encourage patients to e-mail me directly with their questions or problems as they arise—an opportunity that many have used and none have abused. Technology is not all bad.
How the EHR needs to improve
The EHR is still evolving, and it needs to be better honed to the needs of the user. My EHR still does not give me reminders for routine screening and monitoring. It is not yet tailored to the specific problems shared by many of my patients. It does not yet provide snapshots or specifics about tailored measures of quality of my practice.
As nicely summarized by Dr. William Morris in this issue, we need to get the EHR to work for us, not mainly for those responsible for billing and regulatory compliance. But all groups can be served equally; “alerts” can be activated as screen pop-ups to drive physician behavior towards best practice—with the caveat that alerts must be meaningful, triggered intelligently, and individualized to avoid pop-up fatigue.
In addition, as Dr. James Stoller discusses in this issue, the solitary work involved in using the EHR has also affected the natural collegial interchange that took place around the chart rack in the past. He, Dr. Morris, and I agree that direct physician-physician communication has diminished in our medical centers. But I believe that this is the result of many pressures, not simply the renewed emphasis1 on the physician’s role as scribe and more-cloistered physician keyboarding. We all extol the value of the phone call and face-to-face conversation between consultants and primary care providers, and at times this is necessary to reach decisions of care. But physicians are more strapped for time than ever. In this era of the “flash mob” and instant texting and tweeting, we should be able to promote effective digital dialogue between physicians. We should embrace and facilitate digital communication.
How physicians need to improve
I see many copy-and-paste reiterations of semi-irrelevant (and I suspect, usually not independently confirmed) details of social history and physical examinations from visits gone by. I read completed templates with information that clearly was not collected at the time of the encounter. The potential for misuse and misrepresentation (even without any malevolent intent) with the use of templates and copy-and-paste functions is apparent. These bad practices must stop.
Another problem: some of my colleagues do not read their messages regarding forwarded charts or patient questions within our EHR—“It is just too many e-mails to check.” This reluctance to fully connect in cyberspace is perhaps a case of failing to teach old dogs new tricks, and we do have too much e-mail. But I think it is also partly a result of paranoia over maintaining confidentiality of patient-related communication, at the expense of the efficiency of digital communication. The forwarding of EHR messages to our office e-mail system and phones is blocked by a firewall to ensure privacy—but this makes necessary medical communication more difficult. Is this the right trade-off? If the EHR is to become the hub for tracking patient-centered care, we need to use it to our advantage and to ease access to all aspects of the EHR from multiple venues.
Even when read, our notes leave much to be desired. Beyond the problem with copying and pasting of earlier notes, paragraphs of unfiltered, often irrelevant or untimely lab and imaging reports are repeatedly inserted into multiple notes, while a clearly expressed impression and plan are often nowhere to be found. Some of my colleagues dictate their notes with a delay before uploading, without any concise placeholder summary in the EHR, or they have an assistant or trainee enter a summary, without the nuanced explanation that I need to fully understand the consultant’s reasoning. These behaviors negate the potential power of the EHR.
Bemoaning the new technology and developing work-arounds is not the answer. We need to refine the clinician-computer interface,2 and we need to do much better with our documentation.
The basic principles of physician communication are as important now as they were 50 years ago, when notes were illegibly written with pen and paper and discussed by docs seated around the chart rack in the nursing station. We need to take ownership of the EHR and to insist with other stakeholders that all aspects work better for us and for our patients. This includes the software and, maybe more important, the user.
- Siegler EL. The evolving medical record. Ann Intern Med 2010; 153:671–677.
- Cimino JJ. Improving the electronic health record—are clinicians getting what they wished for? JAMA 2013; 309:991–992.
The promise of the electronic health record (EHR) has not yet been realized. I find it extremely beneficial to have access to shared, accurate information during each patient encounter, but my expectations are still far ahead of reality. We should demand more-flexible software with more clinician-tailored utilities—more bang for the click. However, we users also need to improve.
Benefits and challenges of computers in the examination room
With the EHR, the monitor and keyboard have been interposed between the physician and patient. Physicians now must type or dictate their office notes, enter electronic orders and prescriptions, and remember to use specific phrases to fulfill compliance regulations. Many physicians have to see more patients in less time while incorporating the EHR into each visit. Under these new pressures, some have chosen to retire early or to drastically change the scope of their practice.
I too experience these challenges. I have more electronic tasks to do during each visit and wonder if this is really the best use of my time. I run even further behind than I used to, and I almost uniformly have to apologize to my patients for being late. I am not the world’s best typist. Patients note my clerical challenges, and some of them offer to type in their information for me—a bonding experience I could do without.
Lest the computer become the primary object of my attention, I push back from the keyboard intermittently, with my hands in my lap, or make physical contact with my (human) patient. I try to make eye contact as we converse, and patients leave with a legible—albeit possibly misspelled—summary. During visits, I can share graphs of my patient’s lab tests or vital signs over time, and I hope that more sophisticated EHRs will correlate this information with medication changes and other events. I have less work to do at the end of the day than I used to, since during my clinic time, multitasking as I go, I send prescriptions to pharmacies, review test results, and send letters to my patients and their referring physicians about their test results and my suggestions. I encourage patients to e-mail me directly with their questions or problems as they arise—an opportunity that many have used and none have abused. Technology is not all bad.
How the EHR needs to improve
The EHR is still evolving, and it needs to be better honed to the needs of the user. My EHR still does not give me reminders for routine screening and monitoring. It is not yet tailored to the specific problems shared by many of my patients. It does not yet provide snapshots or specifics about tailored measures of quality of my practice.
As nicely summarized by Dr. William Morris in this issue, we need to get the EHR to work for us, not mainly for those responsible for billing and regulatory compliance. But all groups can be served equally; “alerts” can be activated as screen pop-ups to drive physician behavior towards best practice—with the caveat that alerts must be meaningful, triggered intelligently, and individualized to avoid pop-up fatigue.
In addition, as Dr. James Stoller discusses in this issue, the solitary work involved in using the EHR has also affected the natural collegial interchange that took place around the chart rack in the past. He, Dr. Morris, and I agree that direct physician-physician communication has diminished in our medical centers. But I believe that this is the result of many pressures, not simply the renewed emphasis1 on the physician’s role as scribe and more-cloistered physician keyboarding. We all extol the value of the phone call and face-to-face conversation between consultants and primary care providers, and at times this is necessary to reach decisions of care. But physicians are more strapped for time than ever. In this era of the “flash mob” and instant texting and tweeting, we should be able to promote effective digital dialogue between physicians. We should embrace and facilitate digital communication.
How physicians need to improve
I see many copy-and-paste reiterations of semi-irrelevant (and I suspect, usually not independently confirmed) details of social history and physical examinations from visits gone by. I read completed templates with information that clearly was not collected at the time of the encounter. The potential for misuse and misrepresentation (even without any malevolent intent) with the use of templates and copy-and-paste functions is apparent. These bad practices must stop.
Another problem: some of my colleagues do not read their messages regarding forwarded charts or patient questions within our EHR—“It is just too many e-mails to check.” This reluctance to fully connect in cyberspace is perhaps a case of failing to teach old dogs new tricks, and we do have too much e-mail. But I think it is also partly a result of paranoia over maintaining confidentiality of patient-related communication, at the expense of the efficiency of digital communication. The forwarding of EHR messages to our office e-mail system and phones is blocked by a firewall to ensure privacy—but this makes necessary medical communication more difficult. Is this the right trade-off? If the EHR is to become the hub for tracking patient-centered care, we need to use it to our advantage and to ease access to all aspects of the EHR from multiple venues.
Even when read, our notes leave much to be desired. Beyond the problem with copying and pasting of earlier notes, paragraphs of unfiltered, often irrelevant or untimely lab and imaging reports are repeatedly inserted into multiple notes, while a clearly expressed impression and plan are often nowhere to be found. Some of my colleagues dictate their notes with a delay before uploading, without any concise placeholder summary in the EHR, or they have an assistant or trainee enter a summary, without the nuanced explanation that I need to fully understand the consultant’s reasoning. These behaviors negate the potential power of the EHR.
Bemoaning the new technology and developing work-arounds is not the answer. We need to refine the clinician-computer interface,2 and we need to do much better with our documentation.
The basic principles of physician communication are as important now as they were 50 years ago, when notes were illegibly written with pen and paper and discussed by docs seated around the chart rack in the nursing station. We need to take ownership of the EHR and to insist with other stakeholders that all aspects work better for us and for our patients. This includes the software and, maybe more important, the user.
The promise of the electronic health record (EHR) has not yet been realized. I find it extremely beneficial to have access to shared, accurate information during each patient encounter, but my expectations are still far ahead of reality. We should demand more-flexible software with more clinician-tailored utilities—more bang for the click. However, we users also need to improve.
Benefits and challenges of computers in the examination room
With the EHR, the monitor and keyboard have been interposed between the physician and patient. Physicians now must type or dictate their office notes, enter electronic orders and prescriptions, and remember to use specific phrases to fulfill compliance regulations. Many physicians have to see more patients in less time while incorporating the EHR into each visit. Under these new pressures, some have chosen to retire early or to drastically change the scope of their practice.
I too experience these challenges. I have more electronic tasks to do during each visit and wonder if this is really the best use of my time. I run even further behind than I used to, and I almost uniformly have to apologize to my patients for being late. I am not the world’s best typist. Patients note my clerical challenges, and some of them offer to type in their information for me—a bonding experience I could do without.
Lest the computer become the primary object of my attention, I push back from the keyboard intermittently, with my hands in my lap, or make physical contact with my (human) patient. I try to make eye contact as we converse, and patients leave with a legible—albeit possibly misspelled—summary. During visits, I can share graphs of my patient’s lab tests or vital signs over time, and I hope that more sophisticated EHRs will correlate this information with medication changes and other events. I have less work to do at the end of the day than I used to, since during my clinic time, multitasking as I go, I send prescriptions to pharmacies, review test results, and send letters to my patients and their referring physicians about their test results and my suggestions. I encourage patients to e-mail me directly with their questions or problems as they arise—an opportunity that many have used and none have abused. Technology is not all bad.
How the EHR needs to improve
The EHR is still evolving, and it needs to be better honed to the needs of the user. My EHR still does not give me reminders for routine screening and monitoring. It is not yet tailored to the specific problems shared by many of my patients. It does not yet provide snapshots or specifics about tailored measures of quality of my practice.
As nicely summarized by Dr. William Morris in this issue, we need to get the EHR to work for us, not mainly for those responsible for billing and regulatory compliance. But all groups can be served equally; “alerts” can be activated as screen pop-ups to drive physician behavior towards best practice—with the caveat that alerts must be meaningful, triggered intelligently, and individualized to avoid pop-up fatigue.
In addition, as Dr. James Stoller discusses in this issue, the solitary work involved in using the EHR has also affected the natural collegial interchange that took place around the chart rack in the past. He, Dr. Morris, and I agree that direct physician-physician communication has diminished in our medical centers. But I believe that this is the result of many pressures, not simply the renewed emphasis1 on the physician’s role as scribe and more-cloistered physician keyboarding. We all extol the value of the phone call and face-to-face conversation between consultants and primary care providers, and at times this is necessary to reach decisions of care. But physicians are more strapped for time than ever. In this era of the “flash mob” and instant texting and tweeting, we should be able to promote effective digital dialogue between physicians. We should embrace and facilitate digital communication.
How physicians need to improve
I see many copy-and-paste reiterations of semi-irrelevant (and I suspect, usually not independently confirmed) details of social history and physical examinations from visits gone by. I read completed templates with information that clearly was not collected at the time of the encounter. The potential for misuse and misrepresentation (even without any malevolent intent) with the use of templates and copy-and-paste functions is apparent. These bad practices must stop.
Another problem: some of my colleagues do not read their messages regarding forwarded charts or patient questions within our EHR—“It is just too many e-mails to check.” This reluctance to fully connect in cyberspace is perhaps a case of failing to teach old dogs new tricks, and we do have too much e-mail. But I think it is also partly a result of paranoia over maintaining confidentiality of patient-related communication, at the expense of the efficiency of digital communication. The forwarding of EHR messages to our office e-mail system and phones is blocked by a firewall to ensure privacy—but this makes necessary medical communication more difficult. Is this the right trade-off? If the EHR is to become the hub for tracking patient-centered care, we need to use it to our advantage and to ease access to all aspects of the EHR from multiple venues.
Even when read, our notes leave much to be desired. Beyond the copying and pasting of earlier notes, paragraphs of unfiltered, often irrelevant or untimely lab and imaging reports are repeatedly inserted into multiple notes, while a clearly expressed impression and plan are often nowhere to be found. Some of my colleagues dictate their notes, which are uploaded only after a delay and without any concise placeholder summary in the EHR, or they have an assistant or trainee enter a summary that lacks the nuanced explanation I need to fully understand the consultant’s reasoning. These behaviors negate the potential power of the EHR.
Bemoaning the new technology and developing work-arounds is not the answer. We need to refine the clinician-computer interface,2 and we need to do much better with our documentation.
The basic principles of physician communication are as important now as they were 50 years ago, when notes were illegibly written with pen and paper and discussed by docs seated around the chart rack in the nursing station. We need to take ownership of the EHR and to insist with other stakeholders that all aspects work better for us and for our patients. This includes the software and, maybe more important, the user.
- Siegler EL. The evolving medical record. Ann Intern Med 2010; 153:671–677.
- Cimino JJ. Improving the electronic health record—are clinicians getting what they wished for? JAMA 2013; 309:991–992.
Guidelines or a plea for help?
The US Preventive Services Task Force (USPSTF) recently published a clinical guideline on the use of calcium and vitamin D supplements to prevent fractures in adults.1 This agency “strives to make accurate, up-to-date, and relevant recommendations about preventive services in primary care,”2 and within those parameters it generally succeeds. But I am confused about the value of this specific guideline, and apparently I am not alone.
The task force came to several major conclusions about calcium and vitamin D supplementation to prevent fractures:
- There is insufficient evidence to offer guidance on supplementation in premenopausal women or in men
- One should not prescribe supplementation of 400 IU or less of vitamin D3 or 1 g or less of calcium in postmenopausal women
- The data are insufficient to assess the harm and benefit of higher doses of supplemental vitamin D or calcium.
The task force stuck to its rules and weighed the data within the constraints of the specific question it was charged to address.
A challenge for clinicians attempting to apply rigidly defined, evidence-based conclusions is that the more precisely a question is addressed, the more limited the answer’s applicability in clinical practice. Thus, Dr. Robin Dore, in this issue of the Journal, writes that she believes vitamin D and calcium supplementation has benefits beyond the primary prevention of fractures, and that these benefits are not negated by the magnitude of the potential harm (stated to be “small” by the USPSTF).
We are bombarded by clinical practice guidelines, and we don’t know which will be externally imposed as measures of quality by which our practice performance will be assessed. In the clinic, we encounter a series of individual patients with whom we make individual treatment decisions. Like the inhabitants of Lake Wobegon, few of our patients are the “average patient” derived from cross-sectional studies. Some have occult celiac disease, others are on proton pump inhibitors, some are lactose-intolerant, and some are on intermittent prednisone. For these patients, do the USPSTF guidelines warrant the extra effort and time to individually document why the guidelines don’t fit and why we made the clinical judgment not to follow them? And how many patients in the clinical studies used by the USPSTF fit into these or other unique categories and may thus have contaminated the data? I don’t see in these guidelines any recommendation on how best to assess calcium and vitamin D intake and absorption in our patients in a practical manner. After all, supplementation is in addition to the actual intake from dietary sources.
For me, further confusion stems from trying to clinically couple the logic of such carefully analyzed, accurately stated, and tightly focused guidelines with what we already know (and apparently don’t know). We know that severe vitamin D deficiency causes low bone density and fractures from osteomalacia, and the Institute of Medicine has previously stated that adequate vitamin D is beneficial and so should be supplemented.3 Vitamin D deficiency is a continuum, and adequacy is unlikely to be defined simply by the quantity of supplementation. Additionally, the USPSTF has previously published guidelines on supplementing vitamin D intake to prevent falls—falls being a major preventable cause of fractures. There seems to be some conceptual incongruence between these guidelines.
While epidemiologic studies have incorporated estimates of dietary and supplemental intake of calcium and vitamin D, what likely matters most is absorption, the achieved blood levels, and tissue incorporation. As the examples above show, many variables influence these in individual patients. Most troublesome of all, there is no agreement on the appropriate target level for circulating vitamin D. I agree with two-thirds of the task force’s conclusions—we have insufficient evidence. Are these really guidelines, or a plea for the gathering of appropriate outcome data?
- Moyer VA, on behalf of the US Preventive Services Task Force. Vitamin D and calcium supplementation to prevent fractures in adults: US Preventive Services Task Force Recommendation Statement. Ann Intern Med 2013; E-pub ahead of print. http://annals.org/article.aspx?articleid=1655858. Accessed May 13, 2013.
- US Preventive Services Task Force. www.uspreventiveservicestaskforce.org. Accessed May 13, 2013.
- Standing Committee on the Scientific Evaluation of Dietary Reference Intakes, Food and Nutrition Board, Institute of Medicine. Dietary Reference Intakes for Calcium and Vitamin D. Washington, DC: The National Academies Press, 2010.