Key 2010 publications in behavioral medicine
The effect of emotion on the heart is not confined to depression, but extends to a variety of mental states; as William Harvey described in 1628, “A mental disturbance provoking pain, excessive joy, hope or anxiety extends to the heart, where it affects its temper and rate, impairing general nutrition and vigor.”
In going beyond the well-established role of depression as a risk factor for heart disease, 2010 delivered several important publications recognizing anxiety, anger, and other forms of distress as key factors in the etiology of coronary heart disease (CHD). Other papers of merit elucidated new and overlooked insights into the pathways linking psychosocial stress and cardiovascular risk, and also considered psychologic states that appear to promote healthy functioning.
IMPACT OF NEGATIVE EMOTIONS ON RISK OF INCIDENT CORONARY HEART DISEASE
In a meta-analysis of 20 prospective studies that included 249,846 persons with a mean follow-up of 11.2 years, Roest et al1 examined the impact of anxiety, characterized by the presence of anxiety symptoms or a diagnosis of an anxiety disorder, on incident CHD. Most of the studies adjusted for a broad array of relevant potential confounders. The findings suggest that the presence of anxiety increases the risk of incident CHD by 26% (P = .003).
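For readers unfamiliar with how such pooled estimates are formed, the expression below is a generic sketch of inverse-variance (random-effects) pooling of study-level hazard ratios; it is not necessarily the exact model used by Roest et al.

\[
\log\widehat{\mathrm{HR}}_{\text{pooled}} \;=\; \frac{\sum_{i=1}^{k} w_i \,\log \mathrm{HR}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i \;=\; \frac{1}{\operatorname{SE}_i^{2} + \tau^{2}}
\]

where \(\mathrm{HR}_i\) is the hazard ratio from study \(i\), \(\operatorname{SE}_i\) is the standard error of its logarithm, and \(\tau^{2}\) is the between-study variance. A pooled estimate of \(\log\widehat{\mathrm{HR}} \approx 0.23\) corresponds to \(\widehat{\mathrm{HR}} \approx 1.26\), that is, the 26% relative increase in risk reported above.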
In a meta-analysis of 25 prospective studies of 7,160 persons with a mean follow-up exceeding 10 years, Chida and Steptoe2 found that anger increased the risk of incident CHD by 19%, after adjustment for standard coronary risk factors. The effect was less stable than that associated with anxiety and depression, and when stratified by gender, the harmful effects of anger were more evident in men than in women. The effect of anger was attenuated when controlling for behavioral covariates. The association between anger and CHD did not hold for all ways of measuring anger, which suggests that the type of anger or the ability to regulate anger may be relevant to the relationship.
A study that did account for the type of anger expression in relation to the risk of incident CHD was conducted by Davidson and Mostofsky.3 The independent effect of three distinct types of anger expression (constructive anger, destructive anger justification, and destructive anger rumination) on 10-year incident CHD was examined, controlling for other psychosocial factors. In men, higher scores for constructive anger were associated with a lower rate of CHD; in both men and women, higher scores for destructive anger justification were associated with an increased risk of CHD.
Insights gained from these studies are as follows:
- The impact of anxiety appears to be comparable to depression, and the effects of anxiety and depression are largely independent.
- If anxiety and depression co-occur, the effect on CHD is synergistic.
- The effects of anger are less clear; its impact may be independent of or dependent on other forms of psychologic distress.
- Distress in general appears to serve as a signal that something is wrong and needs to be addressed. If ignored, it may become chronic and unremitting; because symptoms of distress may lead to systemic dysregulation and increased CHD risk, they may indicate the need for increased surveillance and intervention.
WHY FOCUS ON THE BIOLOGY OF EMOTIONS?
A clear biologic explanation for the influence of emotional factors on physical health would serve to assuage skeptics who doubt that such a link exists or who attribute a common underlying genetic trait to both negative affect and heart disease. Further, focusing on the biology may help answer key questions with respect to emotions and disease processes: What is the damage incurred by negative emotional states and is it reversible? Can compensatory pathways be activated to bypass the mechanisms causing damage or slow the progression of disease?
Cardiac response to worry and stress
In one study attempting to shed light on relevant emotion-related biologic processes, the prolonged physiologic effects of worry were examined. Worry episodes and stressful events were recorded hourly along with ambulatory heart rate and heart rate variability in 73 teachers for 4 days.4 Autonomic activity, as reflected by a concurrent elevation in heart rate and a decrease in heart rate variability, was increased up to 2 hours after a worry episode. The findings also suggested that the prolonged cardiac effects of separate worry episodes were independent.
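Heart rate variability can be summarized in several ways; the Python sketch below is illustrative only (hypothetical RR-interval values, not the ambulatory indices or preprocessing used by Pieper et al4) and computes two common time-domain measures, SDNN and RMSSD.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Compute simple time-domain HRV indices from RR intervals in milliseconds.

    SDNN: standard deviation of all RR intervals (overall variability).
    RMSSD: root mean square of successive differences (short-term, vagally mediated variability).
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    mean_hr = 60000.0 / rr.mean()          # beats per minute
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}

# Hypothetical example: RR intervals (ms) at rest vs. during a worry episode
rest = [850, 870, 820, 900, 860, 880, 845, 865]
worry = [700, 705, 695, 710, 700, 698, 702, 706]   # faster, less variable heart rate
print(hrv_time_domain(rest))
print(hrv_time_domain(worry))
```

In this toy example the worry series shows a higher mean heart rate and lower SDNN and RMSSD, the same qualitative pattern described in the study.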
Another study sought to determine whether heightened reactivity or delayed recovery in response to acute stress increases the risk of cardiovascular disease.5 This meta-analysis included 36 studies assessing acute cardiovascular responses to various laboratory stressors (eg, cognitive tasks, stress interviews, public speaking) in relation to subsequent cardiovascular risk. Findings indicated that heightened cardiovascular reactivity was associated with worse cardiovascular outcomes over time, including incident hypertension, coronary calcification, increased carotid intima-media thickness, and cardiovascular events.
Role of aldosterone overlooked
WHY CONSIDER RESILIENCE?
Because the absence of a deficit is not the same as the presence of an asset, greater insight into dysfunction may be gained by explicitly considering what promotes healthy functioning. Ameliorating distress has proven difficult, so studying resilience (including the ability to regulate affect) may identify new targets for prevention and intervention. Although no meta-analysis of resilience factors has been published to date owing to the paucity of data, the studies that have been performed are generally rigorous and have demonstrated consistent findings.
For example, one prospective, well-controlled study of 1,739 men and women demonstrated a protective effect of positive affect (as ascertained by structured interview) against 10-year incident CHD.6 The risk of fatal or nonfatal ischemic heart disease events was reduced by 22% (P = .02) for each 1-point increase in positive affect, even after controlling for depression and negative emotions.
Biology of resilience: Counteracting cellular damage
Genomic changes can be induced by the relaxation response, as evidenced by the differential gene expression profiles of long-term daily practitioners of relaxation (eg, meditation, yoga), short-term (8-week) practitioners of relaxation, and healthy controls.8 Alterations in cellular metabolism, oxidative phosphorylation, and the generation of reactive oxygen species that counteract proinflammatory responses, all indicative of an adaptive response, were observed in both groups that practiced relaxation.
FUTURE DIRECTIONS
Whether and how the sources and effects of psychosocial stress and response to treatment differ between men and women deserves closer examination. A review by Low et al9 summarizes the current state of knowledge with respect to psychosocial factors and heart disease in women, noting that the sources of stress associated with increased CHD risk differ between men and women: psychosocial risk factors such as depression and anxiety appear to increase risk in both sexes, whereas work-related stress has larger effects in men and stress related to relationships and family responsibilities appears to have larger effects in women.
Although responses to psychosocial stress are not clearly different between men and women, intervention targeted at reducing distress is much less effective in reducing the risk of adverse events in women than in men. The mechanism underlying this difference in intervention effectiveness urgently requires further exploration.
In conducting this work, several factors are important. The best time to intervene to reduce psychosocial distress is unknown; a key consideration is identifying the best etiologic window for intervention. Perhaps a life-course approach that targets individuals with chronically high levels of emotional distress who also have multiple coronary risk factors, and that enhances their capacity to regulate emotions, would prove superior to waiting until late in the disease process.
Another area that may prove fruitful is to consider in more depth the biology of the placebo effect and whether and how it may inform our understanding of resilience.
More generally, considering why interventions seem to influence outcomes so differently across men and women, applying a life course approach to determine the best etiologic window for prevention and intervention strategies, and conducting a more in-depth exploration of the biology of resilience may lead to improved capacity for population-based approaches to reducing the burden of CHD.
- Roest AM, Martens E, de Jonge P, Denollet J. Anxiety and risk of incident coronary heart disease: a meta-analysis. J Am Coll Cardiol 2010; 56:38–46.
- Chida Y, Steptoe A. The association of anger and hostility with future coronary heart disease: a meta-analytic review of prospective evidence. J Am Coll Cardiol 2009; 53:936–946.
- Davidson KW, Mostofsky E. Anger expression and risk of coronary heart disease: evidence from the Nova Scotia Health Survey. Am Heart J 2010; 159:199–206.
- Pieper S, Brosschot JF, van der Leeden R, Thayer J. Prolonged cardiac effects of momentary assessed stressful events and worry episodes. Psychosom Med 2010; 72:570–577.
- Chida Y, Steptoe A. Greater cardiovascular responses to laboratory mental stress are associated with poor subsequent cardiovascular risk status: a meta-analysis of prospective evidence. Hypertension 2010; 55:1026–1032.
- Davidson KW, Mostofsky E, Whang W. Don’t worry, be happy: positive affect and reduced 10-year incident coronary heart disease: the Canadian Nova Scotia Health Survey. Eur Heart J 2010; 31:1065–1070.
- Kubzansky LD, Park N, Peterson C, Vokonas P, Sparrow D. Healthy psychological functioning and incident coronary heart disease. Arch Gen Psychiatry 2011; 68:400–408.
- Dusek JA, Otu HH, Wohlhueter AL, et al. Genomic counterstress changes induced by the relaxation response. PLoS One 2008; 3:e2576.
- Low CA, Thurston RC, Matthews KA. Psychosocial factors in the development of heart disease in women: current research and future directions. Psychosom Med 2010; 72:842–854.
Imaging for autonomic dysfunction
The autonomic nervous system (ANS), composed of the sympathetic and parasympathetic nervous systems, governs our adaptation to changing environments such as physical threats or changes in temperature. It has been difficult to elucidate this process in humans, however, because of limitations in neuroimaging caused by artifacts from cardiorespiratory sources. This article reviews structural and functional imaging that can provide insights into the ANS.
STRUCTURAL IMAGING
The two main subcortical areas of interest for imaging are the lateral hypothalamic area and the paraventricular nucleus, but visualization is difficult. The hypothalamus occupies a volume of the brain no larger than about 20 voxels; its individual substructures therefore cannot easily be resolved by conventional imaging. The larger voxel size of functional MRI (fMRI) means that the hypothalamus spans at most 1 voxel on fMRI.
Most brainstem nuclei are motor nuclei that affect autonomic responses, either sympathetic or parasympathetic. These nuclei are difficult to visualize on conventional MRI for two reasons: first, they are small, often only 1 to 2 voxels in size; second, and more important, MRI contrast between these nuclei and the surrounding parenchyma is minimal, so the structures “blend in” with the surrounding brain and are difficult to visualize individually. Examples of the major brainstem sympathetic structures are the periaqueductal gray substance, parabrachial nuclei, solitary nucleus, and hypothalamospinal tract; examples of the major brainstem parasympathetic nuclei are the dorsal nucleus of the vagus nerve and the nucleus ambiguus.
The areas of the ANS under cortical control are more integrative, with influence from higher cognitive function—for example, the panic or fear associated with public speaking. Regions of subcortical control involve the basal ganglia and hypothalamus, which regulate primitive, subconscious activity, such as “fight or flight” response, pain reaction, and fear of snakes, all of which affect multiple motor nuclei. Several specific sympathetic and parasympathetic motor nuclei directly affect heart rate and blood pressure and act as relay stations for sensory impulses that reach the cerebral cortex.
NEUROLOGIC PROCESSES AND CARDIAC EFFECTS
Multiple sclerosis (MS) is classically a disease of white matter, although it can also affect gray matter. Autonomic dysfunction is common, affecting as many as 50% of patients with MS, with symptoms that include orthostatic dizziness, bladder disturbances, temperature instability, gastrointestinal disturbances, and sweating.1–4 The effect of autonomic dysfunction on disease activity is unclear. Multiple brainstem lesions are evident on MRI and may be linked to cardiac autonomic dysfunction. The variability of MS contributes to the difficulty of using imaging to identify culprit lesions.
Stroke causes autonomic dysfunction, with the specific manifestations dependent on the region of the brain involved. In cases of right middle cerebral artery infarct affecting the right insula, an increased incidence of cardiac arrhythmias, cardiac death, and catecholamine production ensues.5–7 Medullary infarcts have been shown to produce significant autonomic dysfunction.8,9
Ictal and interictal cardiac manifestations in epilepsy often precede seizure onset.1 Common cardiac changes are ictal tachycardia or ictal bradycardia, or both, with no clear relationship to the location or type of seizure. Evidence suggests that heart rate variability changes in epilepsy result from interictal autonomic alterations, including sympathetic or parasympathetic dominance. Investigation of baroreflex responses in patients with temporal lobe epilepsy has uncovered decreased baroreflex sensitivity. There is no reliable correlation between sympathetic or parasympathetic upregulation or downregulation and brain MRI findings, however.
Autonomic dysfunction in the form of orthostatic hypotension has been documented in patients with mass effect from tumors, for example posterior fossa epidermoid tumors, wherein tumor resection results in improved autonomic function.10
FUNCTIONAL BRAIN IMAGING IN GENERAL
Direct visualization of heart-brain interactions is the goal when assessing ANS function. Positron emission tomography (PET) produces quantitative images, but spatial and temporal resolution are vastly superior with fMRI.11 Further, fMRI involves no ionizing radiation, allowing safe repeat imaging.
Ogawa et al12 first demonstrated that in vivo images of brain microvasculature are affected by blood oxygen level, and that blood oxygenation reduced vascular signal loss. Therefore, blood oxygenation level–dependent (BOLD) contrast added to MRI could complement PET-like measurements in the study of regional brain activity.
Bilateral finger tapping with intermittent periods of rest is associated with a pattern of increasing and decreasing intensity of fMRI signals in the involved brain regions that reflects the periods of activity and rest. This technique has been used to locate brain voxels with similar patterns of activity, enabling the creation of the familiar color brain maps. A challenge posed by autonomic fMRI in such brain mapping is that fMRI is susceptible to artifacts (Figure 3). For example, head movement of as little as 1 mm inside the MRI scanner, a distance comparable to the size of autonomic structures, can produce a motion artifact (false activation of brain regions) that can affect statistical significance. In addition, many ANS regions of the brain are near osseous structures (for example, the brainstem and skull base) that cause signal distortion and loss.
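As a rough illustration of how task-related activation is identified (a minimal sketch; real analyses use packages such as SPM, FSL, or AFNI with motion correction, hemodynamic modeling, and multiple-comparison control), the following correlates each voxel's time series with a boxcar regressor encoding the tapping and rest blocks. All data here are simulated.

```python
import numpy as np

# Hypothetical block design: 10 volumes of tapping, 10 of rest, repeated
n_vols = 80
block = 10
task = np.tile(np.r_[np.ones(block), np.zeros(block)], n_vols // (2 * block))

# Simulated data flattened to (n_voxels, n_vols); real data would come from a NIfTI file
rng = np.random.default_rng(0)
n_voxels = 1000
data = rng.normal(size=(n_voxels, n_vols))
data[:50] += 0.8 * task            # first 50 voxels carry a task-related signal

# Pearson correlation of every voxel's time series with the task regressor
task_z = (task - task.mean()) / task.std()
data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
r = data_z @ task_z / n_vols

# Voxels whose correlation exceeds a threshold would be colored in the activation map
active = np.where(r > 0.3)[0]
print(f"{active.size} voxels flagged as task-related")
```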
REQUIREMENTS FOR AUTONOMIC fMRI
The tasks chosen to visualize brain control of autonomic function must naturally elicit an autonomic response. The difficulty is that untrained persons have little or no volitional control over autonomic functions, so the task and its analysis must be designed carefully and must be MRI-compatible. Any motion will degrade the image; further, the potential for the MRI environment to corrupt the physiologic measurements limits the tasks that can be used.
Possible stimuli for eliciting a sympathetic response include pain, fear, anticipation, anxiety, concentration or memory, cold pressor, Stroop test, breathing tests, and maximal hand grip. Examples of parasympathetic stimuli are the Valsalva maneuver and paced breathing. The responses to stimuli (ie, heart rate, heart rate variability, blood pressure, galvanic skin response, pupillary response) must be monitored for comparison with the fMRI data. MRI-compatible equipment is now available for measuring many of these responses.
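One common way to relate the monitored physiology to the imaging data (a sketch under simplifying assumptions, not a specific published pipeline) is to resample the physiologic trace to the fMRI volume times and include it as a regressor in a per-voxel linear model; the heart-rate values and voxel signal below are hypothetical.

```python
import numpy as np

def voxelwise_beta(voxel_ts, hr_regressor):
    """Fit voxel_signal = b0 + b1 * heart_rate for one voxel; return b1."""
    X = np.column_stack([np.ones_like(hr_regressor), hr_regressor])
    beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return beta[1]

# Hypothetical heart rate (bpm), sampled once per fMRI volume during a Valsalva task
hr = np.array([62, 63, 70, 78, 85, 90, 88, 80, 72, 66, 63, 62], dtype=float)

# Simulated voxel time series that loosely tracks heart rate plus noise
rng = np.random.default_rng(1)
voxel = 100 + 0.5 * (hr - hr.mean()) + rng.normal(scale=1.0, size=hr.size)

print(f"heart-rate-related beta for this voxel: {voxelwise_beta(voxel, hr):.2f}")
```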
Identifying areas activated during tasks
Functional neuroimaging with PET and fMRI has shown consistently that the anterior cingulate is activated during multiple tasks designed to elicit an autonomic response (gambling anticipation, emotional response to faces, Stroop test).11
In a study designed to test autonomic interoceptive awareness, subjects underwent fMRI while they were asked to judge the timing of their heartbeats to auditory tones that were either synchronized with their heartbeat or delayed by 500 msec.13 Areas of enhanced activity during the task were the right insular cortex, anterior cingulate, parietal lobes, and operculum.
Characterizing brainstem sites
It is difficult to achieve visualization of areas within the brainstem that govern autonomic responses. These regions are small and motion artifacts are common because of brainstem movement with the cardiac pulse. With fMRI, Topolovec et al14 were able to characterize brainstem sites involved in autonomic control, demonstrating activation of the nucleus of the solitary tract and parabrachial nucleus.
A review of four fMRI studies of stressor-evoked blood pressure reactivity demonstrated activation in corticolimbic areas, including the cingulate cortex, insula, amygdala, and cortical and subcortical areas that are involved in hemodynamic and metabolic support for stress-related behavioral responses.16
FUNCTIONAL BRAIN IMAGING IN DISEASE STATES
There are few studies of functional brain imaging in patients with disease because of the challenges involved. The studies are difficult to perform in sick patients because of the unfriendly MRI environment, with strict requirements for attention and participation. Furthermore, autonomic responses may be blunted, making physiologic comparisons difficult. In addition, there is evidence that the BOLD response may be intrinsically impaired in disease states. Unlike fMRI studies that locate brain regions involved in simple tasks such as finger tapping, which can be performed in a single subject, detecting changes in autonomic responses in disease states requires averaging across multiple patients.
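To make the group-averaging point concrete, the sketch below (illustrative only, with simulated per-subject activation maps; actual studies use full general-linear-model pipelines and corrections for multiple comparisons) compares patient and control groups voxel by voxel with a two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_voxels = 500

# Hypothetical per-subject activation estimates (eg, beta maps): 6 patients, 16 controls
patients = rng.normal(size=(6, n_voxels))
controls = rng.normal(size=(16, n_voxels))
patients[:, :20] += 1.5            # simulate stronger responses in 20 voxels for patients

# Voxelwise two-sample t-test: where do the groups differ?
t, p = stats.ttest_ind(patients, controls, axis=0)
print(f"voxels with uncorrected p < 0.001: {(p < 0.001).sum()}")
```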
Woo et al17 used fMRI to compare regions of brain activation in six patients with heart failure and 16 controls during a forehead cold pressor challenge. Heart rate increased in the patients with heart failure with application of the cold stimulus. Larger fMRI signal responses in patients with heart failure were observed in 14 brain regions, whereas reduced fMRI activity was observed in 15 other regions. Based on these results, the investigators suggested that heart failure may be associated with altered sympathetic and parasympathetic activity, and that these dysfunctions might contribute to the progression of heart failure.
Gianaros et al18 found fMRI evidence for a correlation between carotid artery intima-media thickness, a surrogate measure for carotid artery or coronary artery disease, and altered ANS reaction to fear using a fearful faces paradigm.
CONCLUSION
Functional MRI of heart-brain interactions has strong potential in normal subjects, even though the BOLD effect is small and imaging remains subject to motion and susceptibility artifacts. Typically, such applications require averaging results over multiple subjects. Its potential utility in disease states is more limited because of the additional challenges of MRI in sick patients (the MRI environment, blunting of autonomic responses in disease, possible impairment of the BOLD response), but continued investigation is warranted.
- Sevcencu C, Struijk JJ. Autonomic alterations and cardiac changes in epilepsy. Epilepsia 2010; 51:725–737.
- Kodounis A, Stamboulis E, Constantinidis TS, Liolios A. Measurement of autonomic dysregulation in multiple sclerosis. Acta Neurol Scand 2005; 112:403–408.
- Flachenecker P, Wolf A, Krauser M, Hartung HP, Reiners K. Cardiovascular autonomic dysfunction in multiple sclerosis: correlation with orthostatic intolerance. J Neurol 1999; 246:578–586.
- Kulcu DG, Akbas B, Citci B, Cihangiroglu M. Autonomic dysreflexia in a man with multiple sclerosis. J Spinal Cord Med 2009; 32:198–203.
- Abboud H, Berroir S, Labreuche J, Orjuela K, Amarenco P. Insular involvement in brain infarction increases risk for cardiac arrhythmia and death. Ann Neurol 2006; 59:691–699.
- Tokgozoglu SL, Batur MK, Topcuoglu MA, Saribas O, Kes S, Oto A. Effects of stroke localization on cardiac autonomic balance and sudden death. Stroke 1999; 30:1301–1311.
- Strittmatter M, Meyer S, Fischer C, Georg T, Schmitz B. Location-dependent patterns in cardio-autonomic dysfunction in ischaemic stroke. Eur Neurol 2003; 50:30–38.
- Lassman AB, Mayer SA. Paroxysmal apnea and vasomotor instability following medullary infarction. Arch Neurol 2005; 62:1286–1288.
- Deluca C, Tinazzi M, Bovi P, Rizzuto N, Moretto G. Limb ataxia and proximal intracranial territory brain infarcts: clinical and topographical correlations. J Neurol Neurosurg Psychiatry 2007; 78:832–835.
- Gómez-Esteban JC, Berganzo K, Tijero B, Barcena J, Zarranz JJ. Orthostatic hypotension associated with an epidermoid tumor of the IV ventricle. J Neurol 2009; 256:1357–1359.
- Critchley HD. Neural mechanisms of autonomic, affective, and cognitive integration. J Comp Neurol 2005; 493:154–166.
- Ogawa S, Lee TM, Kay AR, Tank DW. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc Natl Acad Sci USA 1990; 87:9868–9872.
- Critchley HD. The human cortex responds to an interoceptive challenge. Proc Natl Acad Sci USA 2004; 101:6333–6334.
- Topolovec JC, Gati JS, Menon RS, Shoemaker JK, Cechetto DF. Human cardiovascular and gustatory brainstem sites observed by functional magnetic resonance imaging. J Comp Neurol 2004; 471:446–461.
- Napadow V, Dhond R, Conti G, Makris N, Brown EN, Barbieri R. Brain correlates of autonomic modulation: combining heart rate variability with fMRI. Neuroimage 2008; 42:169–177.
- Gianaros PJ, Sheu LK. A review of neuroimaging studies of stressor-evoked blood pressure reactivity: emerging evidence for a brain-body pathway to coronary heart disease risk. Neuroimage 2009; 47:922–936.
- Woo MA, Macey PM, Keens PT, et al. Functional abnormalities in brain areas that mediate autonomic nervous system control in advanced heart failure. J Card Fail 2005; 11:437–446.
- Gianaros PJ, Hariri AR, Sheu LK, et al. Preclinical atherosclerosis covaries with individual differences in reactivity and functional connectivity of the amygdala. Biol Psychiatry 2009; 65:943–950.
The autonomic nervous system (ANS), composed of the sympathetic and parasympathetic nervous systems, governs our adaptation to changing environments such as physical threats or changes in temperature. It has been difficult to elucidate this process in humans, however, because of limitations in neuroimaging caused by artifacts from cardiorespiratory sources. This article reviews structural and functional imaging that can provide insights into the ANS.
STRUCTURAL IMAGING
The two main subcortical areas of interest for imaging are the lateral hypothalamic area and the paraventricular nucleus, but visualization is difficult. The hypothalamus occupies a volumetric area of the brain no larger than 20 voxels; individual substructures of the hypothalamus therefore cannot easily be viewed by conventional imaging. The larger voxel size of functional MRI (fMRI) mean that fMRI of the hypothalamus can display 1 voxel at most.
Most brainstem nuclei are motor nuclei that affect autonomic responses, either sympathetic or parasympathetic. These nuclei are difficult to visualize on conventional MRI for two reasons: the nuclei are small, and may be the size of only 1 to 2 voxels. More important, MRI contrast between these nuclei and surrounding parenchyma is minimal because these structures “blend in” with the surrounding brain and are difficult to visualize singly. Examples of these major brainstem sympathetic nuclei are the periaqueductal gray substance, parabrachial nuclei, solitary nucleus, and the hypothalamospinal tract; examples of the major brainstem parasympathetic nuclei are the dorsal nucleus of the vagus nerve and the nucleus ambiguus.
The areas of the ANS under cortical control are more integrative, with influence from higher cognitive function—for example, the panic or fear associated with public speaking. Regions of subcortical control involve the basal ganglia and hypothalamus, which regulate primitive, subconscious activity, such as “fight or flight” response, pain reaction, and fear of snakes, all of which affect multiple motor nuclei. Several specific sympathetic and parasympathetic motor nuclei directly affect heart rate and blood pressure and act as relay stations for sensory impulses that reach the cerebral cortex.
NEUROLOGIC PROCESSES AND CARDIAC EFFECTS
MS is classically a disease of white matter, although it can also affect gray matter. Autonomic dysfunction is common, affecting as many as 50% of MS patients with symptoms that include orthostatic dizziness, bladder disturbances, temperature instability, gastrointestinal disturbances, and sweating.1–4 The effect of autonomic dysfunction on disease activity is unclear. Multiple brainstem lesions are evident on MRI, and may be linked to cardiac autonomic dysfunction. The variability of MS contributes to the difficulty of using imaging to identify culprit lesions.
Stroke causes autonomic dysfunction, with the specific manifestations dependent on the region of the brain involved. In cases of right middle cerebral artery infarct affecting the right insula, an increased incidence of cardiac arrhythmias, cardiac death, and catecholamine production ensues.5–7 Medullary infarcts have been shown to produce significant autonomic dysfunction.8,9
Ictal and interictal cardiac manifestations in epilepsy often precede seizure onset.1 Common cardiac changes are ictal tachycardia or ictal bradycardia, or both, with no clear relationship to the location or type of seizure. Evidence suggests that heart rate variability changes in epilepsy result from interictal autonomic alterations, including sympathetic or parasympathetic dominance. Investigation of baroreflex responses with temporal lobe epilepsy has uncovered decreased baroreflex sensitivity. There is no reliable correlation between sympathetic or parasympathetic upregulation or downregulation and brain MRI findings, however.
Autonomic dysfunction in the form of orthostatic hypotension has been documented in patients with mass effect from tumors, for example posterior fossa epidermoid tumors, wherein tumor resection results in improved autonomic function.10
FUNCTIONAL BRAIN IMAGING IN GENERAL
Direct visualization of heart-brain interactions is the goal when assessing ANS function. Positron emission tomography (PET) produces quantitative images, but spatial and temporal resolutions are vastly superior with fMRI.11 Further, radiation exposure is low with fMRI, allowing for safe repeat imaging.
Ogawa et al12 first demonstrated that in vivo images of brain microvasculature are affected by blood oxygen level, and that blood oxygenation reduced vascular signal loss. Therefore, blood oxygenation level–dependent (BOLD) contrast added to MRI could complement PET-like measurements in the study of regional brain activity.
Bilateral finger tapping with intermittent periods of rest is associated with a pattern of increasing and decreasing intensity of fMRI signals in involved brain regions that reflect the periods of activity and rest. This technique has been used to locate brain voxels with similar patterns of activity, enabling the creation of familiar color brain mapping. A challenge posed by autonomic fMRI in such brain mapping is that fMRI is susceptible to artifacts (Figure 3). For example, a movement of the head as little as 1 mm inside the MRI scanner—a distance comparable to the size of autonomic structures—can produce a motion artifact (false activation of brain regions) that can affect statistical significance. In addition, many ANS regions of the brain are near osseous structures (for example the brainstem and skull base) that cause signal distortion and loss.
REQUIREMENTS FOR AUTONOMIC fMRI
The tasks chosen to visualize brain control of autonomic function must naturally elicit an autonomic response. The difficulty is that untrained persons have little or no volitional control over autonomic functions, so the task and its analysis must be designed carefully and be MRI-compatible. Any motion will degrade the image; further, the capacity for the MRI environment to corrupt the measurements can limit the potential tasks for measurement.
Possible stimuli for eliciting a sympathetic response include pain, fear, anticipation, anxiety, concentration or memory, cold pressor, Stroop test, breathing tests, and maximal hand grip. Examples of parasympathetic stimuli are the Valsalva maneuver and paced breathing. The responses to stimuli (ie, heart rate, heart rate variability, blood pressure, galvanic skin response, papillary response) must be monitored to compare the data obtained from fMRI. MRI-compatible equipment is now available for measuring many of these responses.
Identifying areas activated during tasks
Functional neuroimaging with PET and fMRI has shown consistently that the anterior cingulate is activated during multiple tasks designed to elicit an autonomic response (gambling anticipation, emotional response to faces, Stroop test).11
In a study designed to test autonomic interoceptive awareness, subjects underwent fMRI while they were asked to judge the timing of their heartbeats to auditory tones that were either synchronized with their heartbeat or delayed by 500 msec.13 Areas of enhanced activity during the task were the right insular cortex, anterior cingulate, parietal lobes, and operculum.
Characterizing brainstem sites
It is difficult to achieve visualization of areas within the brainstem that govern autonomic responses. These regions are small and motion artifacts are common because of brainstem movement with the cardiac pulse. With fMRI, Topolovec et al14 were able to characterize brainstem sites involved in autonomic control, demonstrating activation of the nucleus of the solitary tract and parabrachial nucleus.
A review of four fMRI studies of stressor-evoked blood pressure reactivity demonstrated activation in corticolimbic areas, including the cingulate cortex, insula, amygdala, and cortical and subcortical areas that are involved in hemodynamic and metabolic support for stress-related behavioral responses.16
FUNCTIONAL BRAIN IMAGING IN DISEASE STATES
There are few studies of functional brain imaging in patients with disease because of the challenges involved. The studies are difficult to perform on sick patients because of the unfriendly MRI environment, with struct requirements for attention and participation. Furthermore, autonomic responses may be blunted, making physiologic comparisons difficult. In addition, there is evidence that BOLD may be intrinsically impaired in disease states. Unlike fMRI studies to locate brain regions involved in simple tasks such as finger tapping, which can be performed in a single subject, detecting changes in autonomic responses in disease states requires averaging over studies of multiple patients.
Woo et al17 used fMRI to compare brain regions of activation in six patients with heart failure and 16 controls upon a forehead cold pressor challenge. Increases in heart rate were measured in the patients with heart failure with application of the cold stimulus. Larger neural fMRI signal responses in patients with heart failure were observed in 14 brain regions, whereas reduced fMRI activity was observed in 15 other brain regions in the heart failure patients. Based on the results, the investigators suggested that heart failure may be associated with altered sympathetic and parasympathetic activity, and that these dysfunctions might contribute to the progression of heart failure.
Gianaros et al18 found fMRI evidence for a correlation between carotid artery intima-media thickness, a surrogate measure for carotid artery or coronary artery disease, and altered ANS reaction to fear using a fearful faces paradigm.
CONCLUSION
Functional MRI of heart-brain interactions has strong potential for normal subjects, in whom the BOLD effect is small, within the limits of motion and susceptibility artifacts. Typically, such applications require averaging results over multiple subjects. Its potential utility in disease states is less significant because of the additional limitations of MRI with sick patients (the MRI environment, blunting of autonomic response in disease, possible impairment of BOLD), but continued investigation is warranted.
The autonomic nervous system (ANS), composed of the sympathetic and parasympathetic nervous systems, governs our adaptation to changing environments such as physical threats or changes in temperature. It has been difficult to elucidate this process in humans, however, because of limitations in neuroimaging caused by artifacts from cardiorespiratory sources. This article reviews structural and functional imaging that can provide insights into the ANS.
STRUCTURAL IMAGING
The two main subcortical areas of interest for imaging are the lateral hypothalamic area and the paraventricular nucleus, but visualization is difficult. The hypothalamus occupies a volumetric area of the brain no larger than 20 voxels; individual substructures of the hypothalamus therefore cannot easily be viewed by conventional imaging. The larger voxel size of functional MRI (fMRI) mean that fMRI of the hypothalamus can display 1 voxel at most.
Most brainstem nuclei are motor nuclei that affect autonomic responses, either sympathetic or parasympathetic. These nuclei are difficult to visualize on conventional MRI for two reasons: the nuclei are small, and may be the size of only 1 to 2 voxels. More important, MRI contrast between these nuclei and surrounding parenchyma is minimal because these structures “blend in” with the surrounding brain and are difficult to visualize singly. Examples of these major brainstem sympathetic nuclei are the periaqueductal gray substance, parabrachial nuclei, solitary nucleus, and the hypothalamospinal tract; examples of the major brainstem parasympathetic nuclei are the dorsal nucleus of the vagus nerve and the nucleus ambiguus.
The areas of the ANS under cortical control are more integrative, with influence from higher cognitive function—for example, the panic or fear associated with public speaking. Regions of subcortical control involve the basal ganglia and hypothalamus, which regulate primitive, subconscious activity, such as “fight or flight” response, pain reaction, and fear of snakes, all of which affect multiple motor nuclei. Several specific sympathetic and parasympathetic motor nuclei directly affect heart rate and blood pressure and act as relay stations for sensory impulses that reach the cerebral cortex.
NEUROLOGIC PROCESSES AND CARDIAC EFFECTS
MS is classically a disease of white matter, although it can also affect gray matter. Autonomic dysfunction is common, affecting as many as 50% of MS patients with symptoms that include orthostatic dizziness, bladder disturbances, temperature instability, gastrointestinal disturbances, and sweating.1–4 The effect of autonomic dysfunction on disease activity is unclear. Multiple brainstem lesions are evident on MRI, and may be linked to cardiac autonomic dysfunction. The variability of MS contributes to the difficulty of using imaging to identify culprit lesions.
Stroke causes autonomic dysfunction, with the specific manifestations dependent on the region of the brain involved. In cases of right middle cerebral artery infarct affecting the right insula, an increased incidence of cardiac arrhythmias, cardiac death, and catecholamine production ensues.5–7 Medullary infarcts have been shown to produce significant autonomic dysfunction.8,9
Cardiac manifestations in epilepsy occur both ictally and interictally and often precede seizure onset.1 Common cardiac changes are ictal tachycardia or ictal bradycardia, or both, with no clear relationship to the location or type of seizure. Evidence suggests that heart rate variability changes in epilepsy result from interictal autonomic alterations, including sympathetic or parasympathetic dominance. Investigation of baroreflex responses in patients with temporal lobe epilepsy has demonstrated decreased baroreflex sensitivity. There is no reliable correlation between sympathetic or parasympathetic upregulation or downregulation and brain MRI findings, however.
Autonomic dysfunction in the form of orthostatic hypotension has been documented in patients with mass effect from tumors, for example posterior fossa epidermoid tumors, wherein tumor resection results in improved autonomic function.10
FUNCTIONAL BRAIN IMAGING IN GENERAL
Direct visualization of heart-brain interactions is the goal when assessing ANS function. Positron emission tomography (PET) produces quantitative images, but spatial and temporal resolution are far superior with fMRI.11 Further, fMRI involves no ionizing radiation, allowing safe repeat imaging.
Ogawa et al12 first demonstrated that in vivo MRI of the brain microvasculature is affected by blood oxygen level and that increased blood oxygenation reduces vascular signal loss. Blood oxygenation level–dependent (BOLD) contrast added to MRI could therefore complement PET-like measurements in the study of regional brain activity.
Bilateral finger tapping with intermittent periods of rest produces a pattern of increasing and decreasing fMRI signal intensity in the involved brain regions that reflects the alternating periods of activity and rest. This technique has been used to locate brain voxels with similar patterns of activity, enabling the creation of the familiar color brain maps. A challenge for autonomic fMRI in such brain mapping is that fMRI is susceptible to artifacts (Figure 3). For example, a head movement of as little as 1 mm inside the MRI scanner, a distance comparable to the size of autonomic structures, can produce a motion artifact (false activation of brain regions) that affects statistical significance. In addition, many ANS regions of the brain lie near osseous structures (for example, the brainstem and skull base) that cause signal distortion and loss.
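As a rough illustration of how task-locked voxels are identified in a block design, the sketch below correlates synthetic voxel time series with an on/off task regressor and thresholds the result. The data, block lengths, signal amplitude, and threshold are illustrative assumptions, not the methods of any study cited here.

```python
# Minimal block-design sketch: correlate each voxel's time series with an
# on/off task regressor (eg, finger tapping vs rest) and keep voxels whose
# correlation exceeds a threshold. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 120, 1000

# Boxcar regressor: alternating 20-scan blocks of task (1) and rest (0).
task = np.tile(np.r_[np.ones(20), np.zeros(20)], 3)

# Synthetic voxel time series: unit noise, plus a task-locked signal in a few voxels.
data = rng.normal(size=(n_scans, n_voxels))
active = [10, 11, 12]
data[:, active] += 1.5 * task[:, None]

# Pearson correlation of each voxel's time course with the task regressor.
task_c = task - task.mean()
data_c = data - data.mean(axis=0)
r = (task_c @ data_c) / (np.linalg.norm(task_c) * np.linalg.norm(data_c, axis=0))

# "Activation map": voxels whose time course tracks the task blocks.
print("voxels above |r| = 0.4:", np.flatnonzero(np.abs(r) > 0.4))
# should print the indices of the three task-locked voxels (10, 11, 12)
```

In practice a general linear model with a hemodynamically convolved regressor and motion covariates is used rather than a raw correlation, but the underlying idea, selecting voxels whose time courses track the task blocks, is the same.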
REQUIREMENTS FOR AUTONOMIC fMRI
The tasks chosen to visualize brain control of autonomic function must naturally elicit an autonomic response. The difficulty is that untrained persons have little or no volitional control over autonomic functions, so the task and its analysis must be designed carefully and must be MRI-compatible. Any motion degrades the image; further, the potential for the MRI environment to corrupt the physiologic measurements limits the tasks that can be used.
Possible stimuli for eliciting a sympathetic response include pain, fear, anticipation, anxiety, concentration or memory tasks, the cold pressor test, the Stroop test, breathing tests, and maximal handgrip. Examples of parasympathetic stimuli are the Valsalva maneuver and paced breathing. The responses to these stimuli (ie, heart rate, heart rate variability, blood pressure, galvanic skin response, pupillary response) must be monitored for comparison with the data obtained from fMRI. MRI-compatible equipment is now available for measuring many of these responses.
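As an illustration of the kind of physiologic monitoring involved, the sketch below derives heart rate and two standard time-domain heart rate variability indices (SDNN and RMSSD) from a short series of R-R intervals; the interval values are made-up numbers used only for demonstration.

```python
# Minimal sketch of heart rate and heart rate variability measures derived
# from R-R intervals recorded alongside an fMRI task. Interval values are synthetic.
import numpy as np

rr_ms = np.array([812, 790, 805, 830, 798, 815, 842, 808, 795, 820], float)

mean_hr_bpm = 60_000.0 / rr_ms.mean()           # mean heart rate (beats/min)
sdnn = rr_ms.std(ddof=1)                        # overall variability (ms)
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # beat-to-beat variability (ms)

print(f"heart rate ~ {mean_hr_bpm:.1f} bpm, SDNN ~ {sdnn:.1f} ms, RMSSD ~ {rmssd:.1f} ms")
```

Time series such as these can then be aligned with the fMRI acquisition so that autonomic responses can be related to regional brain activity.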
Identifying areas activated during tasks
Functional neuroimaging with PET and fMRI has shown consistently that the anterior cingulate is activated during multiple tasks designed to elicit an autonomic response (gambling anticipation, emotional response to faces, Stroop test).11
In a study designed to test autonomic interoceptive awareness, subjects underwent fMRI while judging the timing of their heartbeats relative to auditory tones that were either synchronized with the heartbeat or delayed by 500 msec.13 Areas of enhanced activity during the task were the right insular cortex, anterior cingulate, parietal lobes, and operculum.
Characterizing brainstem sites
It is difficult to achieve visualization of areas within the brainstem that govern autonomic responses. These regions are small and motion artifacts are common because of brainstem movement with the cardiac pulse. With fMRI, Topolovec et al14 were able to characterize brainstem sites involved in autonomic control, demonstrating activation of the nucleus of the solitary tract and parabrachial nucleus.
A review of four fMRI studies of stressor-evoked blood pressure reactivity demonstrated activation in corticolimbic areas, including the cingulate cortex, insula, amygdala, and cortical and subcortical areas that are involved in hemodynamic and metabolic support for stress-related behavioral responses.16
FUNCTIONAL BRAIN IMAGING IN DISEASE STATES
There are few studies of functional brain imaging in patients with disease because of the challenges involved. Such studies are difficult to perform in sick patients because of the unfriendly MRI environment, with its strict requirements for attention and participation. Furthermore, autonomic responses may be blunted, making physiologic comparisons difficult, and there is evidence that BOLD contrast itself may be impaired in disease states. Unlike fMRI studies that locate brain regions involved in simple tasks such as finger tapping, which can be performed in a single subject, detecting changes in autonomic responses in disease states requires averaging over studies of multiple patients.
Woo et al17 used fMRI to compare brain activation in six patients with heart failure and 16 controls during a forehead cold pressor challenge. Application of the cold stimulus increased heart rate in the patients with heart failure. Compared with controls, the patients with heart failure showed larger fMRI signal responses in 14 brain regions and reduced responses in 15 other regions. Based on these results, the investigators suggested that heart failure may be associated with altered sympathetic and parasympathetic activity and that these dysfunctions might contribute to the progression of heart failure.
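The group-level comparison underlying such findings can be sketched as follows: per-subject response amplitudes in a given region are compared between patients and controls, here with a simple two-sample t-test on synthetic numbers. This illustrates the general approach, not the analysis pipeline of the cited study.

```python
# Minimal sketch of a group-level comparison of regional fMRI responses
# between a patient group and controls. All numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-subject response amplitudes (arbitrary units) in one region.
heart_failure = rng.normal(loc=1.2, scale=0.4, size=6)    # 6 patients
controls      = rng.normal(loc=0.7, scale=0.4, size=16)   # 16 controls

t, p = stats.ttest_ind(heart_failure, controls, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}  (larger response in heart failure if t > 0)")
```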
Gianaros et al18 found fMRI evidence for a correlation between carotid artery intima-media thickness, a surrogate measure for carotid artery or coronary artery disease, and altered ANS reaction to fear using a fearful faces paradigm.
CONCLUSION
Functional MRI of heart-brain interactions has strong potential in normal subjects, within the limits imposed by the small BOLD effect and by motion and susceptibility artifacts. Typically, such applications require averaging results over multiple subjects. Its potential utility in disease states is more limited because of the additional constraints of MRI in sick patients (the MRI environment, blunting of autonomic responses in disease, possible impairment of BOLD), but continued investigation is warranted.
- Sevcencu C, Struijk JJ. Autonomic alterations and cardiac changes in epilepsy. Epilepsia 2010; 51:725–737.
- Kodounis A, Stamboulis E, Constantinidis TS, Liolios A. Measurement of autonomic dysregulation in multiple sclerosis. Acta Neurol Scand 2005; 112:403–408.
- Flachenecker P, Wolf A, Krauser M, Hartung HP, Reiners K. Cardiovascular autonomic dysfunction in multiple sclerosis: correlation with orthostatic intolerance. J Neurol 1999; 246:578–586.
- Kulcu DG, Akbas B, Citci B, Cihangiroglu M. Autonomic dysreflexia in a man with multiple sclerosis. J Spinal Cord Med 2009; 32:198–203.
- Abboud H, Berroir S, Labreuche J, Orjuele K, Amarenco O. Insular involvement in brain infarction increases risk for cardiac arrhythmia and death. Ann Neurol 2006; 59:691–699.
- Tokgozoglu SL, Batur MK, Topcuoglu MA, Saribas O, Kes S, Oto A. Effects of stroke localization on cardiac autonomic balance and sudden death. Stroke 1999; 30:1301–1311.
- Strittmatter M, Meyer S, Fischer C, Georg T, Schmitz B. Location-dependent patterns in cardio-autonomic dysfunction in ischaemic stroke. Eur Neurol 2003; 50:30–38.
- Lassman AB, Mayer SA. Paroxysmal apnea and vasomotor instability following medullary infarction. Arch Neurol 2005; 62:1286–1288.
- Deluca C, Tinazzi M, Bovi P, Rizzuto N, Moretto G. Limb ataxia and proximal intracranial territory brain infarcts: clinical and topographical correlations. J Neurol Neurosurg Psychiatry 2007; 78:832–835.
- Gómez-Esteban JC, Berganzo K, Tijero B, Barcena J, Zarranz JJ. Orthostatic hypotension associated with an epidermoid tumor of the IV ventricle. J Neurol 2009; 256:1357–1359.
- Critchley HD. Neural mechanisms of autonomic, affective, and cognitive integration. J Comp Neurol 2005; 493:154–166.
- Ogawa S, Lee TM, Kay AR, Tank DW. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc Natl Acad Sci USA 1990; 87:9868–9872.
- Critchley HD. The human cortex responds to an interoceptive challenge. Proc Natl Acad Sci USA 2004; 101:6333–6334.
- Topolovec JC, Gati JS, Menon RS, Shoemaker JK, Cechetto DF. Human cardiovascular and gustatory brainstem sites observed by functional magnetic resonance imaging. J Comp Neurol 2004; 471:446–461.
- Napadow V, Dhond R, Conti G, Makris N, Brown EN, Barbieri R. Brain correlates of autonomic modulation: combining heart rate variability with fMRI. Neuroimage 2008; 42:169–177.
- Gianaros PJ, Sheu LK. A review of neuroimaging studies of stressor-evoked blood pressure reactivity: emerging evidence for a brain-body pathway to coronary heart disease risk. Neuroimage 2009; 47:922–936.
- Woo MA, Macey PM, Keens PT, et al Functional abnormalities in brain areas that mediate autonomic nervous system control in advanced heart failure. J Card Fail 2005; 11:437–446.
- Gianaros PJ, Hariri AR, Sheu LK, et al Preclinical atherosclerosis covaries with individual differences in reactivity and functional connectivity of the amygdala. Biol Psychiatry 2009; 65:943–950.
Neurohormonal control of heart failure
We have known for more than 100 years that heart failure is characterized by excessive sympathetic nervous system (SNS) activity. Thanks to refinement of this concept in the 1980s and 1990s, we now have a good understanding of SNS activity in both experimental and clinical heart failure. During those two decades, we also realized the pathophysiologic importance of the renin-angiotensin-aldosterone system (RAAS) in patients with heart failure.1 By 2000, it was obvious that heart failure was inextricably intertwined with excessive neurohormonal activity.2,3 This understanding of the pathophysiology of heart failure took on greater importance with the ability to pharmacologically block these neurohormonal systems, thereby demonstrating the detrimental role of neurohormones in the onset and progression of heart failure.
This article is a brief historical and personal description of the study of neurohormonal control mechanisms as they relate to the clinical syndrome of heart failure. The article includes a personal account of how the story unfolded in the cardiology research laboratories at the University of Minnesota.
THE EARLY YEARS: NEUROHORMONAL HYPOTHESIS
A hypothesis emerged gradually in the 1980s suggesting that progression of heart failure was in part a product of excessive SNS and RAAS activity. Many believed that pharmacologic inhibition of these systems might mitigate progressive cardiac remodeling and thereby reduce symptoms and extend life—the so-called neurohormonal hypothesis.4 SNS blockers and RAAS blockers are now widely used in tandem as first-line therapy for patients with heart failure,5–11 but in 1980 we were just beginning to consider their therapeutic effects.
This major shift in thinking about neurohormonal systems and heart failure did not come about quickly. Early success was driven by the ability to quickly and precisely measure neurohormones in the laboratory coupled with the availability of drugs specifically designed to block the SNS and RAAS. It was also critically important to embrace the power of randomized controlled trials to test new therapies. Investigators, research nurses, and patients from many medical centers and laboratories should be credited with this astonishing success. I am proud to have been a part of this activity at the University of Minnesota.
THE COHN LABORATORY
Early work done in the 1960s by numerous investigators noted that the failing left ventricle (LV) was exquisitely sensitive to afterload conditions.12–15 John Ross and Eugene Braunwald explored this observation in patients in 1964.15 Jay Cohn, with his unique background in hypertension and hemodynamics, brought the concept back into the laboratory in the early 1970s, where he explored the mechanisms responsible for increased sensitivity to afterload in patients with heart failure.16
I had the good fortune to join Cohn’s laboratory in 1979, when this avenue of heart failure research was in full bloom. A team of investigators was gradually assembled that included Maria Teresa Olivari, who relocated from the Cardiovascular Research Institute in Milan, Italy, directed by Maurizio D. Guazzi. Also joining the group were T. Barry Levine from the University of Michigan, Ann Arbor; Steven Goldsmith from Ohio State University, Columbus; Susan Ziesche from the Minneapolis Veterans Affairs (VA) Medical Center; Thomas Rector, an expert statistician and pharmacologist at the University of Minnesota; and many research fellows, visitors, students, biochemists, statisticians, and research nurses. Joseph Franciosa joined the University of Minnesota group in 1974 and, after completing several important trials, left in 1979 to lead the cardiology group at the Philadelphia VA Medical Center.
The Cohn group developed a working hypothesis that activation of the SNS and RAAS in heart failure was most likely an adaptive mechanism intended for short-term circulatory support, such as in the setting of blood loss, dehydration, shock, volume depletion, or flight response. In patients with heart failure, according to the hypothesis, the SNS and RAAS activity persisted beyond that needed for adaptation, with chronic release of norepinephrine (NE), renin, angiotensin II, aldosterone, and other neurohormones. The neurohormones ultimately became “maladaptive.” Thanks to the assaying skills of Ada Simon, we had the early advantage of precise and rapid radioenzyme measurement of plasma norepinephrine and renin activity in the blood of patients and animals.
We believed that neurohormonal activation contributed in part to the excessive afterload conditions observed in heart failure. We also thought that excessive neurohormonal activation directly impaired cardiac systolic function. The obvious next step was to explore whether neurohormonal antagonists would improve myocardial performance.
Under the leadership of Steven Goldsmith, many studies were performed to investigate reflex control mechanisms and their pathogenic role in patients with heart failure. The accumulating data suggested that persistent, excessive neurohormonal activity was characteristic of heart failure and that it was associated with a poor prognosis.17 The precise mechanism that drives activation of the SNS remained elusive, however, and is poorly defined even today. In that era, when β-adrenergic blockers were believed to be contraindicated, we inhibited the central SNS with bromocriptine, clonidine, and guanfacine, with modestly favorable responses. We also inhibited circulating arginine vasopressin with an antibody (thanks to Prof. Alan Cowley), noting an acute favorable response.
THE PHARMACOLOGIC ERA
The 1980s and 1990s saw the availability of several pharmacologic tools for assessing the roles of the SNS and RAAS in heart failure. The hypotensive effects of angiotensin-converting enzyme (ACE) inhibitors and, later, angiotensin-receptor blockers (ARBs) were sources of concern, since many patients with advanced heart failure had low- to normal-range blood pressures before they received RAAS blockers. However, our group as well as others observed that abrupt blood pressure reduction occurred primarily in patients with very hyperreninemic responses to intravenous diuretics (ie, volume-depleted patients). Eventually, we learned that low baseline blood pressure did not adversely affect outcomes when vasodilators were used in patients with heart failure,18,19 leading us to titrate these drugs upward over days to weeks.
Several different combinations of vasodilators were used successfully to treat heart failure, including hydralazine, isosorbide dinitrate,20 ACE inhibitors,21,22 and ARBs.8,23–28 Direct-acting calcium channel blocking vasodilators, such as amlodipine, did not improve survival in patients with systolic heart failure, although they appeared to be safe in this setting.29 The aldosterone receptor blockers spironolactone30 and eplerenone31 were later demonstrated to improve survival of patients with advanced systolic heart failure when added to vasodilator therapy.
By the end of the 1990s, it was evident that drugs that blocked the SNS and RAAS were not just vasodilators or “afterload reducers,” similar to α-blockers, hydralazine, nitrates, and amlodipine. Neurohormonal blockers were doing something profoundly beneficial not observed with more direct-acting vasodilators.32–37 Simple afterload reduction was not enough in patients with systolic heart failure.
Neurohormonal antagonists were acting more directly on the myocardium. They were preventing the progression of LV remodeling and, in some cases, promoting reverse remodeling, thus improving myocardial function and favorably influencing the natural history of heart failure.31–39 We were astonished to discover that the failing, dilated heart could revert to normal size in response to neurohormone blockade with ACE inhibitors and β-adrenergic blockers; these findings were soon reported by other laboratories as well.
Contrary to our concept of heart failure in the 1970s, we now understood that the heart has inherent plasticity. It can dilate in response to abnormal loading conditions or myocardial injury, and it can restore itself to normal size when neurohormones are blocked and perverse loading conditions are improved. This reversal can occur spontaneously if an offending agent such as chronic alcohol use or inflammation is removed, but it is likely facilitated by SNS and RAAS blockers.
THE REMODELING ERA
Ken McDonald joined the University of Minnesota lab in 1989 as a research fellow. His skill in conducting both animal and clinical mechanistic studies was pivotal to our achieving our research goals. The inspired animal work by Boston-based Marc and Janice Pfeffer revealed the significance of the LV remodeling concept in the development of heart failure36: ventricular remodeling was a hallmark of systolic heart failure, and pharmacologic inhibition of LV remodeling by blocking neurohormones had profound clinical implications.
Under the direction of Wenda Carlyle, a molecular biology laboratory was established at the University of Minnesota whose work was dedicated solely to exploration of remodeling at a very basic level. Alan Hirsch was recruited from Victor Dzau’s laboratory at Brigham and Women’s Hospital in Boston to extend our efforts to understand the molecular basis of cardiac remodeling. Ken McDonald guided the use of magnetic resonance imaging to study remodeling in dogs.
The late 1970s saw the initiation and eventual execution of several important clinical trials, including the Vasodilator Heart Failure Trials (V-HeFT I and V-HeFT II)40,41 under our leadership, and the Studies of Left Ventricular Dysfunction (SOLVD)5,6 under the leadership of Salim Yusuf and others at the National Heart, Lung, and Blood Institute (NHLBI). Many neurohormonal and remodeling substudies sprang from these large clinical trials. Spencer Kubo joined our group from the Medical College of Cornell University in the mid-1980s, and he immediately demonstrated his prowess in clinical research. He also recruited Alan Bank to study the endothelium in both experimental and human heart failure.
Integrating the molecular, animal, and clinical laboratories allowed us to pursue many mechanistic studies. Laboratory meetings, often held on Saturday mornings, generated ideas for program projects that were subsequently funded by NHLBI. Birthday parties and other social events with laboratory staff and their families were part of our fabric. Late-night trips to the Post Office to send off abstracts for national meetings before the midnight deadline were a regular feature.
Our coordination of and participation in the large clinical trials allowed us to meet frequently in Bethesda with colleagues from other major centers, fostering many collaborations and friendships that continue to thrive. Susan Ziesche deserves much of the credit for coordinating many groups that were part of these large, complex trials. Cheryl Yano, our administrator, also played a key role. All National Institutes of Health (NIH) grants passed through Cheryl, and she worked tirelessly to ensure that the proposals were in the best possible shape before we submitted them. Inder Anand joined our group in the early 1990s and became a major analytical force. Jay Cohn was the intellectual leader of the group, as well as our soul and inspiration. People worked hard for him, and he taught us much in a setting that valued creativity and new ideas above all.
THE LATER YEARS
By 1997, the face of heart failure had changed. New treatments were effective, but there were new challenges to face. I moved that year to the Cleveland Clinic, where I spent 11 enjoyable and productive years. I returned to Minnesota in 2008 to help build a new cardiovascular division.
It is gratifying to look back and see what has become of the “neurohormonal hypothesis.” Today, nearly all major medical centers have heart failure programs, and certification in advanced heart failure/heart transplantation is a reality. Training programs in advanced heart failure and heart transplant are common. The Heart Failure Society of America sprang up in the early 1990s, dedicated to patients with heart failure. Jay Cohn founded the Journal of Cardiac Failure, which flourished under his leadership. Neurohormonal blockers are now considered standard, conventional therapy and are widely used throughout the world.
CONCLUSIONS
Still, there is much work to do. An increasing number of devices are being developed, largely for patients with more advanced heart failure, but attention is also being directed to prevention of heart failure. Identification and possible treatment of patients at risk for the development of heart failure, and identification of those who already have some early structural and functional perturbation without advanced symptoms, are critically important. Since event rates are so low in these patients, we need to create new strategies for studying interventions. In the long term, the best treatment for nearly any condition is early diagnosis and perhaps early treatment with a goal of prevention.
One consequence of our progress over the years may be that heart failure now primarily affects a more elderly group—patients who often have many associated comorbidities. The consequences include more frequent readmissions, large numbers of patients with intractable signs and symptoms, and the emergence of difficult end-of-life decisions. If we could truly prevent heart failure rather than forestall its emergence to a later point in life, perhaps we could do more good.
For me, the study of neurohormonal mechanisms in the setting of heart failure was the centerpiece of my early career. Jay Cohn had asked several of us early in our laboratory experience to choose a neurohormonal system and learn about it in great depth and detail. My assignment was the SNS. Since then, I have never tired of learning about its control mechanisms, how it achieves circulatory homeostasis, how its excess quantities can be directly toxic to the heart, and the variety of pharmacologic ways that we can control it. I am indeed fortunate to have been part of this amazing study group.
- Dzau VJ, Colucci WS, Hollenberg NK, Williams GH. Relation of the renin-angiotensin-aldosterone system to clinical state in congestive heart failure. Circulation 1981; 63:645–651.
- Francis GS, Goldsmith SR, Levine TB, Olivari MT, Cohn JN. The neurohumoral axis in congestive heart failure. Ann Intern Med 1984; 101:370–377.
- Levine TB, Francis GS, Goldsmith SR, Simon AB, Cohn JN. Activity of the sympathetic nervous system and renin-angiotensin system assessed by plasma hormone levels and their relation to hemodynamic abnormalities in congestive heart failure. Am J Cardiol 1982; 49:1659–1666.
- Packer M. The neurohormonal hypothesis: a theory to explain the mechanism of disease progression in heart failure. J Am Coll Cardiol 1992; 20:248–254.
- The SOLVD Investigators. Effect of enalapril on survival in patients with reduced left ventricular ejection fraction and congestive heart failure. N Engl J Med 1991; 325:293–302.
- The SOLVD Investigators. Effect of enalapril on mortality and the development of heart failure in asymptomatic patients with reduced left ventricular ejection fractions. N Engl J Med 1992; 327:685–691.
- Pitt B, Zannad F, Remme WJ, et al The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med 1999; 341:709–717.
- ONTARGET Investigators. Telmisartan, ramipril, or both in patients at high risk for vascular events. N Engl J Med 2008; 358:1547–1559.
- CIBIS Investigators and Committees. A randomized trial of β-blockade in heart failure: the Cardiac Insufficiency Bisoprolol Study (CIBIS). Circulation 1994; 90:1765–1773.
- Hjalmarson A, Goldstein S, Fagerberg B, et al Effects of controlled-release metoprolol on total mortality, hospitalizations, and well-being in patients with heart failure: the Metoprolol CR/XL Randomized Intervention Trial in congestive heart failure (MERIT-HF). JAMA 2000; 283:1295–1302.
- Packer M, Fowler MB, Roecker EB, et al Effect of carvedilol on the morbidity of patients with severe chronic heart failure: results of the carvedilol prospective randomized cumulative survival (COPERNICUS) study. Circulation 2002; 106:2194–2199.
- Imperial ES, Levy MN, Zieske H. Outflow resistance as an independent determinant of cardiac performance. Circ Res 1961; 9:1148–1155.
- Sonnenblick EH, Downing SE. Afterload as a primary determinant of ventricular performance. Am J Physiol 1963; 204:604–610.
- Wilcken DE, Charlier AA, Hoffman JI. Effects of alterations in aortic impedance on the performance of the ventricles. Circ Res 1964; 14:283–293.
- Ross J, Braunwald E. The study of left ventricular function in man by increasing resistance to ventricular ejection with angiotensin. Circulation 1964; 29:739–749.
- Cohn JN. Blood pressure and cardiac performance. Am J Med 1973; 55:351–361.
- Cohn JN, Levine TB, Olivari MT, et al Plasma norepinephrine as a guide to prognosis in patients with chronic congestive heart failure. N Engl J Med 1984; 311:819–823.
- Anand IS, Tam SW, Rector TS, et al Influence of blood pressure on the effectiveness of a fixed-dose combination of isosorbide dinitrate and hydralazine in the African-American Heart Failure Trial. J Am Coll Cardiol 2007; 49:32–39.
- Rouleau JL, Roecker EB, Tendra M, et al Influence of pretreatment systolic blood pressure on the effect of carvedilol in patients with severe chronic heart failure: the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) study. J Am Coll Cardiol 2004; 43:1423–1429.
- Taylor AL, Ziesche S, Yancy C, et al Combination of isosorbide dinitrate and hydralazine in blacks with heart failure. N Engl J Med 2004; 351:2049–2057.
- Captopril Multicenter Research Group. A placebo-controlled trial of captopril in refractory chronic congestive heart failure. J Am Coll Cardiol 1983; 2:755–763.
- Pfeffer MA, Braunwald E, Moyé LA, et al Effect of captopril on mortality and morbidity in patients with left ventricular dysfunction after myocardial infarction: results of the survival and ventricular enlargement trial—the SAVE Investigators. N Engl J Med 1992; 327:669–677.
- Curtiss C, Cohn JN, Vrobel T, Franciosa J. Role of the renin-angiotensin system in the systemic vasoconstriction of chronic congestive heart failure. Circulation 1978; 58:763–770.
- Cohn JN, Tognoni G. A randomized trial of the angiotensin-receptor blocker valsartan in chronic heart failure. N Engl J Med 2001; 345:1667–1675.
- Young JB, Dunlap ME, Pfeffer MA, et al Mortality and morbidity reduction with candesartan in patients with chronic heart failure and left ventricular systolic dysfunction: results of the CHARM low-left ventricular ejection fraction trials. Circulation 2004; 110:2618–2626.
- Pfeffer MA, McMurray JJ, Velazquez EJ, et al Valsartan, captopril, or both in myocardial infarction complicated by heart failure, left ventricular dysfunction, or both. N Engl J Med 2003; 349:1893–1906.
- ONTARGET Investigators. Telmisartan, ramipril, or both in patients at high risk for vascular events. N Engl J Med 2008; 358:1547–1559.
- Konstam MA, Neaton JD, Dickstein K, et al Effects of high-dose versus low-dose losartan on clinical outcomes in patients with heart failure (HEAAL study): a randomized, double-blind trial. Lancet 2009; 374:1840–1848.
- Packer M. Prospective randomized amlodipine survival evaluation 2. Presented at: 49th American College of Cardiology meeting; March 2000; Anaheim, CA.
- Pitt B, Zannad F, Remme WJ, et al The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med 1999; 341:709–717.
- Pitt B, Remme W, Zannad F, et al Eplerenone, a selective aldosterone blocker in patients with left ventricular dysfunction after myocardial infarction. N Engl J Med 2003; 348:1309–1321.
- Cohn JN. Structural basis for heart failure: ventricular remodeling and its pharmacological inhibition. Circulation 1995; 91:2504–2507.
- Cohn JN, Ferrari R, Sharpe N. Cardiac remodeling—concepts and clinical implications: a consensus paper from an international forum on cardiac remodeling. J Am Coll Cardiol 2000; 35:569–581.
- Konstam MA, Kronenberg MW, Rousseau MF, et al Effects of the angiotensin converting enzyme inhibitor enalapril on the long-term progression of left ventricular dilation in patients with asymptomatic systolic dysfunction. Circulation 1993; 88:2277–2283.
- Greenberg B, Quinones MA, Koilpillai C, et al Effects of long-term enalapril therapy on cardiac structure and function in patients with left ventricular dysfunction: results of the SOLVD echocardiography substudy. Circulation 1995; 91:2573–2581.
- Pfeffer JM, Pfeffer MA, Braunwald E. Influence of chronic captopril therapy on the infarcted left ventricle of the rat. Circ Res 1985; 57:84–95.
- Cohn JN. Structural basis for heart failure: ventricular remodeling and its pharmacological inhibition. Circulation 1995; 91:2504–2507.
- McDonald KM, Garr M, Carlyle PF, et al Relative effects of α1-adrenoceptor blockade, converting enzyme inhibitor therapy, and angiotensin II sub-type 1 receptor blockade on ventricular remodeling in the dog. Circulation 1994; 90:3034–3046.
- Pfeffer MA, Braunwald E. Ventricular remodeling after myocardial infarction. Experimental observations and clinical implications. Circulation 1990; 81:1161–1172.
- Cohn JN, Archibald DG, Ziesche S, et al Effect of vasodilator therapy on mortality in chronic congestive heart failure. N Engl J Med 1986; 314:1547–1552.
- Cohn JN, Johnson G, Ziesche S, et al A comparison of enalapril with hydralazine–isosorbide dinitrate in the treatment of chronic congestive heart failure. N Engl J Med 1991; 325:303–310.
We have known for more than 100 years that heart failure is characterized by excessive sympathetic nervous system (SNS) activity. Thanks to refinement of this concept in the 1980s and 1990s, we now have a good understanding of SNS activity in both experimental and clinical heart failure. During those two decades, we also realized the pathophysiologic importance of the renin-angiotensin-aldosterone system (RAAS) in patients with heart failure.1 By 2000, it was obvious that heart failure was inextricably intertwined with excessive neurohormonal activity.2,3 This understanding of the pathophysiology of heart failure took on greater importance with the ability to pharmacologically block these neurohormonal systems, thereby demonstrating the detrimental role of neurohormones in the onset and progression of heart failure.
This article is a brief historical and personal description of the study of neurohormonal control mechanisms as they relate to the clinical syndrome of heart failure. The article includes a personal account of how the story unfolded in the cardiology research laboratories at the University of Minnesota.
THE EARLY YEARS: NEUROHORMONAL HYPOTHESIS
A hypothesis emerged gradually in the 1980s suggesting that progression of heart failure was in part a product of excessive SNS and RAAS activity. Many believed that pharmacologic inhibition of these systems might mitigate against progressive cardiac remodeling and thereby reduce symptoms and extend life—the so called neurohormonal hypothesis.4 SNS blockers and RAAS blockers are now widely used in tandem as first-line therapy to treat patients with heart failure,5–11 but in 1980 we were just beginning to consider their therapeutic effects.
This major shift in thinking about neurohormonal systems and heart failure did not come about quickly. Early success was driven by the ability to quickly and precisely measure neurohormones in the laboratory coupled with the availability of drugs specifically designed to block the SNS and RAAS. It was also critically important to embrace the power of randomized controlled trials to test new therapies. Investigators, research nurses, and patients from many medical centers and laboratories should be credited with this astonishing success. I am proud to have been a part of this activity at the University of Minnesota.
THE COHN LABORATORY
Early work done in the 1960s by numerous investigators noted that the failing left ventricle (LV) was exquisitely sensitive to afterload conditions.12–15 John Ross and Eugene Braunwald explored this observation in patients in 1964.15 Jay Cohn, with his unique background in hypertension and hemodynamics, brought the concept back into the laboratory in the early 1970s, where he explored the mechanisms responsible for increased sensitivity to afterload in patients with heart failure.16
I had the good fortune to join Cohn’s laboratory in 1979, when this avenue of heart failure research was in full bloom. A team of investigators was gradually assembled that included Maria Teresa Olivari, who relocated from the Cardiovascular Research Institute in Milan, Italy, directed by Maurizio D. Guazzi. Also joining the group were T. Barry Levine from the University of Michigan, Ann Arbor; Steven Goldsmith from Ohio State University, Columbus; Susan Ziesche from the Minneapolis Veterans Affairs (VA) Medical Center; Thomas Rector, an expert statistician and pharmacologist at the University of Minnesota; and many research fellows, visitors, students, biochemists, statisticians, and research nurses. Joseph Franciosa joined the University of Minnesota group in 1974 and, after completing several important trials, left in 1979 to lead the cardiology group at the Philadelphia VA Medical Center.
The Cohn group developed a working hypothesis that activation of the SNS and RAAS in heart failure was most likely an adaptive mechanism intended for short-term circulatory support, such as in the setting of blood loss, dehydration, shock, volume depletion, or flight response. In patients with heart failure, according to the hypothesis, the SNS and RAAS activity persisted beyond that needed for adaptation, with chronic release of norepinephrine (NE), renin, angiotensin II, aldosterone, and other neurohormones. The neurohormones ultimately became “maladaptive.” Thanks to the assaying skills of Ada Simon, we had the early advantage of precise and rapid radioenzyme measurement of plasma norepinephrine and renin activity in the blood of patients and animals.
We believed that neurohormonal activation contributed in part to the excessive afterload conditions observed in heart failure. We also thought that excessive neurohormonal activation directly impaired cardiac systolic function. The obvious next step was to explore whether neurohormonal antagonists would improve myocardial performance.
Under the leadership of Steven Goldsmith, many studies were performed to investigate reflex control mechanisms and their pathogenic role in patients with heart failure. The accumulating data suggested that persistent, excessive neurohormonal activity was characteristic of heart failure and that it was associated with a poor prognosis.17 The precise mechanism that drives activation of the SNS remained elusive, however, and is poorly defined even today. In that era, when β-adrenergic blockers were believed to be contraindicated, we inhibited the central SNS with bromocriptine, clonidine, and guanfacine with modestly favorable responses. We inhibited circulating arginine vasopressin antibody (thanks to Prof. Alan Cowley for noting an acute favorable response).
THE PHARMACOLOGIC ERA
The 1980s and 1990s saw the availability of several pharmacologic tools for assessing the roles of the SNS and RAAS in heart failure. The hypotensive effects of angiotensin-converting enzyme (ACE) inhibitors and, later, angiotensin-receptor blockers (ARBs) were sources of concern, since many patients with advanced heart failure had low- to normal-range blood pressures before they received RAAS blockers. However, our group as well as others observed that abrupt blood pressure reduction occurred primarily in patients with very hyperreninemic responses to intravenous diuretics (ie, volume-depleted patients). Eventually, we learned that low baseline blood pressure did not adversely affect outcomes when vasodilators were used in patients with heart failure,18,19 leading us to titrate these drugs upward over days to weeks.
Several different combinations of vasodilators were used successfully to treat heart failure, including hydralazine, isosorbide dinitrate,20 ACE inhibitors,21,22 and ARBs.8,23–28 Direct-acting calcium channel blocking vasodilators, such as amlodipine, did not improve survival in patients with systolic heart failure, although they appeared to be safe in this setting.29 The aldosterone receptor blockers spironolactone30 and eplerenone31 were later demonstrated to improve survival of patients with advanced systolic heart failure when added to vasodilator therapy.
By the end of the 1990s, it was evident that drugs that blocked the SNS and RAAS were not just vasodilators or “afterload reducers,” similar to α-blockers, hydralazine, nitrates, and amlodipine. Neurohormonal blockers were doing something profoundly beneficial not observed with more direct-acting vasodilators.32–37 Simple afterload reduction was not enough in patients with systolic heart failure.
Neurohormonal antagonists were acting more directly on the myocardium. They were preventing the progression of LV remodeling and, in some cases, promoting reverse remodeling, thus improving myocardial function and favorably influencing the natural history of heart failure.31–39 We were astonished to discover that the failing, dilated heart could revert to normal size in response to neurohormone blockade with ACE inhibitors and β-adrenergic blockers; these findings were soon reported by other laboratories as well.
Contrary to our concept of heart failure in the 1970s, we now understood that the heart has inherent plasticity. It can dilate in response to abnormal loading conditions or myocardial injury, and it can restore itself to normal size when neurohormones are blocked and perverse loading conditions are improved. This reversal can occur spontaneously if an offending agent such as chronic alcohol use or inflammation is removed, but it is likely facilitated by SNS and RAAS blockers.
THE REMODELING ERA
Ken McDonald joined the University of Minnesota lab in 1989 as a research fellow. His skill in conducting both animal and clinical mechanistic studies was pivotal to our achieving our research goals. The inspired animal work by Boston-based Marc and Janice Pfeffer revealed the significance of the LV remodeling concept in the development of heart failure36: ventricular remodeling was a hallmark of systolic heart failure, and pharmacologic inhibition of LV remodeling by blocking neurohormones had profound clinical implications.
Under the direction of Wenda Carlyle, a molecular biology laboratory was established at the University of Minnesota whose work was dedicated solely to exploration of remodeling at a very basic level. Alan Hirsch was recruited from Victor Dzau’s laboratory at Brigham and Women’s Hospital in Boston to extend our efforts to understand the molecular basis of cardiac remodeling. Ken McDonald guided the use of magnetic resonance imaging to study remodeling in dogs.
The late 1970s saw the initiation and eventual execution of several important clinical trials, including the Vasodilator Heart Failure Trials (V-HeFT I and V-HeFT II)40,41 under our leadership, and Studies of Left Ventricular Dysfunction (SOLVD)5,6 under the leadership of Salim Yusuf and others at the National Heart Lung and Blood Institute (NHLBI). Many neuro hormonal and remodeling substudies sprang from these large clinical trials. Spencer Kubo joined our group from the Medical College of Cornell University in the mid-1980s, and he immediately demonstrated his prowess in clinical research. He also recruited Alan Bank to study the endothelium in both experimental and human heart failure.
Integrating the molecular, animal, and clinical laboratories allowed us to pursue many mechanistic studies. Laboratory meetings, often held on Saturday mornings, generated ideas for program projects that were subsequently funded by NHLBI. Birthday parties and other social events with laboratory staff and their families were part of our fabric. Late-night trips to the Post Office to send off abstracts for national meetings before the midnight deadline were a regular feature.
Our coordination of and participation in the large clinical trials allowed us to meet frequently in Bethesda with colleagues from other major centers, fostering many collaborations and friendships that continue to thrive. Susan Ziesche deserves much of the credit for coordinating many groups that were part of these large, complex trials. Cheryl Yano, our administrator, also played a key role. All National Institutes of Health (NIH) grants passed through Cheryl, and she worked tirelessly to ensure that the proposals were in the best possible shape before we submitted them. Inder Anand joined our group in the early 1990s and became a major analytical force. Jay Cohn was the intellectual leader of the group, as well as our soul and inspiration. People worked hard for him, and he taught us much in a setting that valued creativity and new ideas above all.
THE LATER YEARS
By 1997, the face of heart failure had changed. New treatments were effective, but there were new challenges to face. I moved that year to the Cleveland Clinic, where I spent 11 enjoyable and productive years. I returned to Minnesota in 2008 to help build a new cardiovascular division.
It is gratifying to look back and see what has become of the “neurohormonal hypothesis.” Today, nearly all major medical centers have heart failure programs, and certification in advanced heart failure/heart transplantation is a reality. Training programs in advanced heart failure and heart transplant are common. The Heart Failure Society of America sprang up in the early 1990s, dedicated to patients with heart failure. Jay Cohn founded the Journal of Cardiac Failure, which flourished under his leadership. Neurohormonal blockers are now considered standard, conventional therapy and are widely used throughout the world.
CONCLUSIONS
Still, there is much work to do. An increasing number of devices are being developed, largely for patients with more advanced heart failure, but attention is also being directed to prevention of heart failure. Identification and possible treatment of patients at risk for the development of heart failure, and identification of those who already have some early structural and functional perturbation without advanced symptoms, are critically important. Since event rates are so low in these patients, we need to create new strategies for studying interventions. In the long term, the best treatment for nearly any condition is early diagnosis and perhaps early treatment with a goal of prevention.
One consequence of our progress over the years may be that heart failure now primarily affects a more elderly group—patients who often have many associated comorbidities. The consequences include more frequent readmissions, large numbers of patients with intractable signs and symptoms, and the emergence of difficult end-of-life decisions. If we could truly prevent heart failure rather than forestall its emergence to a later point in life, perhaps we could do more good.
For me, the study of neurohormonal mechanisms in the setting of heart failure was the centerpiece of my early career. Jay Cohn had asked several of us early in our laboratory experience to choose a neurohormonal system and learn about it in great depth and detail. My assignment was the SNS. Since then, I have never tired of learning about its control mechanisms, how it achieves circulatory homeostasis, how its excess quantities can be directly toxic to the heart, and the variety of pharmacologic ways that we can control it. I am indeed fortunate to have been part of this amazing study group.
We have known for more than 100 years that heart failure is characterized by excessive sympathetic nervous system (SNS) activity. Thanks to refinement of this concept in the 1980s and 1990s, we now have a good understanding of SNS activity in both experimental and clinical heart failure. During those two decades, we also realized the pathophysiologic importance of the renin-angiotensin-aldosterone system (RAAS) in patients with heart failure.1 By 2000, it was obvious that heart failure was inextricably intertwined with excessive neurohormonal activity.2,3 This understanding of the pathophysiology of heart failure took on greater importance with the ability to pharmacologically block these neurohormonal systems, thereby demonstrating the detrimental role of neurohormones in the onset and progression of heart failure.
This article is a brief historical and personal description of the study of neurohormonal control mechanisms as they relate to the clinical syndrome of heart failure. The article includes a personal account of how the story unfolded in the cardiology research laboratories at the University of Minnesota.
THE EARLY YEARS: NEUROHORMONAL HYPOTHESIS
A hypothesis emerged gradually in the 1980s suggesting that progression of heart failure was in part a product of excessive SNS and RAAS activity. Many believed that pharmacologic inhibition of these systems might mitigate against progressive cardiac remodeling and thereby reduce symptoms and extend life—the so called neurohormonal hypothesis.4 SNS blockers and RAAS blockers are now widely used in tandem as first-line therapy to treat patients with heart failure,5–11 but in 1980 we were just beginning to consider their therapeutic effects.
This major shift in thinking about neurohormonal systems and heart failure did not come about quickly. Early success was driven by the ability to quickly and precisely measure neurohormones in the laboratory coupled with the availability of drugs specifically designed to block the SNS and RAAS. It was also critically important to embrace the power of randomized controlled trials to test new therapies. Investigators, research nurses, and patients from many medical centers and laboratories should be credited with this astonishing success. I am proud to have been a part of this activity at the University of Minnesota.
THE COHN LABORATORY
Early work done in the 1960s by numerous investigators noted that the failing left ventricle (LV) was exquisitely sensitive to afterload conditions.12–15 John Ross and Eugene Braunwald explored this observation in patients in 1964.15 Jay Cohn, with his unique background in hypertension and hemodynamics, brought the concept back into the laboratory in the early 1970s, where he explored the mechanisms responsible for increased sensitivity to afterload in patients with heart failure.16
I had the good fortune to join Cohn’s laboratory in 1979, when this avenue of heart failure research was in full bloom. A team of investigators was gradually assembled that included Maria Teresa Olivari, who relocated from the Cardiovascular Research Institute in Milan, Italy, directed by Maurizio D. Guazzi. Also joining the group were T. Barry Levine from the University of Michigan, Ann Arbor; Steven Goldsmith from Ohio State University, Columbus; Susan Ziesche from the Minneapolis Veterans Affairs (VA) Medical Center; Thomas Rector, an expert statistician and pharmacologist at the University of Minnesota; and many research fellows, visitors, students, biochemists, statisticians, and research nurses. Joseph Franciosa joined the University of Minnesota group in 1974 and, after completing several important trials, left in 1979 to lead the cardiology group at the Philadelphia VA Medical Center.
The Cohn group developed a working hypothesis that activation of the SNS and RAAS in heart failure was most likely an adaptive mechanism intended for short-term circulatory support, such as in the setting of blood loss, dehydration, shock, volume depletion, or flight response. In patients with heart failure, according to the hypothesis, the SNS and RAAS activity persisted beyond that needed for adaptation, with chronic release of norepinephrine (NE), renin, angiotensin II, aldosterone, and other neurohormones. The neurohormones ultimately became “maladaptive.” Thanks to the assaying skills of Ada Simon, we had the early advantage of precise and rapid radioenzyme measurement of plasma norepinephrine and renin activity in the blood of patients and animals.
We believed that neurohormonal activation contributed in part to the excessive afterload conditions observed in heart failure. We also thought that excessive neurohormonal activation directly impaired cardiac systolic function. The obvious next step was to explore whether neurohormonal antagonists would improve myocardial performance.
Under the leadership of Steven Goldsmith, many studies were performed to investigate reflex control mechanisms and their pathogenic role in patients with heart failure. The accumulating data suggested that persistent, excessive neurohormonal activity was characteristic of heart failure and that it was associated with a poor prognosis.17 The precise mechanism that drives activation of the SNS remained elusive, however, and is poorly defined even today. In that era, when β-adrenergic blockers were believed to be contraindicated, we inhibited the central SNS with bromocriptine, clonidine, and guanfacine with modestly favorable responses. We inhibited circulating arginine vasopressin antibody (thanks to Prof. Alan Cowley for noting an acute favorable response).
THE PHARMACOLOGIC ERA
The 1980s and 1990s saw the availability of several pharmacologic tools for assessing the roles of the SNS and RAAS in heart failure. The hypotensive effects of angiotensin-converting enzyme (ACE) inhibitors and, later, angiotensin-receptor blockers (ARBs) were sources of concern, since many patients with advanced heart failure had low- to normal-range blood pressures before they received RAAS blockers. However, our group as well as others observed that abrupt blood pressure reduction occurred primarily in patients with very hyperreninemic responses to intravenous diuretics (ie, volume-depleted patients). Eventually, we learned that low baseline blood pressure did not adversely affect outcomes when vasodilators were used in patients with heart failure,18,19 leading us to titrate these drugs upward over days to weeks.
Several different combinations of vasodilators were used successfully to treat heart failure, including hydralazine, isosorbide dinitrate,20 ACE inhibitors,21,22 and ARBs.8,23–28 Direct-acting calcium channel blocking vasodilators, such as amlodipine, did not improve survival in patients with systolic heart failure, although they appeared to be safe in this setting.29 The aldosterone receptor blockers spironolactone30 and eplerenone31 were later demonstrated to improve survival of patients with advanced systolic heart failure when added to vasodilator therapy.
By the end of the 1990s, it was evident that drugs that blocked the SNS and RAAS were not just vasodilators or “afterload reducers,” similar to α-blockers, hydralazine, nitrates, and amlodipine. Neurohormonal blockers were doing something profoundly beneficial not observed with more direct-acting vasodilators.32–37 Simple afterload reduction was not enough in patients with systolic heart failure.
Neurohormonal antagonists were acting more directly on the myocardium. They were preventing the progression of LV remodeling and, in some cases, promoting reverse remodeling, thus improving myocardial function and favorably influencing the natural history of heart failure.31–39 We were astonished to discover that the failing, dilated heart could revert to normal size in response to neurohormone blockade with ACE inhibitors and β-adrenergic blockers; these findings were soon reported by other laboratories as well.
Contrary to our concept of heart failure in the 1970s, we now understood that the heart has inherent plasticity. It can dilate in response to abnormal loading conditions or myocardial injury, and it can restore itself to normal size when neurohormones are blocked and perverse loading conditions are improved. This reversal can occur spontaneously if an offending agent such as chronic alcohol use or inflammation is removed, but it is likely facilitated by SNS and RAAS blockers.
THE REMODELING ERA
Ken McDonald joined the University of Minnesota lab in 1989 as a research fellow. His skill in conducting both animal and clinical mechanistic studies was pivotal to our achieving our research goals. The inspired animal work by Boston-based Marc and Janice Pfeffer revealed the significance of the LV remodeling concept in the development of heart failure36: ventricular remodeling was a hallmark of systolic heart failure, and pharmacologic inhibition of LV remodeling by blocking neurohormones had profound clinical implications.
Under the direction of Wenda Carlyle, a molecular biology laboratory was established at the University of Minnesota whose work was dedicated solely to exploration of remodeling at a very basic level. Alan Hirsch was recruited from Victor Dzau’s laboratory at Brigham and Women’s Hospital in Boston to extend our efforts to understand the molecular basis of cardiac remodeling. Ken McDonald guided the use of magnetic resonance imaging to study remodeling in dogs.
The late 1970s saw the initiation and eventual execution of several important clinical trials, including the Vasodilator Heart Failure Trials (V-HeFT I and V-HeFT II)40,41 under our leadership, and Studies of Left Ventricular Dysfunction (SOLVD)5,6 under the leadership of Salim Yusuf and others at the National Heart, Lung, and Blood Institute (NHLBI). Many neurohormonal and remodeling substudies sprang from these large clinical trials. Spencer Kubo joined our group from the Medical College of Cornell University in the mid-1980s, and he immediately demonstrated his prowess in clinical research. He also recruited Alan Bank to study the endothelium in both experimental and human heart failure.
Integrating the molecular, animal, and clinical laboratories allowed us to pursue many mechanistic studies. Laboratory meetings, often held on Saturday mornings, generated ideas for program projects that were subsequently funded by NHLBI. Birthday parties and other social events with laboratory staff and their families were part of our fabric. Late-night trips to the Post Office to send off abstracts for national meetings before the midnight deadline were a regular feature.
Our coordination of and participation in the large clinical trials allowed us to meet frequently in Bethesda with colleagues from other major centers, fostering many collaborations and friendships that continue to thrive. Susan Ziesche deserves much of the credit for coordinating many groups that were part of these large, complex trials. Cheryl Yano, our administrator, also played a key role. All National Institutes of Health (NIH) grants passed through Cheryl, and she worked tirelessly to ensure that the proposals were in the best possible shape before we submitted them. Inder Anand joined our group in the early 1990s and became a major analytical force. Jay Cohn was the intellectual leader of the group, as well as our soul and inspiration. People worked hard for him, and he taught us much in a setting that valued creativity and new ideas above all.
THE LATER YEARS
By 1997, the face of heart failure had changed. New treatments were effective, but there were new challenges to face. I moved that year to the Cleveland Clinic, where I spent 11 enjoyable and productive years. I returned to Minnesota in 2008 to help build a new cardiovascular division.
It is gratifying to look back and see what has become of the “neurohormonal hypothesis.” Today, nearly all major medical centers have heart failure programs, and certification in advanced heart failure/heart transplantation is a reality. Training programs in advanced heart failure and heart transplant are common. The Heart Failure Society of America sprang up in the early 1990s, dedicated to patients with heart failure. Jay Cohn founded the Journal of Cardiac Failure, which flourished under his leadership. Neurohormonal blockers are now considered standard, conventional therapy and are widely used throughout the world.
CONCLUSIONS
Still, there is much work to do. An increasing number of devices are being developed, largely for patients with more advanced heart failure, but attention is also being directed to prevention of heart failure. Identification and possible treatment of patients at risk for the development of heart failure, and identification of those who already have some early structural and functional perturbation without advanced symptoms, are critically important. Since event rates are so low in these patients, we need to create new strategies for studying interventions. In the long term, the best treatment for nearly any condition is early diagnosis and perhaps early treatment with a goal of prevention.
One consequence of our progress over the years may be that heart failure now primarily affects a more elderly group—patients who often have many associated comorbidities. The consequences include more frequent readmissions, large numbers of patients with intractable signs and symptoms, and the emergence of difficult end-of-life decisions. If we could truly prevent heart failure rather than forestall its emergence to a later point in life, perhaps we could do more good.
For me, the study of neurohormonal mechanisms in the setting of heart failure was the centerpiece of my early career. Jay Cohn had asked several of us early in our laboratory experience to choose a neurohormonal system and learn about it in great depth and detail. My assignment was the SNS. Since then, I have never tired of learning about its control mechanisms, how it achieves circulatory homeostasis, how its excess quantities can be directly toxic to the heart, and the variety of pharmacologic ways that we can control it. I am indeed fortunate to have been part of this amazing study group.
- Dzau VJ, Colucci WS, Hollenberg NK, Williams GH. Relation of the renin-angiotensin-aldosterone system to clinical state in congestive heart failure. Circulation 1981; 63:645–651.
- Francis GS, Goldsmith SR, Levine TB, Olivari MT, Cohn JN. The neurohumoral axis in congestive heart failure. Ann Intern Med 1984; 101:370–377.
- Levine TB, Francis GS, Goldsmith SR, Simon AB, Cohn JN. Activity of the sympathetic nervous system and renin-angiotensin system assessed by plasma hormone levels and their relation to hemodynamic abnormalities in congestive heart failure. Am J Cardiol 1982; 49:1659–1666.
- Packer M. The neurohormonal hypothesis: a theory to explain the mechanism of disease progression in heart failure. J Am Coll Cardiol 1992; 20:248–254.
- The SOLVD Investigators. Effect of enalapril on survival in patients with reduced left ventricular ejection fraction and congestive heart failure. N Engl J Med 1991; 325:293–302.
- The SOLVD Investigators. Effect of enalapril on mortality and the development of heart failure in asymptomatic patients with reduced left ventricular ejection fractions. N Engl J Med 1992; 327:685–691.
- Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med 1999; 341:709–717.
- ONTARGET Investigators. Telmisartan, ramipril, or both in patients at high risk for vascular events. N Engl J Med 2008; 358:1547–1559.
- CIBIS Investigators and Committees. A randomized trial of β-blockade in heart failure: the Cardiac Insufficiency Bisoprolol Study (CIBIS). Circulation 1994; 90:1765–1773.
- Hjalmarson A, Goldstein S, Fagerberg B, et al. Effects of controlled-release metoprolol on total mortality, hospitalizations, and well-being in patients with heart failure: the Metoprolol CR/XL Randomized Intervention Trial in congestive heart failure (MERIT-HF). JAMA 2000; 283:1295–1302.
- Packer M, Fowler MB, Roecker EB, et al. Effect of carvedilol on the morbidity of patients with severe chronic heart failure: results of the carvedilol prospective randomized cumulative survival (COPERNICUS) study. Circulation 2002; 106:2194–2199.
- Imperial ES, Levy MN, Zieske H. Outflow resistance as an independent determinant of cardiac performance. Circ Res 1961; 9:1148–1155.
- Sonnenblick EH, Downing SE. Afterload as a primary determinant of ventricular performance. Am J Physiol 1963; 204:604–610.
- Wilcken DE, Charlier AA, Hoffman JI. Effects of alterations in aortic impedance on the performance of the ventricles. Circ Res 1964; 14:283–293.
- Ross J, Braunwald E. The study of left ventricular function in man by increasing resistance to ventricular ejection with angiotensin. Circulation 1964; 29:739–749.
- Cohn JN. Blood pressure and cardiac performance. Am J Med 1973; 55:351–361.
- Cohn JN, Levine TB, Olivari MT, et al. Plasma norepinephrine as a guide to prognosis in patients with chronic congestive heart failure. N Engl J Med 1984; 311:819–823.
- Anand IS, Tam SW, Rector TS, et al. Influence of blood pressure on the effectiveness of a fixed-dose combination of isosorbide dinitrate and hydralazine in the African-American Heart Failure Trial. J Am Coll Cardiol 2007; 49:32–39.
- Rouleau JL, Roecker EB, Tendera M, et al. Influence of pretreatment systolic blood pressure on the effect of carvedilol in patients with severe chronic heart failure: the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) study. J Am Coll Cardiol 2004; 43:1423–1429.
- Taylor AL, Ziesche S, Yancy C, et al. Combination of isosorbide dinitrate and hydralazine in blacks with heart failure. N Engl J Med 2004; 351:2049–2057.
- Captopril Multicenter Research Group. A placebo-controlled trial of captopril in refractory chronic congestive heart failure. J Am Coll Cardiol 1983; 2:755–763.
- Pfeffer MA, Braunwald E, Moyé LA, et al. Effect of captopril on mortality and morbidity in patients with left ventricular dysfunction after myocardial infarction: results of the survival and ventricular enlargement trial—the SAVE Investigators. N Engl J Med 1992; 327:669–677.
- Curtiss C, Cohn JN, Vrobel T, Franciosa J. Role of the renin-angiotensin system in the systemic vasoconstriction of chronic congestive heart failure. Circulation 1978; 58:763–770.
- Cohn JN, Tognoni G. A randomized trial of the angiotensin-receptor blocker valsartan in chronic heart failure. N Engl J Med 2001; 345:1667–1675.
- Young JB, Dunlap ME, Pfeffer MA, et al. Mortality and morbidity reduction with candesartan in patients with chronic heart failure and left ventricular systolic dysfunction: results of the CHARM low-left ventricular ejection fraction trials. Circulation 2004; 110:2618–2626.
- Pfeffer MA, McMurray JJ, Velazquez EJ, et al. Valsartan, captopril, or both in myocardial infarction complicated by heart failure, left ventricular dysfunction, or both. N Engl J Med 2003; 349:1893–1906.
- ONTARGET Investigators. Telmisartan, ramipril, or both in patients at high risk for vascular events. N Engl J Med 2008; 358:1547–1559.
- Konstam MA, Neaton JD, Dickstein K, et al. Effects of high-dose versus low-dose losartan on clinical outcomes in patients with heart failure (HEAAL study): a randomized, double-blind trial. Lancet 2009; 374:1840–1848.
- Packer M. Prospective randomized amlodipine survival evaluation 2. Presented at: 49th American College of Cardiology meeting; March 2000; Anaheim, CA.
- Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med 1999; 341:709–717.
- Pitt B, Remme W, Zannad F, et al. Eplerenone, a selective aldosterone blocker, in patients with left ventricular dysfunction after myocardial infarction. N Engl J Med 2003; 348:1309–1321.
- Cohn JN. Structural basis for heart failure: ventricular remodeling and its pharmacological inhibition. Circulation 1995; 91:2504–2507.
- Cohn JN, Ferrari R, Sharpe N. Cardiac remodeling—concepts and clinical implications: a consensus paper from an international forum on cardiac remodeling. J Am Coll Cardiol 2000; 35:569–581.
- Konstam MA, Kronenberg MW, Rousseau MF, et al. Effects of the angiotensin converting enzyme inhibitor enalapril on the long-term progression of left ventricular dilation in patients with asymptomatic systolic dysfunction. Circulation 1993; 88:2277–2283.
- Greenberg B, Quinones MA, Koilpillai C, et al. Effects of long-term enalapril therapy on cardiac structure and function in patients with left ventricular dysfunction: results of the SOLVD echocardiography substudy. Circulation 1995; 91:2573–2581.
- Pfeffer JM, Pfeffer MA, Braunwald E. Influence of chronic captopril therapy on the infarcted left ventricle of the rat. Circ Res 1985; 57:84–95.
- Cohn JN. Structural basis for heart failure: ventricular remodeling and its pharmacological inhibition. Circulation 1995; 91:2504–2507.
- McDonald KM, Garr M, Carlyle PF, et al. Relative effects of α1-adrenoceptor blockade, converting enzyme inhibitor therapy, and angiotensin II sub-type 1 receptor blockade on ventricular remodeling in the dog. Circulation 1994; 90:3034–3046.
- Pfeffer MA, Braunwald E. Ventricular remodeling after myocardial infarction. Experimental observations and clinical implications. Circulation 1990; 81:1161–1172.
- Cohn JN, Archibald DG, Ziesche S, et al. Effect of vasodilator therapy on mortality in chronic congestive heart failure. N Engl J Med 1986; 314:1547–1552.
- Cohn JN, Johnson G, Ziesche S, et al. A comparison of enalapril with hydralazine–isosorbide dinitrate in the treatment of chronic congestive heart failure. N Engl J Med 1991; 325:303–310.
Small renal masses: Toward more rational treatment
Opinion about treatment of small renal masses has changed considerably in the past 2 decades.
Traditionally, the most common treatment was surgical removal of the whole kidney, ie, radical nephrectomy. However, recent studies have shown that many patients who undergo radical nephrectomy develop chronic kidney disease. Furthermore, radical nephrectomy often constitutes over-treatment, as many of these lesions are benign or, if malignant, would follow an indolent course if left alone.
Now that we better understand the biology of small renal masses and are more aware of the morbidity and mortality related to chronic kidney disease, we try to avoid radical nephrectomy whenever possible, favoring nephron-sparing approaches instead.
In this article, we review the current clinical management of small renal masses.
SMALL RENAL MASSES ARE A HETEROGENEOUS GROUP
Small renal masses are defined as solid renal tumors that enhance on computed tomography (CT) and magnetic resonance imaging (MRI) and are suspected of being renal cell carcinomas. They are generally low-stage and relatively small (< 4 cm in diameter) at presentation. Most are now discovered incidentally on CT or MRI done for various abdominal symptoms. From 20,000 to 30,000 new cases are diagnosed each year in the United States, and the rate is increasing by 3% to 4% per year as the use of CT and MRI increases.1,2
With more small renal masses being detected incidentally, renal cell carcinoma has been going through a stage and size migration—ie, more of these tumors are being discovered in clinical stage T1 (ie, confined to the kidney and measuring less than 7 cm) than in the past. Currently, clinical T1 renal tumors account for 48% to 66% of cases.3
This indicates that the disease is being detected and treated earlier in its course than in the past. However, cancer-specific deaths from renal cell carcinoma have not declined, suggesting that for many of these patients, our traditional practice of aggressive surgical management with radical nephrectomy may not be warranted.4
Small renal masses vary in biologic aggressiveness
Recent large surgical series indicate that up to 20% of small renal masses are benign, 55% to 60% are indolent renal cell carcinomas, and only 20% to 25% have potentially aggressive features, defined by high nuclear grade or locally invasive characteristics.5–7
A relatively strong predictor of the aggressiveness of renal tumors is their size, which directly correlates with the risk of malignant pathology. Of lesions smaller than 1.0 cm, 38% to 46% are benign, dramatically decreasing to 6.3% to 7.1% for lesions larger than 7.0 cm.5 Each 1.0-cm increase in tumor diameter correlates with a 16% increase in the risk of malignancy.8
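To see how the per-centimeter estimate compounds across the size range discussed here, the short sketch below treats the reported 16% figure as a multiplicative increase in the odds of malignancy per additional centimeter of diameter; the baseline probability is a placeholder chosen purely for illustration, not a value from the cited series.

```python
# A minimal sketch, assuming the reported "16% per cm" behaves like an odds
# ratio of 1.16 per additional centimeter of diameter. The baseline
# probability for a 1-cm lesion is a hypothetical placeholder, not a value
# taken from the cited studies. Illustration only.

def malignancy_probability(diameter_cm: float,
                           baseline_prob: float = 0.55,  # assumed, for illustration
                           or_per_cm: float = 1.16,
                           reference_cm: float = 1.0) -> float:
    """Scale the odds of malignancy by 1.16 for each centimeter above 1 cm."""
    odds = baseline_prob / (1 - baseline_prob)
    odds *= or_per_cm ** (diameter_cm - reference_cm)
    return odds / (1 + odds)

for size in (1.0, 2.0, 4.0, 7.0):
    print(f"{size:.1f} cm -> estimated probability of malignancy: "
          f"{malignancy_probability(size):.0%}")
```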
Our knowledge of the natural history of small renal masses is limited, being based on small, retrospective series. In these studies, when small renal masses were followed over time, relatively few progressed (ie, metastasized), and there have been no documented reports of disease progression in the absence of demonstrable tumor growth, suggesting a predominance of nonaggressive phenotypes.9
In light of these observations, patients with small renal masses should be carefully evaluated to determine if they are candidates for active surveillance as opposed to more aggressive treatment, ie, surgery or thermal ablation.
CT AND MRI ARE THE PREFERRED DIAGNOSTIC STUDIES
In the past, most patients with renal tumors presented with gross hematuria, flank pain, or a palpable abdominal mass. These presentations are now uncommon, as most cases are asymptomatic and are diagnosed incidentally. In a series of 349 small renal masses, microhematuria was found in only 8 cases.10
Systemic manifestations or paraneoplastic syndromes such as hypercalcemia or hypertension are more common in patients with metastatic renal cell carcinoma than in those with localized tumors. It was because of these varied clinical presentations that renal cell carcinoma was previously known as the “internist’s tumor”; however, small renal masses are better termed the “radiologist’s tumor.”11
High-quality axial imaging with CT or MRI is preferred for evaluating renal cortical neoplasms. Enhancement on CT or MRI is the characteristic finding of a renal lesion that should be suspected of being renal cell carcinoma (Figure 1). Triple-phase CT is ideal, with images taken before contrast is given, immediately after contrast (the early vascular phase), and later (the delayed phase). Alternatively, MRI can be used in patients who are allergic to intravenous contrast or who have moderate renal dysfunction.
Renal tumors with enhancement of more than 15 Hounsfield units (HU) on CT imaging are considered suggestive of renal cell carcinoma, whereas those with less than 10 HU of enhancement are more likely to be benign. Enhancement in the range of 10 to 15 HU is considered equivocal.
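For readers who prefer the cutoffs in one place, the sketch below simply encodes the enhancement thresholds just described; it illustrates the published rule of thumb and is not a diagnostic tool, and the example values are hypothetical.

```python
# Minimal sketch encoding the enhancement thresholds described above:
# > 15 HU is considered suggestive of renal cell carcinoma, < 10 HU is more
# likely benign, and 10-15 HU is equivocal. Illustration only.

def classify_enhancement(delta_hu: float) -> str:
    """Classify a renal lesion by its post-contrast change in Hounsfield units."""
    if delta_hu > 15:
        return "suggestive of renal cell carcinoma"
    if delta_hu < 10:
        return "more likely benign"
    return "equivocal"

for change in (5, 12, 30):  # hypothetical example values
    print(f"{change} HU of enhancement: {classify_enhancement(change)}")
```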
Differential diagnosis
By far, most small renal masses are renal cell carcinomas. However, other possibilities include oncocytoma, atypical or fat-poor angiomyolipoma, metanephric adenoma, urothelial carcinoma, metastatic lesions, lymphoma, renal abscess or infarction, mixed epithelial or stromal tumor, pseudotumor, and vascular malformations.
With rare exceptions, dense fat within a renal mass reliably indicates benign angiomyolipoma, and all renal tumors should be reviewed carefully for this feature. Beyond this, no clinical or radiologic feature ensures that a small renal mass is benign.
Imaging’s inability to accurately classify these enhancing renal lesions has led to a renewed interest in renal mass sampling to aid in the evaluation of small renal masses.
RENAL MASS SAMPLING: SAFER, MORE ACCURATE THAN THOUGHT
Renal mass sampling (ie, biopsy) has traditionally had a restricted role in the management of small renal masses, limited specifically to patients with a clinical history suggesting renal lymphoma, carcinoma that had metastasized to the kidney, or primary renal abscess. However, this may be changing, with more interest in it as a way to subtype and stratify select patients with small renal masses, especially potential candidates for active surveillance.
Our thinking about renal mass sampling has changed substantially over the last 2 decades. Previously, it was not routinely performed, because of concern over high false-negative rates (commonly quoted as being as high as 18%) and its potential associated morbidity. A common perception was that a negative biopsy could not be trusted and, therefore, renal mass sampling would not ultimately change patient management. However, many of these false-negative results were actually “noninformative,” ie, cases in which the renal tumor could not be adequately sampled or the pathologist lacked a sufficient specimen to allow for a definitive diagnosis.
Recent evidence suggests that these concerns were exaggerated and that renal mass sampling is more accurate and safer than previously thought. A meta-analysis of studies done before 2001 found that the diagnostic accuracy of renal mass sampling averaged 82%, whereas contemporary series indicate that its accuracy in differentiating benign from malignant tumors is actually greater than 95%.12 In addition, false-negative rates are now consistently less than 1%.13
Furthermore, serious complications requiring clinical intervention or hospitalization occur in fewer than 1% of cases. Seeding of the needle tract with tumor cells, which was another concern, is also exceedingly rare for these small, well-circumscribed renal masses.12
Overall morbidity is low with renal mass sampling, which is routinely performed as an outpatient procedure using CT or ultrasonographic guidance and local anesthesia.
However, 10% of biopsy results are still noninformative. In this situation, the biopsy can be repeated, the mass can be surgically excised, or the patient can be managed conservatively if he or she is unfit for or unwilling to undergo surgery.
The encouraging results with renal mass sampling have led to greater use of it at many centers in the evaluation and risk-stratification of patients with small renal masses. It may be especially useful in patients considering several treatment options.
For example, a 75-year-old patient with modest comorbidities and a 2.0-cm enhancing renal mass could be a candidate for partial nephrectomy, thermal ablation, or active surveillance, and a reasonable argument could be made for each of these options. Renal mass sampling in this instance could be instrumental in guiding this decision, as a tissue diagnosis of high-grade renal cell carcinoma would favor partial nephrectomy, whereas a diagnosis of “oncocytic neoplasm” would support a more conservative approach.
Older, frail patients with significant comorbidities who are unlikely to be candidates for aggressive surgical management would not need renal mass sampling, as they will ultimately be managed with active surveillance or thermal ablation.
Recent studies have also indicated that molecular profiling through gene expression analysis or proteomic analysis can further improve the accuracy of renal mass sampling.14 This will likely be the holy grail for this field, allowing for truly rational management (Figure 2).
TREATMENT OPTIONS
Radical nephrectomy: Still the most common treatment
In the past, complete removal of the kidney was standard for nearly all renal masses suspected of being renal cell carcinomas. Partial nephrectomy was generally reserved for patients who had a solitary kidney, bilateral tumors, or preexisting chronic kidney disease.
Although the two procedures provide equivalent oncologic outcomes for clinical T1 lesions, Miller et al15 reported that, before 2001, only 20% of small renal masses in the United States were managed with partial nephrectomy. That percentage has increased modestly, but radical nephrectomy still predominates.
One explanation for why the radical procedure is done more frequently is that partial nephrectomy is more technically difficult, as it involves renal reconstruction. Furthermore, radical nephrectomy can almost always be performed via a minimally invasive approach, which is inherently appealing to patients and surgeons alike. Laparoscopic radical nephrectomy has been called “the great seductress” because of these considerations.16 However, total removal of the kidney comes at a great price—loss of renal function.
Over the last decade, various studies have highlighted the association between radical nephrectomy and the subsequent clinical onset of chronic kidney disease, and the potential correlations between chronic kidney disease and cardiovascular events and elevated mortality rates.17
In a landmark study, Huang et al18 evaluated the outcomes of 662 patients who had small renal masses, a “normal” serum creatinine concentration (≤ 124 μmol/L [1.4 mg/dL]), and a normal-appearing contralateral kidney who underwent radical or partial nephrectomy. Of these, 26% were found to have preexisting stage 3 chronic kidney disease (glomerular filtration rate < 60 mL/min/1.73 m2 as calculated using the Modification of Diet in Renal Disease equation). Additionally, 65% of patients treated with radical nephrectomy were found to have stage 3 chronic kidney disease after surgery vs 20% of patients managed with partial nephrectomy.
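As a reminder of where the stage 3 threshold comes from, the sketch below applies one common form of the four-variable MDRD study equation (the IDMS-traceable version with the constant 175) and flags estimates below 60 mL/min/1.73 m2. The patient values are hypothetical, and clinical laboratories report calibrated estimates; the point is simply that a “normal” creatinine can correspond to stage 3 chronic kidney disease in an older patient.

```python
# A minimal sketch of the abbreviated (four-variable) MDRD study equation,
# IDMS-traceable form: eGFR = 175 x Scr^-1.154 x age^-0.203 x 0.742 (if
# female) x 1.212 (if black), in mL/min/1.73 m^2. Estimates below 60
# correspond to stage 3 or worse chronic kidney disease. Illustration only.

def mdrd_egfr(serum_creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical example: a 68-year-old woman with a "normal" creatinine of 1.2 mg/dL
egfr = mdrd_egfr(1.2, 68, female=True, black=False)
print(f"eGFR = {egfr:.0f} mL/min/1.73 m2 "
      f"({'stage 3 or worse CKD' if egfr < 60 else 'GFR 60 or above'})")
```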
The misconception remains that the risk of chronic kidney disease after radical nephrectomy is insignificant, since the risk is low in renal transplant donors.19 However, renal transplant donors undergo stringent screening to ensure that their general health is good and that their renal function is robust, neither of which is true in many patients with small renal masses, particularly if they are elderly.
The overuse of radical nephrectomy is worrisome in light of the potential implications of chronic kidney disease, such as increased risk of morbid cardiovascular events and elevated mortality rates. Many experts believe that over-treatment of small renal masses may have contributed to the paradoxical increase in overall mortality rates observed with radical nephrectomy in some studies.4
Although radical nephrectomy remains an important treatment for locally advanced renal cell carcinoma, it should be performed for small renal masses only if nephron-sparing surgery is not feasible (Table 2).
Partial nephrectomy: The new gold standard for most patients
Over the last 5 years, greater emphasis has been placed on lessening the risk of chronic kidney disease in the management of all urologic conditions, including small renal masses.
The overuse of radical nephrectomy prompted the American Urological Association to commission a panel to provide guidelines for the management of clinical stage T1 renal masses.17 After an extensive review and rigorous meta-analysis, the panel concluded that partial nephrectomy is the gold standard for most patients (Table 1, Table 2).
Partial nephrectomy involves excision of the tumor with a small margin of normal tissue, preserving as much functional renal parenchyma as possible, followed by closure of the collecting system, suture ligation of any transected vessels, and reapproximation of the capsule. Tumor excision is usually performed during temporary occlusion of the renal vasculature, allowing for a bloodless field. Regional hypothermia (cold ischemia) can also be used to minimize ischemic injury.
Contemporary series have documented that partial and radical nephrectomy have comparable oncologic efficacy for patients with small renal masses.20,21 Local recurrence rates are only 1% to 2% with partial nephrectomy, and 5- and 10-year cancer-specific survival rates of 96% and 90% have been reported.22
Furthermore, some studies have shown that patients undergoing partial nephrectomy have higher overall survival rates than those managed with radical nephrectomy—perhaps in part due to greater preservation of renal function and a lower incidence of subsequent chronic kidney disease.23,24 At Cleveland Clinic, we are now studying the determinants of ultimate renal function after partial nephrectomy in an effort to minimize ischemic injury and optimize this technique.25
Complications. Partial nephrectomy does have a potential downside in that it carries a higher risk of urologic complications such as urine leak and postoperative hemorrhage, which is not surprising because it requires a reconstruction that must heal. In a recent meta-analysis, urologic complications occurred in 6.3% of patients who underwent open partial nephrectomy and in 9.0% of patients who underwent laparoscopic partial nephrectomy.17 Fortunately, most complications associated with partial nephrectomy can be managed with conservative measures.
Postoperative bleeding occurs in about 1% to 2% of patients and is the most serious complication. However, it is typically managed with superselective embolization, which has a high success rate and facilitates renal preservation.
Urine leak occurs in about 3% to 5% of cases and almost always resolves with prolonged drainage, occasionally complemented with a ureteral stent to promote antegrade drainage.
A new refinement, robotic-assisted partial nephrectomy, promises to reduce the morbidity of this procedure. This approach takes less time to learn than standard laparoscopic surgery and has expanded the indications for minimally invasive partial nephrectomy, although more-difficult cases are still better done through a traditional, open surgical approach.
Thermal ablation: Another minimally invasive option
Cryoablation and radiofrequency ablation (collectively called thermal ablation) have recently emerged as alternate minimally invasive treatments for small renal masses. They are appealing options for patients with small renal tumors (< 3.5 cm) who have significant comorbidities but still prefer a proactive approach. They can also be considered as salvage procedures in patients with local recurrence after partial nephrectomy or in select patients with multifocal disease.
Both procedures can be performed percutaneously or laparoscopically, offering the potential for rapid convalescence and reduced morbidity.26,27 A laparoscopic approach is necessary to mobilize the tumor from adjacent organs if they are juxtaposed, whereas a percutaneous approach is less invasive and is better suited for posterior renal masses.28 Renal mass sampling should be performed in these patients before treatment to define the histology and to guide surveillance and should be repeated postoperatively if there is suspicion of local recurrence based on imaging.
Cryoablation destroys tumor cells through rapid cycles of freezing to less than −20°C (−4°F) and thawing, which can be monitored in real time via thermocoupling (ie, a thermometer microprobe strategically placed outside the tumor to ensure that lethal temperatures are extended beyond the edge of the tumor) or via ultrasonography, or both. Treatment is continued until the “ice ball” extends about 1 cm beyond the edge of the tumor.
Initial series reported local tumor control rates in the range of 90% to 95%; however, follow-up was very limited.29 In a more robust single-institution experience,30 renal cryoablation demonstrated 5-year cancer-specific and recurrence-free survival rates of 93% and 83%, respectively, substantially lower than what would be expected with surgical excision in a similar patient population.
Another concern with cryoablation is that options are limited for surgical salvage if the initial treatment fails. Nguyen and Campbell31 reported that partial nephrectomy and minimally invasive surgery were often precluded in this situation because of the extensive fibrotic reaction caused by the prior treatment. If cryoablation fails, surgical salvage thus often requires open, radical surgery.
Radiofrequency ablation produces tumor coagulation via protein denaturation and disruption of cell membranes after heating tissues to temperatures above 50°C (122°F) for 4 to 6 minutes.32 One of its disadvantages is that one cannot monitor treatment progress in real time, as there is no identifiable change in tissue appearance analogous to the ice ball that is seen with cryoablation.
Although the outcomes of radiofrequency ablation are less robust than those of cryoablation, most studies suggest that local control is achieved in 80% to 90% of cases based on radiographic loss of enhancement after treatment.17,30,33 A recent meta-analysis comparing these treatments found that lesions treated with radiofrequency ablation had a significantly higher rate of local tumor progression than those treated with cryoablation (12.3% vs 4.7%, P < .0001).34 Both of these local recurrence rates are substantially higher than that seen after surgical excision, despite much shorter follow-up after thermal ablation.
Tempered enthusiasm. Because thermal ablation has been developed relatively recently, its long-term outcomes and treatment efficacy have not been well established, and current studies have confirmed higher local recurrence rates with thermal ablation than with surgical excision (Table 1). Furthermore, there are significant deficiencies in the literature about thermal ablation, including limited follow-up, lack of pathologic confirmation, and controversies regarding histologic or radiologic definitions of success (Table 2).
Although current enthusiasm for thermal ablation has been tempered by suboptimal results, further refinement in technique and acknowledgment of its limitations will help to define appropriate candidates for these treatments.
Active surveillance for select patients
In select patients with extensive medical comorbidities or short life expectancy, the risks associated with proactive management may outweigh the benefits, especially considering the indolent nature of many small renal masses. In such patients, active surveillance is reasonable.
A recent meta-analysis found that most small enhancing renal masses grew relatively slowly (median 0.28 cm/year) and posed a low risk of metastasis (1%–2%).17,22 Furthermore, almost all renal lesions that progressed to metastatic disease demonstrated rapid radiographic growth, suggesting that the radiographic growth of a renal mass under active surveillance may serve as an indicator for aggressive behavior.35
Unfortunately, the growth rates of small renal masses do not reliably predict malignancy, and one study reported that 83% of tumors without demonstrable growth were malignant.36
Studies of active surveillance to date have had several other important limitations. Many did not incorporate pathologic confirmation, so that about 20% of the tumors were actually benign, thus artificially reducing the risk of adverse outcomes.5,22,37 Furthermore, the follow-up has been short, with most studies including data for only 2 to 3 years, which is clearly inadequate for this type of malignancy.37,38 Finally, most series had significant selection bias toward small, homogeneous masses. In general, small renal masses that appear to be more aggressive are treated and thus excluded from these surveillance populations (Table 2).
Another concern about active surveillance is the small but real risk of tumor progression to metastatic disease, rendering these patients incurable even with new, targeted molecular therapies. Additionally, some patients may lose their window of opportunity for nephron-sparing surgery if significant tumor growth occurs during observation, rendering partial nephrectomy unfeasible. Therefore, active surveillance is not advisable for young, otherwise healthy patients (Table 2).
In the future, advances in renal mass sampling with molecular profiling may help determine which renal lesions are less biologically aggressive and, thereby, help identify appropriate candidates for observation (Figure 2).
- Chow WH, Devesa SS. Contemporary epidemiology of renal cell cancer. Cancer J 2008; 14:288–301.
- Lane BR, Campbell SC. Management of small renal masses. AUA Update Series 2009; 28:313–324.
- Volpe A, Panzarella T, Rendon RA, Haider MA, Kondylis FI, Jewett MA. The natural history of incidentally detected small renal masses. Cancer 2004; 100:738–745.
- Hollingsworth JM, Miller DC, Daignault S, Hollenbeck BK. Rising incidence of small renal masses: a need to reassess treatment effect. J Natl Cancer Inst 2006; 98:1331–1334.
- Frank I, Blute ML, Cheville JC, Lohse CM, Weaver AL, Zincke H. Solid renal tumors: an analysis of pathological features related to tumor size. J Urol 2003; 170:2217–2220.
- Russo P. Should elective partial nephrectomy be performed for renal cell carcinoma >4 cm in size? Nat Clin Pract Urol 2008; 5:482–483.
- Thomas AA, Aron M, Hernandez AV, Lane BR, Gill IS. Laparoscopic partial nephrectomy in octogenarians. Urology 2009; 74:1042–1046.
- Thompson RH, Kurta JM, Kaag M, et al. Tumor size is associated with malignant potential in renal cell carcinoma cases. J Urol 2009; 181:2033–2036.
- Mues AC, Landman J. Small renal masses: current concepts regarding the natural history and reflections on the American Urological Association guidelines. Curr Opin Urol 2010; 20:105–110.
- Patard JJ, Bensalah K, Vincendeau S, Rioux-Leclerq N, Guillé F, Lobel B. [Correlation between the mode of presentation of renal tumors and patient survival]. Prog Urol 2003; 13:23–28.
- Rini BI, Campbell SC, Escudier B. Renal cell carcinoma. Lancet 2009; 373:1119–1132.
- Lane BR, Samplaski MK, Herts BR, Zhou M, Novick AC, Campbell SC. Renal mass biopsy—a renaissance? J Urol 2008; 179:20–27.
- Samplaski MK, Zhou M, Lane BR, Herts B, Campbell SC. Renal mass sampling: an enlightened perspective. Int J Urol 2011; 18:5–19.
- Tan MH, Rogers CG, Cooper JT, et al. Gene expression profiling of renal cell carcinoma. Clin Cancer Res 2004; 10:6315S–6321S.
- Miller DC, Hollingsworth JM, Hafez KS, Daignault S, Hollenbeck BK. Partial nephrectomy for small renal masses: an emerging quality of care concern? J Urol 2006; 175:853–857.
- Lane BR, Poggio ED, Herts BR, Novick AC, Campbell SC. Renal function assessment in the era of chronic kidney disease: renewed emphasis on renal function centered patient care. J Urol 2009; 182:435–444.
- Campbell SC, Novick AC, Belldegrun A, et al; Practice Guidelines Committee of the American Urological Association. Guideline for management of the clinical T1 renal mass. J Urol 2009; 182:1271–1279.
- Huang WC, Levey AS, Serio AM, et al. Chronic kidney disease after nephrectomy in patients with renal cortical tumours: a retrospective cohort study. Lancet Oncol 2006; 7:735–740.
- Boorjian SA, Uzzo RG. The evolving management of small renal masses. Curr Oncol Rep 2009; 11:211–217.
- Hafez KS, Fergany AF, Novick AC. Nephron sparing surgery for localized renal cell carcinoma: impact of tumor size on patient survival, tumor recurrence and TNM staging. J Urol 1999; 162:1930–1933.
- Lee CT, Katz J, Shi W, Thaler HT, Reuter VE, Russo P. Surgical management of renal tumors 4 cm. or less in a contemporary cohort. J Urol 2000; 163:730–736.
- Chawla SN, Crispen PL, Hanlon AL, Greenberg RE, Chen DY, Uzzo RG. The natural history of observed enhancing renal masses: meta-analysis and review of the world literature. J Urol 2006; 175:425–431.
- Huang WC, Elkin EB, Levey AS, Jang TL, Russo P. Partial nephrectomy versus radical nephrectomy in patients with small renal tumors—is there a difference in mortality and cardiovascular outcomes? J Urol 2009; 181:55–61.
- Thompson RH, Boorjian SA, Lohse CM, et al. Radical nephrectomy for pT1a renal masses may be associated with decreased overall survival compared with partial nephrectomy. J Urol 2008; 179:468–471.
- Thomas AA, Demirjian S, Lane BR, et al. Acute kidney injury: novel biomarkers and potential utility for patient care in urology. Urology 2011; 77:5–11.
- Hinshaw JL, Shadid AM, Nakada SY, Hedican SP, Winter TC, Lee FT. Comparison of percutaneous and laparoscopic cryoablation for the treatment of solid renal masses. AJR Am J Roentgenol 2008; 191:1159–1168.
- Sterrett SP, Nakada SY, Wingo MS, Williams SK, Leveillee RJ. Renal thermal ablative therapy. Urol Clin North Am 2008; 35:397–414.
- Hafron J, Kaouk JH. Ablative techniques for the management of kidney cancer. Nat Clin Pract Urol 2007; 4:261–269.
- Matin SF, Ahrar K. Nephron-sparing probe ablative therapy: long-term outcomes. Curr Opin Urol 2008; 18:150–156.
- Berger A, Kamoi K, Gill IS, Aron M. Cryoablation for renal tumors: current status. Curr Opin Urol 2009; 19:138–142.
- Nguyen CT, Campbell SC. Salvage of local recurrence after primary thermal ablation for small renal masses. Expert Rev Anticancer Ther 2008; 8:1899–1905.
- Goldberg SN, Gazelle GS, Mueller PR. Thermal ablation therapy for focal malignancy: a unified approach to underlying principles, techniques, and diagnostic imaging guidance. AJR Am J Roentgenol 2000; 174:323–331.
- Carraway WA, Raman JD, Cadeddu JA. Current status of renal radiofrequency ablation. Curr Opin Urol 2009; 19:143–147.
- Kunkle DA, Uzzo RG. Cryoablation or radiofrequency ablation of the small renal mass: a meta-analysis. Cancer 2008; 113:2671–2680.
- Kunkle DA, Kutikov A, Uzzo RG. Management of small renal masses. Semin Ultrasound CT MR 2009; 30:352–358.
- Kunkle DA, Crispen PL, Chen DY, Greenberg RE, Uzzo RG. Enhancing renal masses with zero net growth during active surveillance. J Urol 2007; 177:849–853.
- Kunkle DA, Egleston BL, Uzzo RG. Excise, ablate or observe: the small renal mass dilemma—a meta-analysis and review. J Urol 2008; 179:1227–1233.
- Jewett MA, Zuniga A. Renal tumor natural history: the rationale and role for active surveillance. Urol Clin North Am 2008; 35:627–634.
The misconception remains that the risk of chronic kidney disease after radical nephrectomy is insignificant, since the risk is low in renal transplant donors.19 However, renal transplant donors undergo stringent screening to ensure that their general health is good and that their renal function is robust, both of which are not true in many patients with small renal masses, particularly if they are elderly.
The overuse of radical nephrectomy is worrisome in light of the potential implications of chronic kidney disease, such as increased risk of morbid cardiovascular events and elevated mortality rates. Many experts believe that over-treatment of small renal masses may have contributed to the paradoxical increase in overall mortality rates observed with radical nephrectomy in some studies.4
Although radical nephrectomy remains an important treatment for locally advanced renal cell carcinoma, it should be performed for small renal masses only if nephron-sparing surgery is not feasible (Table 2).
Partial nephrectomy: The new gold standard for most patients
Over the last 5 years, greater emphasis has been placed on lessening the risk of chronic kidney disease in the management of all urologic conditions, including small renal masses.
The overuse of radical nephrectomy prompted the American Urological Association to commission a panel to provide guidelines for the management of clinical stage T1 renal masses.17 After an extensive review and rigorous meta-analysis, the panel concluded that partial nephrectomy is the gold standard for most patients (Table 1, Table 2).
Partial nephrectomy involves excision of the tumor with a small margin of normal tissue, preserving as much functional renal parenchyma as possible, followed by closure of the collecting system, suture ligation of any transected vessels, and reapproximation of the capsule. Tumor excision is usually performed during temporary occlusion of the renal vasculature, allowing for a bloodless field. Regional hypothermia (cold ischemia) can also be used to minimize ischemic injury.
Contemporary series have documented that partial and radical nephrectomy have comparable oncologic efficacy for patients with small renal masses.20,21 Local recurrence rates are only 1% to 2% with partial nephrectomy, and 5- and 10-year cancer-specific survival rates of 96% and 90% have been reported.22
Furthermore, some studies have shown that patients undergoing partial nephrectomy have higher overall survival rates than those managed with radical nephrectomy—perhaps in part due to greater preservation of renal function and a lower incidence of subsequent chronic kidney disease.23,24 At Cleveland Clinic, we are now studying the determinants of ultimate renal function after partial nephrectomy in an effort to minimize ischemic injury and optimize this technique.25
Complications. Partial nephrectomy does have a potential downside in that it carries a higher risk of urologic complications such as urine leak and postoperative hemorrhage, which is not surprising because it requires a reconstruction that must heal. In a recent meta-analysis, urologic complications occurred in 6.3% patients who underwent open partial nephrectomy and in 9.0% of patients who underwent laparoscopic partial nephrectomy.17 Fortunately, most complications associated with partial nephrectomy can be managed with conservative measures.
Postoperative bleeding occurs in about 1% to 2% of patients and is the most serious complication. However, it is typically managed with superselective embolization, which has a high success rate and facilitates renal preservation.
Urine leak occurs in about 3% to 5% of cases and almost always resolves with prolonged drainage, occasionally complemented with a ureteral stent to promote antegrade drainage.
A new refinement, robotic-assisted partial nephrectomy promises to reduce the morbidity of this procedure. This approach takes less time to learn than standard laparoscopic surgery and has expanded the indications for minimally invasive partial nephrectomy, although more-difficult cases are still better done through a traditional, open surgical approach.
Thermal ablation: Another minimally invasive option
Cryoablation and radiofrequency ablation (collectively called thermal ablation) have recently emerged as alternate minimally invasive treatments for small renal masses. They are appealing options for patients with small renal tumors (< 3.5 cm) who have significant comorbidities but still prefer a proactive approach. They can also be considered as salvage procedures in patients with local recurrence after partial nephrectomy or in select patients with multifocal disease.
Both procedures can be performed percutaneously or laparoscopically, offering the potential for rapid convalescence and reduced morbidity.26,27 A laparoscopic approach is necessary to mobilize the tumor from adjacent organs if they are juxtaposed, whereas a percutaneous approach is less invasive and is better suited for posterior renal masses.28 Renal mass sampling should be performed in these patients before treatment to define the histology and to guide surveillance and should be repeated postoperatively if there is suspicion of local recurrence based on imaging.
Cryoablation destroys tumor cells through rapid cycles of freezing to less than −20°C (−4°F) and thawing, which can be monitored in real time via thermocoupling (ie, a thermometer microprobe strategically placed outside the tumor to ensure that lethal temperatures are extended beyond the edge of the tumor) or via ultrasonography, or both. Treatment is continued until the “ice ball” extends about 1 cm beyond the edge of the tumor.
Initial series reported local tumor control rates in the range of 90% to 95%; however, follow-up was very limited.29 In a more robust single-institution experience,30 renal cryoablation demonstrated 5-year cancer-specific and recurrence-free survival rates of 93% and 83%, respectively, substantially lower than what would be expected with surgical excision in a similar patient population.
Another concern with cryoablation is that options are limited for surgical salvage if the initial treatment fails. Nguyen and Campbell31 reported that partial nephrectomy and minimally invasive surgery were often precluded in this situation because of the extensive fibrotic reaction caused by the prior treatment. If cryoablation fails, surgical salvage thus often requires open, radical surgery.
Radiofrequency ablation produces tumor coagulation via protein denaturation and disruption of cell membranes after heating tissues to temperatures above 50°C (122°F) for 4 to 6 minutes.32 One of its disadvantages is that one cannot monitor treatment progress in real time, as there is no identifiable change in tissue appearance analagous to the ice ball that is seen with cryoablation.
Although the outcomes of radiofrequency ablation are less robust than those of cryoablation, most studies suggest that local control is achieved in 80% to 90% of cases based on radiographic loss of enhancement after treatment.17,30,33 A recent meta-analysis comparing these treatments found that lesions treated with radiofrequency ablation had a significantly higher rate of local tumor progression than those treated with cryoablation (12.3% vs 4.7%, P < .0001).34 Both of these local recurrence rates are substantially higher than that seen after surgical excision, despite much shorter follow-up after thermal ablation.
Tempered enthusiasm. Because thermal ablation has been developed relatively recently, its long-term outcomes and treatment efficacy have not been well established, and current studies have confirmed higher local recurrence rates with thermal ablation than with surgical excision (Table 1). Furthermore, there are significant deficiencies in the literature about thermal ablation, including limited follow-up, lack of pathologic confirmation, and controversies regarding histologic or radiologic definitions of success (Table 2).
Although current enthusiasm for thermal ablation has been tempered by suboptimal results, further refinement in technique and acknowledgment of its limitations will help to define appropriate candidates for these treatments.
Active surveillance for select patients
In select patients with extensive medical comorbidities or short life expectancy, the risks associated with proactive management may outweigh the benefits, especially considering the indolent nature of many small renal masses. In such patients, active surveillance is reasonable.
A recent meta-analysis found that most small enhancing renal masses grew relatively slowly (median 0.28 cm/year) and posed a low risk of metastasis (1%–2%).17,22 Furthermore, almost all renal lesions that progressed to metastatic disease demonstrated rapid radiographic growth, suggesting that the radiographic growth of a renal mass under active surveillance may serve as an indicator for aggressive behavior.35
Unfortunately, the growth rates of small renal masses do not reliably predict malignancy, and one study reported that 83% of tumors without demonstrable growth were malignant.36
Studies of active surveillance to date have had several other important limitations. Many did not incorporate pathologic confirmation, so that about 20% of the tumors were actually benign, thus artificially reducing the risk of adverse outcomes.5,22,37 Furthermore, the follow-up has been short, with most studies including data for only 2 to 3 years, which is clearly inadequate for this type of malignancy.37,38 Finally, most series had significant selection bias towards small, homogenous masses. In general, small renal masses that appear to be more aggressive are treated and thus excluded from these surveillance populations (Table 2).
Another concern about active surveillance is the small but real risk of tumor progression to metastatic disease, rendering these patients incurable even with new, targeted molecular therapies. Additionally, some patients may lose their window of opportunity for nephron-sparing surgery if significant tumor growth occurs during observation, rendering partial nephrectomy unfeasible. Therefore, active surveillance is not advisable for young, otherwise healthy patients (Table 2).
In the future, advances in renal mass sampling with molecular profiling may help determine which renal lesions are less biologically aggressive and, thereby, help identify appropriate candidates for observation (Figure 2).
Opinion about treatment of small renal masses has changed considerably in the past 2 decades.
Traditionally, the most common treatment was surgical removal of the whole kidney, ie, radical nephrectomy. However, recent studies have shown that many patients who undergo radical nephrectomy develop chronic kidney disease. Furthermore, radical nephrectomy often constitutes over-treatment, as many of these lesions are benign or, if malignant, would follow an indolent course if left alone.
Now that we better understand the biology of small renal masses and are more aware of the morbidity and mortality related to chronic kidney disease, we try to avoid radical nephrectomy whenever possible, favoring nephron-sparing approaches instead.
In this article, we review the current clinical management of small renal masses.
SMALL RENAL MASSES ARE A HETEROGENEOUS GROUP
Small renal masses are defined as solid renal tumors that enhance on computed tomography (CT) and magnetic resonance imaging (MRI) and are suspected of being renal cell carcinomas. They are generally low-stage and relatively small (< 4 cm in diameter) at presentation. Most are now discovered incidentally on CT or MRI done for various abdominal symptoms. From 20,000 to 30,000 new cases are diagnosed each year in the United States, and the rate is increasing by 3% to 4% per year as the use of CT and MRI increases.1,2
With more small renal masses being detected incidentally, renal cell carcinoma has been going through a stage and size migration—ie, more of these tumors are being discovered in clinical stage T1 (ie, confined to the kidney and measuring less than 7 cm) than in the past. Currently, clinical T1 renal tumors account for 48% to 66% of cases.3
This indicates that the disease is being detected and treated earlier in its course than in the past. However, cancer-specific deaths from renal cell carcinoma have not declined, suggesting that for many of these patients, our traditional practice of aggressive surgical management with radical nephrectomy may not be warranted.4
Small renal masses vary in biologic aggressiveness
Recent large surgical series indicate that up to 20% of small renal masses are benign, 55% to 60% are indolent renal cell carcinomas, and only 20% to 25% have potentially aggressive features, defined by high nuclear grade or locally invasive characteristics.5–7
A relatively strong predictor of the aggressiveness of renal tumors is their size, which directly correlates with the risk of malignant pathology. Of lesions smaller than 1.0 cm, 38% to 46% are benign, dramatically decreasing to 6.3% to 7.1% for lesions larger than 7.0 cm.5 Each 1.0-cm increase in tumor diameter correlates with a 16% increase in the risk of malignancy.8
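To make the stated size-risk relationship concrete, the sketch below applies the reported 16% increase in malignancy risk per 1.0-cm increase in diameter as a simple multiplicative factor. The compounding assumption and the 1-cm reference size are ours, introduced only for illustration; they are not part of the cited analysis.

```python
# Illustrative sketch only: applies the reported "16% increase in risk of
# malignancy per 1.0-cm increase in diameter" as a multiplicative factor.
# The compounding assumption and the reference size are assumptions made
# for illustration, not the source's model.

def relative_malignancy_risk(diameter_cm: float, reference_cm: float = 1.0) -> float:
    """Risk of malignancy relative to a reference-size tumor."""
    return 1.16 ** (diameter_cm - reference_cm)

if __name__ == "__main__":
    for d in (1.0, 2.0, 3.0, 4.0, 7.0):
        print(f"{d:.1f} cm -> {relative_malignancy_risk(d):.2f}x relative risk")
```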
Our knowledge of the natural history of small renal masses is limited, being based on small, retrospective series. In these studies, when small renal masses were followed over time, relatively few progressed (ie, metastasized), and there have been no documented reports of disease progression in the absence of demonstrable tumor growth, suggesting a predominance of nonaggressive phenotypes.9
In light of these observations, patients with small renal masses should be carefully evaluated to determine if they are candidates for active surveillance as opposed to more aggressive treatment, ie, surgery or thermal ablation.
CT AND MRI ARE THE PREFERRED DIAGNOSTIC STUDIES
In the past, most patients with renal tumors presented with gross hematuria, flank pain, or a palpable abdominal mass. These presentations are now uncommon, as most cases are asymptomatic and are diagnosed incidentally. In a series of 349 small renal masses, microhematuria was found in only 8 cases.10
Systemic manifestations or paraneoplastic syndromes such as hypercalcemia or hypertension are more common in patients with metastatic renal cell carcinoma than in those with localized tumors. It was because of these varied clinical presentations that renal cell carcinoma was previously known as the “internist’s tumor”; however, small renal masses are better termed the “radiologist’s tumor.”11
High-quality axial imaging with CT or MRI is preferred for evaluating renal cortical neoplasms. Enhancement on CT or MRI is the characteristic finding of a renal lesion that should be suspected of being renal cell carcinoma (Figure 1). Triple-phase CT is ideal, with images taken before contrast is given, immediately after contrast (the early vascular phase), and later (the delayed phase). Alternatively, MRI can be used in patients who are allergic to intravenous contrast or who have moderate renal dysfunction.
Renal tumors with enhancement of more than 15 Hounsfield units (HU) on CT imaging are considered suggestive of renal cell carcinoma, whereas those with less than 10 HU of enhancement are more likely to be benign. Enhancement in the range of 10 to 15 HU is considered equivocal.
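These cutoffs amount to a simple decision rule, encoded directly in the sketch below. Computing enhancement as the post-contrast minus pre-contrast attenuation, and the function name itself, are our own framing for illustration; this is a reading aid, not a diagnostic tool.

```python
# Minimal sketch of the enhancement cutoffs described above: > 15 HU is
# suggestive of renal cell carcinoma, < 10 HU is more likely benign, and
# 10 to 15 HU is equivocal. Enhancement is taken here as post-contrast
# minus pre-contrast attenuation (our assumption for illustration).

def classify_enhancement(pre_contrast_hu: float, post_contrast_hu: float) -> str:
    enhancement = post_contrast_hu - pre_contrast_hu
    if enhancement > 15:
        return "suggestive of renal cell carcinoma"
    if enhancement < 10:
        return "more likely benign"
    return "equivocal"

print(classify_enhancement(30, 80))   # 50 HU of enhancement -> suggestive
print(classify_enhancement(30, 38))   # 8 HU of enhancement -> more likely benign
```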
Differential diagnosis
By far, most small renal masses are renal cell carcinomas. However, other possibilities include oncocytoma, atypical or fat-poor angiomyolipoma, metanephric adenoma, urothelial carcinoma, metastatic lesions, lymphoma, renal abscess or infarction, mixed epithelial or stromal tumor, pseudotumor, and vascular malformations.
With rare exceptions, dense fat within a renal mass reliably indicates benign angiomyolipoma, and all renal tumors should be reviewed carefully for this feature. Beyond this, no clinical or radiologic feature ensures that a small renal mass is benign.
Imaging’s inability to accurately classify these enhancing renal lesions has led to a renewed interest in renal mass sampling to aid in the evaluation of small renal masses.
RENAL MASS SAMPLING: SAFER, MORE ACCURATE THAN THOUGHT
Renal mass sampling (ie, biopsy) has traditionally had a restricted role in the management of small renal masses, limited specifically to patients with a clinical history suggesting renal lymphoma, carcinoma that had metastasized to the kidney, or primary renal abscess. However, this may be changing, with more interest in it as a way to subtype and stratify select patients with small renal masses, especially potential candidates for active surveillance.
Our thinking about renal mass sampling has changed substantially over the last 2 decades. Previously, it was not routinely performed, because of concern over high false-negative rates (commonly quoted as being as high as 18%) and its potential associated morbidity. A common perception was that a negative biopsy could not be trusted and, therefore, renal mass sampling would not ultimately change patient management. However, many of these false-negative results were actually “noninformative,” ie, cases in which the renal tumor could not be adequately sampled or the pathologist lacked a sufficient specimen to allow for a definitive diagnosis.
Recent evidence suggests that these concerns were exaggerated and that renal mass sampling is more accurate and safer than previously thought. A meta-analysis of studies done before 2001 found that the diagnostic accuracy of renal mass sampling averaged 82%, whereas contemporary series indicate that its accuracy in differentiating benign from malignant tumors is actually greater than 95%.12 In addition, false-negative rates are now consistently less than 1%.13
Furthermore, serious complications requiring clinical intervention or hospitalization occur in fewer than 1% of cases. Seeding of the needle tract with tumor cells, which was another concern, is also exceedingly rare for these small, well-circumscribed renal masses.12
Overall morbidity is low with renal mass sampling, which is routinely performed as an outpatient procedure using CT or ultrasonographic guidance and local anesthesia.
However, 10% of biopsy results are still noninformative. In this situation, biopsy can be repeated, or the mass can be surgically excised, or the patient can undergo conservative management if he or she is unfit or unwilling to undergo surgery.
The encouraging results with renal mass sampling have led to greater use of it at many centers in the evaluation and risk-stratification of patients with small renal masses. It may be especially useful in patients considering several treatment options.
For example, a 75-year-old patient with modest comorbidities and a 2.0-cm enhancing renal mass could be a candidate for partial nephrectomy, thermal ablation, or active surveillance, and a reasonable argument could be made for each of these options. Renal mass sampling in this instance could be instrumental in guiding this decision, as a tissue diagnosis of high-grade renal cell carcinoma would favor partial nephrectomy, whereas a diagnosis of an oncocytic neoplasm would support a more conservative approach.
Older, frail patients with significant comorbidities who are unlikely to be candidates for aggressive surgical management do not need renal mass sampling, as they will ultimately be managed with active surveillance or thermal ablation regardless of the result.
Recent studies have also indicated that molecular profiling through gene expression analysis or proteomic analysis can further improve the accuracy of renal mass sampling.14 This will likely be the holy grail for this field, allowing for truly rational management (Figure 2).
TREATMENT OPTIONS
Radical nephrectomy: Still the most common treatment
In the past, complete removal of the kidney was standard for nearly all renal masses suspected of being renal cell carcinomas. Partial nephrectomy was generally reserved for patients who had a solitary kidney, bilateral tumors, or preexisting chronic kidney disease.
Although the two procedures provide equivalent oncologic outcomes for clinical T1 lesions, Miller et al15 reported that, before 2001, only 20% of small renal masses in the United States were managed with partial nephrectomy. That percentage has increased modestly, but radical nephrectomy still predominates.
One explanation for why the radical procedure is done more frequently is that partial nephrectomy is more technically difficult, as it involves renal reconstruction. Furthermore, radical nephrectomy can almost always be performed via a minimally invasive approach, which is inherently appealing to patients and surgeons alike. Laparoscopic radical nephrectomy has been called “the great seductress” because of these considerations.16 However, total removal of the kidney comes at a great price—loss of renal function.
Over the last decade, various studies have highlighted the association between radical nephrectomy and the subsequent clinical onset of chronic kidney disease, and the potential correlations between chronic kidney disease and cardiovascular events and elevated mortality rates.17
In a landmark study, Huang et al18 evaluated the outcomes of 662 patients who had small renal masses, a “normal” serum creatinine concentration (≤ 124 μmol/L [1.4 mg/dL]), and a normal-appearing contralateral kidney who underwent radical or partial nephrectomy. Of these, 26% were found to have preexisting stage 3 chronic kidney disease (glomerular filtration rate < 60 mL/min/1.73 m2 as calculated using the Modification of Diet in Renal Disease equation). Additionally, 65% of patients treated with radical nephrectomy were found to have stage 3 chronic kidney disease after surgery vs 20% of patients managed with partial nephrectomy.
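For readers unfamiliar with the glomerular filtration rate estimate used in this study, the sketch below shows one widely quoted abbreviated (4-variable) form of the Modification of Diet in Renal Disease equation together with the stage 3 cutoff. The coefficients are the commonly cited published values, not taken from this article, and laboratory-calibrated versions differ slightly; the example is illustrative only.

```python
# Sketch of the abbreviated (4-variable) MDRD estimate and the stage 3 chronic
# kidney disease cutoff cited above (GFR < 60 mL/min/1.73 m2). Coefficients are
# the commonly quoted 1999 values; IDMS-calibrated laboratories use 175 rather
# than 186. Illustrative only, not for clinical use.

def mdrd_egfr(scr_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr  # mL/min/1.73 m2

# Example: a creatinine of 1.4 mg/dL ("normal" by the study's cutoff) can still
# correspond to stage 3 chronic kidney disease in an older woman.
egfr = mdrd_egfr(scr_mg_dl=1.4, age_years=70, female=True, black=False)
print(f"eGFR = {egfr:.0f} mL/min/1.73 m2 -> stage 3 CKD: {egfr < 60}")
```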
The misconception remains that the risk of chronic kidney disease after radical nephrectomy is insignificant, since the risk is low in renal transplant donors.19 However, renal transplant donors undergo stringent screening to ensure that their general health is good and that their renal function is robust, both of which are not true in many patients with small renal masses, particularly if they are elderly.
The overuse of radical nephrectomy is worrisome in light of the potential implications of chronic kidney disease, such as increased risk of morbid cardiovascular events and elevated mortality rates. Many experts believe that over-treatment of small renal masses may have contributed to the paradoxical increase in overall mortality rates observed with radical nephrectomy in some studies.4
Although radical nephrectomy remains an important treatment for locally advanced renal cell carcinoma, it should be performed for small renal masses only if nephron-sparing surgery is not feasible (Table 2).
Partial nephrectomy: The new gold standard for most patients
Over the last 5 years, greater emphasis has been placed on lessening the risk of chronic kidney disease in the management of all urologic conditions, including small renal masses.
The overuse of radical nephrectomy prompted the American Urological Association to commission a panel to provide guidelines for the management of clinical stage T1 renal masses.17 After an extensive review and rigorous meta-analysis, the panel concluded that partial nephrectomy is the gold standard for most patients (Table 1, Table 2).
Partial nephrectomy involves excision of the tumor with a small margin of normal tissue, preserving as much functional renal parenchyma as possible, followed by closure of the collecting system, suture ligation of any transected vessels, and reapproximation of the capsule. Tumor excision is usually performed during temporary occlusion of the renal vasculature, allowing for a bloodless field. Regional hypothermia (cold ischemia) can also be used to minimize ischemic injury.
Contemporary series have documented that partial and radical nephrectomy have comparable oncologic efficacy for patients with small renal masses.20,21 Local recurrence rates are only 1% to 2% with partial nephrectomy, and 5- and 10-year cancer-specific survival rates of 96% and 90% have been reported.22
Furthermore, some studies have shown that patients undergoing partial nephrectomy have higher overall survival rates than those managed with radical nephrectomy—perhaps in part due to greater preservation of renal function and a lower incidence of subsequent chronic kidney disease.23,24 At Cleveland Clinic, we are now studying the determinants of ultimate renal function after partial nephrectomy in an effort to minimize ischemic injury and optimize this technique.25
Complications. Partial nephrectomy does have a potential downside in that it carries a higher risk of urologic complications such as urine leak and postoperative hemorrhage, which is not surprising because it requires a reconstruction that must heal. In a recent meta-analysis, urologic complications occurred in 6.3% of patients who underwent open partial nephrectomy and in 9.0% of patients who underwent laparoscopic partial nephrectomy.17 Fortunately, most complications associated with partial nephrectomy can be managed with conservative measures.
Postoperative bleeding occurs in about 1% to 2% of patients and is the most serious complication. However, it is typically managed with superselective embolization, which has a high success rate and facilitates renal preservation.
Urine leak occurs in about 3% to 5% of cases and almost always resolves with prolonged drainage, occasionally complemented with a ureteral stent to promote antegrade drainage.
A new refinement, robotic-assisted partial nephrectomy, promises to reduce the morbidity of this procedure. This approach takes less time to learn than standard laparoscopic surgery and has expanded the indications for minimally invasive partial nephrectomy, although more-difficult cases are still better done through a traditional, open surgical approach.
Thermal ablation: Another minimally invasive option
Cryoablation and radiofrequency ablation (collectively called thermal ablation) have recently emerged as alternate minimally invasive treatments for small renal masses. They are appealing options for patients with small renal tumors (< 3.5 cm) who have significant comorbidities but still prefer a proactive approach. They can also be considered as salvage procedures in patients with local recurrence after partial nephrectomy or in select patients with multifocal disease.
Both procedures can be performed percutaneously or laparoscopically, offering the potential for rapid convalescence and reduced morbidity.26,27 A laparoscopic approach is necessary to mobilize the tumor from adjacent organs if they are juxtaposed, whereas a percutaneous approach is less invasive and is better suited for posterior renal masses.28 Renal mass sampling should be performed in these patients before treatment to define the histology and to guide surveillance and should be repeated postoperatively if there is suspicion of local recurrence based on imaging.
Cryoablation destroys tumor cells through rapid cycles of freezing to less than −20°C (−4°F) and thawing, which can be monitored in real time via thermocoupling (ie, a thermometer microprobe strategically placed outside the tumor to ensure that lethal temperatures are extended beyond the edge of the tumor) or via ultrasonography, or both. Treatment is continued until the “ice ball” extends about 1 cm beyond the edge of the tumor.
Initial series reported local tumor control rates in the range of 90% to 95%; however, follow-up was very limited.29 In a more robust single-institution experience,30 renal cryoablation demonstrated 5-year cancer-specific and recurrence-free survival rates of 93% and 83%, respectively, substantially lower than what would be expected with surgical excision in a similar patient population.
Another concern with cryoablation is that options are limited for surgical salvage if the initial treatment fails. Nguyen and Campbell31 reported that partial nephrectomy and minimally invasive surgery were often precluded in this situation because of the extensive fibrotic reaction caused by the prior treatment. If cryoablation fails, surgical salvage thus often requires open, radical surgery.
Radiofrequency ablation produces tumor coagulation via protein denaturation and disruption of cell membranes after heating tissues to temperatures above 50°C (122°F) for 4 to 6 minutes.32 One of its disadvantages is that one cannot monitor treatment progress in real time, as there is no identifiable change in tissue appearance analogous to the ice ball that is seen with cryoablation.
Although the outcomes of radiofrequency ablation are less robust than those of cryoablation, most studies suggest that local control is achieved in 80% to 90% of cases based on radiographic loss of enhancement after treatment.17,30,33 A recent meta-analysis comparing these treatments found that lesions treated with radiofrequency ablation had a significantly higher rate of local tumor progression than those treated with cryoablation (12.3% vs 4.7%, P < .0001).34 Both of these local recurrence rates are substantially higher than that seen after surgical excision, despite much shorter follow-up after thermal ablation.
Tempered enthusiasm. Because thermal ablation has been developed relatively recently, its long-term outcomes and treatment efficacy have not been well established, and current studies have confirmed higher local recurrence rates with thermal ablation than with surgical excision (Table 1). Furthermore, there are significant deficiencies in the literature about thermal ablation, including limited follow-up, lack of pathologic confirmation, and controversies regarding histologic or radiologic definitions of success (Table 2).
Although current enthusiasm for thermal ablation has been tempered by suboptimal results, further refinement in technique and acknowledgment of its limitations will help to define appropriate candidates for these treatments.
Active surveillance for select patients
In select patients with extensive medical comorbidities or short life expectancy, the risks associated with proactive management may outweigh the benefits, especially considering the indolent nature of many small renal masses. In such patients, active surveillance is reasonable.
A recent meta-analysis found that most small enhancing renal masses grew relatively slowly (median 0.28 cm/year) and posed a low risk of metastasis (1%–2%).17,22 Furthermore, almost all renal lesions that progressed to metastatic disease demonstrated rapid radiographic growth, suggesting that the radiographic growth of a renal mass under active surveillance may serve as an indicator for aggressive behavior.35
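To put the median growth rate in perspective (this arithmetic is ours, not the meta-analysis's, and assumes linear growth at the median rate): (4.0 − 2.0) cm ÷ 0.28 cm/year ≈ 7 years for a 2.0-cm mass to exceed the 4-cm size used in this article to define a small renal mass.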
Unfortunately, the growth rates of small renal masses do not reliably predict malignancy, and one study reported that 83% of tumors without demonstrable growth were malignant.36
Studies of active surveillance to date have had several other important limitations. Many did not incorporate pathologic confirmation, so that about 20% of the tumors were actually benign, thus artificially reducing the risk of adverse outcomes.5,22,37 Furthermore, the follow-up has been short, with most studies including data for only 2 to 3 years, which is clearly inadequate for this type of malignancy.37,38 Finally, most series had significant selection bias toward small, homogeneous masses. In general, small renal masses that appear to be more aggressive are treated and thus excluded from these surveillance populations (Table 2).
Another concern about active surveillance is the small but real risk of tumor progression to metastatic disease, rendering these patients incurable even with new, targeted molecular therapies. Additionally, some patients may lose their window of opportunity for nephron-sparing surgery if significant tumor growth occurs during observation, rendering partial nephrectomy unfeasible. Therefore, active surveillance is not advisable for young, otherwise healthy patients (Table 2).
In the future, advances in renal mass sampling with molecular profiling may help determine which renal lesions are less biologically aggressive and, thereby, help identify appropriate candidates for observation (Figure 2).
REFERENCES
1. Chow WH, Devesa SS. Contemporary epidemiology of renal cell cancer. Cancer J 2008; 14:288–301.
2. Lane BR, Campbell SC. Management of small renal masses. AUA Update Series 2009; 28:313–324.
3. Volpe A, Panzarella T, Rendon RA, Haider MA, Kondylis FI, Jewett MA. The natural history of incidentally detected small renal masses. Cancer 2004; 100:738–745.
4. Hollingsworth JM, Miller DC, Daignault S, Hollenbeck BK. Rising incidence of small renal masses: a need to reassess treatment effect. J Natl Cancer Inst 2006; 98:1331–1334.
5. Frank I, Blute ML, Cheville JC, Lohse CM, Weaver AL, Zincke H. Solid renal tumors: an analysis of pathological features related to tumor size. J Urol 2003; 170:2217–2220.
6. Russo P. Should elective partial nephrectomy be performed for renal cell carcinoma >4 cm in size? Nat Clin Pract Urol 2008; 5:482–483.
7. Thomas AA, Aron M, Hernandez AV, Lane BR, Gill IS. Laparoscopic partial nephrectomy in octogenarians. Urology 2009; 74:1042–1046.
8. Thompson RH, Kurta JM, Kaag M, et al. Tumor size is associated with malignant potential in renal cell carcinoma cases. J Urol 2009; 181:2033–2036.
9. Mues AC, Landman J. Small renal masses: current concepts regarding the natural history and reflections on the American Urological Association guidelines. Curr Opin Urol 2010; 20:105–110.
10. Patard JJ, Bensalah K, Vincendeau S, Rioux-Leclerq N, Guillé F, Lobel B. [Correlation between the mode of presentation of renal tumors and patient survival]. Prog Urol 2003; 13:23–28.
11. Rini BI, Campbell SC, Escudier B. Renal cell carcinoma. Lancet 2009; 373:1119–1132.
12. Lane BR, Samplaski MK, Herts BR, Zhou M, Novick AC, Campbell SC. Renal mass biopsy—a renaissance? J Urol 2008; 179:20–27.
13. Samplaski MK, Zhou M, Lane BR, Herts B, Campbell SC. Renal mass sampling: an enlightened perspective. Int J Urol 2011; 18:5–19.
14. Tan MH, Rogers CG, Cooper JT, et al. Gene expression profiling of renal cell carcinoma. Clin Cancer Res 2004; 10:6315S–6321S.
15. Miller DC, Hollingsworth JM, Hafez KS, Daignault S, Hollenbeck BK. Partial nephrectomy for small renal masses: an emerging quality of care concern? J Urol 2006; 175:853–857.
16. Lane BR, Poggio ED, Herts BR, Novick AC, Campbell SC. Renal function assessment in the era of chronic kidney disease: renewed emphasis on renal function centered patient care. J Urol 2009; 182:435–444.
17. Campbell SC, Novick AC, Belldegrun A, et al; Practice Guidelines Committee of the American Urological Association. Guideline for management of the clinical T1 renal mass. J Urol 2009; 182:1271–1279.
18. Huang WC, Levey AS, Serio AM, et al. Chronic kidney disease after nephrectomy in patients with renal cortical tumours: a retrospective cohort study. Lancet Oncol 2006; 7:735–740.
19. Boorjian SA, Uzzo RG. The evolving management of small renal masses. Curr Oncol Rep 2009; 11:211–217.
20. Hafez KS, Fergany AF, Novick AC. Nephron sparing surgery for localized renal cell carcinoma: impact of tumor size on patient survival, tumor recurrence and TNM staging. J Urol 1999; 162:1930–1933.
21. Lee CT, Katz J, Shi W, Thaler HT, Reuter VE, Russo P. Surgical management of renal tumors 4 cm. or less in a contemporary cohort. J Urol 2000; 163:730–736.
22. Chawla SN, Crispen PL, Hanlon AL, Greenberg RE, Chen DY, Uzzo RG. The natural history of observed enhancing renal masses: meta-analysis and review of the world literature. J Urol 2006; 175:425–431.
23. Huang WC, Elkin EB, Levey AS, Jang TL, Russo P. Partial nephrectomy versus radical nephrectomy in patients with small renal tumors—is there a difference in mortality and cardiovascular outcomes? J Urol 2009; 181:55–61.
24. Thompson RH, Boorjian SA, Lohse CM, et al. Radical nephrectomy for pT1a renal masses may be associated with decreased overall survival compared with partial nephrectomy. J Urol 2008; 179:468–471.
25. Thomas AA, Demirjian S, Lane BR, et al. Acute kidney injury: novel biomarkers and potential utility for patient care in urology. Urology 2011; 77:5–11.
26. Hinshaw JL, Shadid AM, Nakada SY, Hedican SP, Winter TC, Lee FT. Comparison of percutaneous and laparoscopic cryoablation for the treatment of solid renal masses. AJR Am J Roentgenol 2008; 191:1159–1168.
27. Sterrett SP, Nakada SY, Wingo MS, Williams SK, Leveillee RJ. Renal thermal ablative therapy. Urol Clin North Am 2008; 35:397–414.
28. Hafron J, Kaouk JH. Ablative techniques for the management of kidney cancer. Nat Clin Pract Urol 2007; 4:261–269.
29. Matin SF, Ahrar K. Nephron-sparing probe ablative therapy: long-term outcomes. Curr Opin Urol 2008; 18:150–156.
30. Berger A, Kamoi K, Gill IS, Aron M. Cryoablation for renal tumors: current status. Curr Opin Urol 2009; 19:138–142.
31. Nguyen CT, Campbell SC. Salvage of local recurrence after primary thermal ablation for small renal masses. Expert Rev Anticancer Ther 2008; 8:1899–1905.
32. Goldberg SN, Gazelle GS, Mueller PR. Thermal ablation therapy for focal malignancy: a unified approach to underlying principles, techniques, and diagnostic imaging guidance. AJR Am J Roentgenol 2000; 174:323–331.
33. Carraway WA, Raman JD, Cadeddu JA. Current status of renal radiofrequency ablation. Curr Opin Urol 2009; 19:143–147.
34. Kunkle DA, Uzzo RG. Cryoablation or radiofrequency ablation of the small renal mass: a meta-analysis. Cancer 2008; 113:2671–2680.
35. Kunkle DA, Kutikov A, Uzzo RG. Management of small renal masses. Semin Ultrasound CT MR 2009; 30:352–358.
36. Kunkle DA, Crispen PL, Chen DY, Greenberg RE, Uzzo RG. Enhancing renal masses with zero net growth during active surveillance. J Urol 2007; 177:849–853.
37. Kunkle DA, Egleston BL, Uzzo RG. Excise, ablate or observe: the small renal mass dilemma—a meta-analysis and review. J Urol 2008; 179:1227–1233.
38. Jewett MA, Zuniga A. Renal tumor natural history: the rationale and role for active surveillance. Urol Clin North Am 2008; 35:627–634.
KEY POINTS
- Small renal masses are a heterogeneous group of tumors, and only 20% are aggressive renal cell carcinoma.
- In general, nephron-sparing treatments are preferred to avoid chronic kidney disease, which often occurs after radical nephrectomy.
- Thermal ablation and active surveillance are valid treatment strategies in select patients who are not optimal surgical candidates or who have limited life expectancy.
A 49-year-old woman with a persistent cough
A 49-year-old woman presents with a cough that has persisted for 3 weeks.
Two weeks ago, she was seen in the outpatient clinic for a nonproductive cough, rhinorrhea, sneezing, and a sore throat. At that time, she described coughing spells that were occasionally accompanied by posttussive chest pain and vomiting. The cough was worse at night and was occasionally associated with wheezing. She reported no fevers, chills, rigors, night sweats, or dyspnea. She said she has tried over-the-counter cough suppressants, antihistamines, and decongestants, but they provided no relief. Since she had a history of well-controlled asthma, she was diagnosed with an asthma exacerbation and was given prednisone 20 mg to take orally every day for 5 days, to be followed by an inhaled corticosteroid until her symptoms resolved.
Now, she has returned because her symptoms have persisted despite treatment, and she is seeking a second medical opinion. Her paroxysmal cough has become more frequent and more severe.
In addition to asthma, she has a history of allergic rhinitis. Her current medications include the over-the-counter histamine H1 antagonist cetirizine (Zyrtec), a fluticasone-salmeterol inhaler (Advair), and an albuterol inhaler (Proventil HFA). She reports having had mild asthma exacerbations in the past during the winter, which were managed well with her albuterol inhaler.
She has never smoked; she drinks alcohol socially. She has not traveled outside the United States during the past several months. She is married and has two children, ages 25 and 23. She lives at home with only her husband, and he has not been sick. However, she works at a greeting card store, and two of her coworkers have similar upper respiratory symptoms, although they have only a mild cough.
Her immunizations are not up-to-date. She last received the tetanus-diphtheria toxoid (Td) vaccine 12 years ago, and she has never received the tetanus, diphtheria, and acellular pertussis (Tdap) vaccine. She generally receives the influenza vaccine annually, and she received it about 6 weeks before this presentation.
She is not in distress, but she has paroxysms of severe coughing throughout her examination. Her pulse is 100 beats per minute, respiratory rate 18, and blood pressure 130/86 mm Hg. Her oropharynx is clear. The pulmonary examination reveals poor inspiratory effort due to coughing but is otherwise normal. The rest of the examination is normal, as is her chest radiograph.
WHAT DOES SHE HAVE?
1. Which of the following would best explain her symptoms?
- Asthma
- Postviral cough
- Pertussis
- Chronic bronchitis
- Pneumonia
- Gastroesophageal reflux disease
Asthma is a reasonable consideration, given her medical history, her occasional wheezing, and her nonproductive cough that is worse at night. However, asthma typically responds well to corticosteroid therapy. She has already received a course of prednisone, but her symptoms have not improved.
Postviral cough could also be considered in this patient. However, postviral cough does not typically occur in paroxysms, nor does it lead to posttussive vomiting. It is also generally regarded as a diagnosis of exclusion.
Pertussis (whooping cough) should be suspected in this patient, given the time course of her symptoms, the paroxysmal cough, and the posttussive vomiting. In addition, at her job she interacts with hundreds of people a day, increasing her risk of exposure to respiratory tract pathogens, including Bordetella pertussis.
Chronic bronchitis is defined by cough (typically productive) lasting at least 3 months per year for at least 2 consecutive years, which does not fit the time course for this patient. It is vastly more common in smokers.
Pneumonia typically presents with a cough that can be productive or nonproductive, but also with fever, chills, and radiologic evidence of a pulmonary infiltrate or consolidation. This woman has none of these.
Gastroesophageal reflux disease is one of the most common causes of chronic cough, with symptoms typically worse at night. However, it is generally associated with symptoms such as heartburn, a sour taste in the mouth, or regurgitation, which our patient did not report.
Thus, pertussis is the most likely diagnosis.
PERTUSSIS IS ON THE RISE
Pertussis is an acute and highly contagious disease caused by infection of the respiratory tract by B pertussis, a small, aerobic, gram-negative, pleomorphic coccobacillus that produces a number of antigenic and biologically active products, including pertussis toxin, filamentous hemagglutinin, agglutinogens, and tracheal cytotoxin. Transmitted by aerosolized droplets, it attaches to the ciliated epithelial cells of the lower respiratory tract, paralyzes the cilia via toxins, and causes inflammation, thus interfering with the clearing of respiratory secretions.
The incidence of pertussis is on the rise. In 2005, 25,827 cases were reported in the United States, the highest number since 1959.1 Pertussis is now epidemic in California. At the time of this writing, the number of confirmed, probable, and suspected cases in California was 9,477 (including 10 infant deaths) for the year 2010—the most cases reported in the past 65 years.2,3
In 2010, outbreaks were also reported in Michigan, Texas, Ohio, upstate New York, and Arizona.4 The overall incidence of pertussis is likely even higher than what is reported, since many cases go unrecognized or unreported.
Highly contagious
Pertussis is transmitted person-to-person, primarily through aerosolized droplets from coughing or sneezing or by direct contact with secretions from the respiratory tract of infected persons. It is highly contagious, with secondary attack rates of up to 80% in susceptible people.
A three-stage clinical course
The clinical definition of pertussis used by the US Centers for Disease Control and Prevention (CDC) and the Council of State and Territorial Epidemiologists is an acute cough illness lasting at least 2 weeks, with paroxysms of coughing, an inspiratory “whoop,” or posttussive vomiting without another apparent cause.5
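Stated as a rule, this case definition reduces to a duration criterion plus at least one characteristic feature in the absence of another explanation. The fragment below is a schematic restatement with field names of our own choosing; it is a reading aid, not an official CDC tool.

```python
# Schematic restatement of the clinical case definition quoted above: an acute
# cough illness lasting at least 2 weeks with paroxysms of coughing, an
# inspiratory "whoop," or posttussive vomiting, without another apparent cause.
# Field names are ours, introduced only for illustration.

def meets_clinical_case_definition(cough_duration_days: int,
                                   paroxysmal_cough: bool,
                                   inspiratory_whoop: bool,
                                   posttussive_vomiting: bool,
                                   other_apparent_cause: bool) -> bool:
    characteristic_feature = paroxysmal_cough or inspiratory_whoop or posttussive_vomiting
    return cough_duration_days >= 14 and characteristic_feature and not other_apparent_cause

# The patient described here: about 3 weeks of paroxysmal cough with
# posttussive vomiting and no alternative explanation after evaluation.
print(meets_clinical_case_definition(21, True, False, True, False))  # True
```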
The clinical course of the illness is traditionally divided into three stages:
The catarrhal phase typically lasts 1 to 2 weeks and is clinically indistinguishable from a viral upper respiratory infection. It is characterized by the insidious onset of malaise, coryza, sneezing, low-grade fever, and a mild cough that gradually becomes severe.6
The paroxysmal phase normally lasts 1 to 6 weeks but may persist for up to 10 weeks. The diagnosis of pertussis is usually suspected during this phase. The classic features of this phase are bursts or paroxysms of numerous, rapid coughs. These are followed by a long inspiratory effort usually accompanied by a characteristic high-pitched whoop, most notably observed in infants and children. Infants and children may appear very ill and distressed during this time and may become cyanotic, but cyanosis is uncommon in adults and adolescents. The paroxysms may also be followed by exhaustion and posttussive vomiting. In some cases, the cough is not paroxysmal, but rather simply persistent. The coughing attacks tend to occur more often at night, with an average of 15 attacks per 24 hours. During the first 1 to 2 weeks of this stage, the attacks generally increase in frequency, remain at the same intensity level for 2 to 3 weeks, and then gradually decrease over 1 to 2 weeks.1,7
The convalescent phase can have a variable course, ranging from weeks to months, with an average duration of 2 to 3 weeks. During this stage, the paroxysms of coughing become less frequent and gradually resolve. Paroxysms often recur with subsequent respiratory infections.
In infants and young children, pertussis tends to follow these stages in a predictable sequence. Adolescents and adults, however, tend to go through the stages without being as ill and typically do not exhibit the characteristic whoop.
TESTING FOR PERTUSSIS
2. Which would be the test of choice to confirm pertussis in this patient?
- Bacterial culture of nasopharyngeal secretions
- Polymerase chain reaction (PCR) testing of nasopharyngeal secretions
- Direct fluorescent antibody testing of nasopharyngeal secretions
- Enzyme-linked immunosorbent assay (ELISA) serologic testing
Establishing the diagnosis of pertussis is often rather challenging.
Bacterial culture: Very specific, but slow and not so sensitive
Bacterial culture is still the gold standard for diagnosing pertussis, as a positive culture for B pertussis is 100% specific.5
However, this test has drawbacks. Its sensitivity has a wide range (15% to 80%) and depends very much on the time from the onset of symptoms to the time the culture specimen is collected. The yield drops off significantly after 1 week, and after 3 weeks the test has a sensitivity of only 1% to 3%.8 Therefore, for our patient, who has had symptoms for 3 weeks already, bacterial culture would not be the best test. In addition, the results are usually not known for 7 to 14 days, which is too slow to be useful in managing acute cases.
For swabbing, a Dacron swab is inserted through the nostril to the posterior pharynx and is left in place for 10 seconds to maximize the yield of the specimen. Recovery rates for B pertussis are low if the throat or the anterior nasal passage is swabbed instead of the posterior pharynx.9
Nasopharyngeal aspiration is a more complicated procedure, requiring a suction device to trap the mucus, but it may provide higher yields than swabbing.10 In this method, the specimen is obtained by inserting a small tube (eg, an infant feeding tube) connected to a mucus trap into the nostril back to the posterior pharynx.
Often, direct inoculation of medium for B pertussis is not possible. In such cases, clinical specimens are placed in Regan-Lowe transport medium (half-strength charcoal agar supplemented with horse blood and cephalexin).11,12
Polymerase chain reaction testing: Faster, more sensitive, but less specific
PCR testing of nasopharyngeal specimens is now being used instead of bacterial culture to diagnose pertussis in many situations. Alternatively, nasopharyngeal aspirate (or secretions collected with two Dacron swabs) can be obtained and divided at the time of collection and the specimens sent for both culture and PCR testing. Because bacterial culture is time-consuming and has poor sensitivity, the CDC states that a positive PCR test, along with the clinical symptoms and epidemiologic information, is sufficient for diagnosis.5
PCR testing can detect B pertussis with greater sensitivity and more rapidly than bacterial culture.12–14 Its sensitivity ranges from 61% to 99%, its specificity ranges from 88% to 98%,12,15,16 and its results can be available in 2 to 24 hours.12
PCR testing’s advantage in terms of sensitivity is especially pronounced in the later stages of the disease (as in our patient), when clinical suspicion of pertussis typically arises. It can be used effectively for up to 4 weeks from the onset of cough.14 Our patient, who presented nearly 3 weeks after the onset of symptoms, underwent nasopharyngeal sampling for PCR testing.
However, PCR testing is not as specific for B pertussis as is bacterial culture, since other Bordetella species can cause positive results on PCR testing. Also, as with culture, a negative test does not reliably rule out the disease, especially if the sample is collected late in the course.
Therefore, basing the diagnosis on PCR testing alone without the proper clinical context is not advised: pertussis outbreaks have been mistakenly declared on the basis of false-positive PCR test results. Three so-called “pertussis outbreaks” in three different states from 2004 to 200617 were largely the result of overdiagnosis based on equivocal or false-positive PCR test results without the appropriate clinical circumstances. Retrospective review of these pseudo-outbreaks revealed that few cases actually met the CDC’s diagnostic criteria.17 Many patients were not tested (by any method) for pertussis and were treated as having probable cases of pertussis on the basis of their symptoms. Patients who were tested and who had a positive PCR test did not meet the clinical definition of pertussis according to the Council of State and Territorial Epidemiologists.17
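The pseudo-outbreak problem is, at bottom, one of pretest probability: when a test that is only moderately specific is applied in a setting where true pertussis is rare, most positive results are false positives. The sketch below works through that arithmetic with Bayes' theorem; the sensitivity and specificity are assumed mid-range values drawn from the figures cited above, and the pretest probabilities are purely illustrative.

```python
# Minimal sketch: positive predictive value (PPV) of pertussis PCR at different
# pretest probabilities. Sensitivity and specificity are assumed mid-range values
# from the text; the prevalence figures are illustrative, not data from the article.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' theorem: probability of pertussis given a positive test."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    sens, spec = 0.90, 0.95  # assumed, within the 61%-99% and 88%-98% ranges cited
    for pretest in (0.01, 0.10, 0.30):  # hypothetical pretest probabilities
        print(f"pretest probability {pretest:.0%}: PPV = {ppv(sens, spec, pretest):.0%}")
```

With these assumed numbers, a positive result at a 1% pretest probability is correct only about 15% of the time, whereas at a 30% pretest probability it is correct nearly 90% of the time, which is why the CDC ties PCR results to clinical symptoms and epidemiologic context.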
Since PCR testing varies in sensitivity and specificity, obtaining culture confirmation of pertussis for at least one suspicious case is recommended any time an outbreak is suspected. This is necessary for monitoring for continued presence of the agent among cases of disease, recruitment of isolates for epidemiologic studies, and surveillance for antibiotic resistance.
Direct fluorescence antibody testing
The CDC does not recommend direct fluorescence antibody testing to diagnose pertussis. This test is commercially available and is sometimes used to screen patients for B pertussis infection, but it lacks sensitivity and specificity for this organism. Cross-reaction with normal nasopharyngeal flora can lead to a false-positive result.18 In addition, the interpretation of the test is subjective, so the sensitivity and specificity are quite variable: the sensitivity is reported as 52% to 65%, while the specificity can vary from 15% to 99%.
Enzyme-linked immunosorbent assay
ELISA testing has been used in epidemiologic studies to measure serum antibodies to B pertussis. Many serologic tests exist, but none is commercially available. Many of these tests are used by the CDC and state health departments to help confirm the diagnosis, especially during outbreaks. Generally, serologic tests are more useful for diagnosis in the later phases of the disease. Currently used ELISA tests rely on both paired and single serology techniques, measuring elevated immunoglobulin G serum antibody concentrations against an array of antigens, including pertussis toxin, filamentous hemagglutinin, pertactin, and fimbriae. As a result, a wide range of sensitivities (33%–95%) and specificities (72%–100%) has been reported.12,14,19
TREATING PERTUSSIS
Our patient’s PCR test result comes back positive. In view of her symptoms and this result, we decide to treat her empirically for pertussis, even though she has had no known contact with anyone with the disease and there is currently no outbreak of it in the community.
3. According to the most recent evidence, which of the following would be the treatment of choice for pertussis in this patient?
- Azithromycin (Zithromax)
- Amoxicillin (Moxatag)
- Levofloxacin (Levaquin)
- Sulfamethoxazole-trimethoprim (Bactrim)
- Supportive measures (hydration, humidifier, antitussives, antihistamines, decongestants)
Azithromycin and the other macrolide antibiotics erythromycin and clarithromycin are first-line therapies for pertussis in adolescents and adults. If given during the catarrhal phase, they can reduce the duration and severity of symptoms and lessen the period of communicability.20,21 After the catarrhal phase, however, it is uncertain whether antibiotics change the clinical course of pertussis, as the data are conflicting.20–22
Factors to consider when selecting a macrolide antibiotic are tolerability, the potential for adverse events and drug interactions, ease of compliance, and cost. All three macrolides are equally effective against pertussis, but azithromycin and clarithromycin are generally better tolerated than erythromycin, with milder and less frequent adverse effects, particularly gastrointestinal ones.
Erythromycin and clarithromycin inhibit the cytochrome P450 enzyme system, specifically CYP3A4, and can interact with a great many commonly prescribed drugs metabolized by this enzyme. Therefore, azithromycin may be a better choice for patients already taking other medications, like our patient.
Azithromycin and clarithromycin have longer half-lives and achieve higher tissue concentrations than erythromycin, allowing for less-frequent dosing (daily for azithromycin and twice daily for clarithromycin) and shorter treatment duration (5 days for azithromycin and 7 days for clarithromycin).
An advantage of erythromycin, though, is its lower cost. The cost of a recommended course of erythromycin treatment for pertussis (ie, 500 mg every 6 hours for 14 days) is roughly $20, compared with $75 for azithromycin.
Amoxicillin is not effective in clearing B pertussis from the nasopharynx and thus is not a reasonable option for the treatment of pertussis.23
Levofloxacin is also not recommended for the treatment of pertussis.
Sulfamethoxazole-trimethoprim is a second-line agent for pertussis. It is effective in eradicating B pertussis from the nasopharynx20 and is generally used as an alternative to the macrolide agents in patients who cannot tolerate or have contraindications to macrolides. Sulfamethoxazole-trimethoprim can also be an option for patients infected with rare macrolide-resistant strains of B pertussis.
Supportive measures by themselves are reasonable for patients with pertussis beyond the catarrhal phase, since antibiotics are typically not effective at that stage of the disease.
From 80% to 90% of patients with untreated pertussis spontaneously clear the bacteria from the nasopharynx within 3 to 4 weeks from the onset of cough symptoms.20 However, supportive measures, including antitussives (both over-the-counter and prescription), tend to have very little effect on the severity or duration of the illness, especially when used past the early stage of the illness.
POSTEXPOSURE CHEMOPROPHYLAXIS FOR CLOSE CONTACTS
Postexposure chemoprophylaxis should be given to close contacts of patients who have pertussis to help prevent secondary cases.22 The CDC defines a close contact as someone who has had face-to-face exposure within 3 feet of a symptomatic patient within 21 days after the onset of symptoms in the patient. Close contacts should be treated with antibiotic regimens similar to those used in confirmed cases of pertussis.
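The CDC definition just given reduces to a simple rule about exposure type and timing. The following sketch encodes it as a predicate; the function name and parameters are illustrative assumptions, not part of any CDC tool.

```python
# Minimal sketch of the close-contact rule described above: face-to-face exposure
# within 3 feet of a symptomatic patient, occurring within 21 days after the
# patient's onset of symptoms. Names and fields are illustrative only.

def is_close_contact(face_to_face_within_3_feet: bool,
                     days_since_index_symptom_onset: int) -> bool:
    return face_to_face_within_3_feet and days_since_index_symptom_onset <= 21

print(is_close_contact(True, 10))   # True: offer postexposure chemoprophylaxis
print(is_close_contact(True, 30))   # False: exposure outside the 21-day window
```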
In our patient’s case, the diagnosis of pertussis was reported to the Ohio Department of Health. Shortly afterward, the department contacted the patient and obtained information about her close contacts. These people were then contacted and encouraged to complete a course of antibiotics for postexposure chemoprophylaxis, given the high secondary attack rates.
PERTUSSIS VACCINES
4. Which of the following vaccines could have reduced our patient’s chance of contracting the disease or reduced the severity or time course of the illness?
- DTaP
- Tdap
- Whole-cell pertussis vaccine
- No vaccine would have reduced her risk
It is important to prevent pertussis, given its associated morbidities and its generally poor response to drug therapy. Continued vigilance is imperative to maintain high levels of vaccine coverage, including the timely completion of the pertussis vaccination schedule.
The two vaccines in current use in the United States to produce immunity to pertussis—DTaP and Tdap—also confer immunity to diphtheria and tetanus. DTaP is used for children under 7 years of age, and Tdap is for ages 10 to 64. Thus, our patient should have received a series of DTaP injections as an infant and small child, and a Tdap booster at age 11 or 12 years and every 10 years after that.
The uppercase “D,” “T,” and “P” in the abbreviations signify full-strength doses, and the lowercase letters indicate that the doses of those components have been reduced. The “a” in both vaccines stands for “acellular”: ie, the pertussis component does not contain cellular elements.
DTaP for initial pertussis vaccination
The current recommendation for initial pertussis vaccination is a primary series of DTaP, given at 2, 4, and 6 months of age. A fourth dose is given between 15 and 18 months of age, and a fifth dose between 4 and 6 years of age. If the fourth dose was given after age 4, no fifth dose is needed.20
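As a compact restatement of the schedule just described, the sketch below encodes the dose timings and the fourth-dose rule. The names and structure are illustrative assumptions, not drawn from any immunization registry software.

```python
# Minimal sketch of the childhood DTaP schedule described above, with ages in months.
# A tuple represents an acceptable age window for a dose.

DTAP_SCHEDULE_MONTHS = [2, 4, 6, (15, 18), (48, 72)]  # doses 1 through 5

def fifth_dose_needed(age_at_fourth_dose_months: int) -> bool:
    """No fifth dose is needed if the fourth dose was given after age 4 (48 months)."""
    return age_at_fourth_dose_months <= 48

print(fifth_dose_needed(17))  # True: fourth dose at 17 months, fifth dose still due at 4 to 6 years
print(fifth_dose_needed(50))  # False: fourth dose after age 4, no fifth dose needed
```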
Tdap as a booster
The booster vaccine for adolescents and adults is Tdap. In 2005, two Tdap vaccines were licensed in the United States: Adacel for people ages 11 to 64 years, and Boostrix for people ages 10 to 18 years.
The CDC’s Advisory Committee on Immunization Practices (ACIP) recommends a booster dose of Tdap at age 11 or 12 years. Every 10 years thereafter, a booster of tetanus and diphtheria toxoid (Td) vaccine is recommended, except that one of the Td doses can be replaced by Tdap if the patient has not yet received Tdap.
For adults ages 19 to 64, the ACIP currently recommends a single booster dose of Tdap to replace a single dose of Td if the last dose of toxoid vaccine was received 10 or more years earlier. Even if the previous dose of Td was given within the past 10 years, a single dose of Tdap can still be given to protect against pertussis. This is especially true for patients at increased risk of pertussis or its complications, as well as for health care professionals and adults who have close contact with infants, such as new parents, grandparents, and child-care providers. Ideally, at least 2 years should have elapsed since the last Td vaccination, although shorter intervals can be used for control of pertussis outbreaks and for those who have close contact with infants.24
In 2010, the ACIP decided that, for those ages 65 and older, a single dose of Tdap vaccine may be given in place of Td if the patient has not previously received Tdap, regardless of how much time has elapsed since the last vaccination with a Td-containing vaccine.25 Data from the Vaccine Adverse Event Reporting System suggest that Tdap vaccine in this age group is as safe as the Td vaccine.25
Subsequent tetanus vaccine doses, in the form of Td, should be given at 10-year intervals throughout adulthood. Administration of Tdap at 10-year intervals appears to be highly immunogenic and well tolerated,25 suggesting that Tdap may eventually replace Td in routine booster dosing, pending further study.
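Taken together, the booster guidance in the preceding paragraphs amounts to a small decision rule. The sketch below is a simplified rendering of that 2010-era logic for illustration only; it is not a clinical decision tool, and the function name and parameters are assumptions.

```python
# Simplified sketch of the adult Tdap-versus-Td booster logic summarized above
# (2010-era ACIP recommendations). For illustration only.

def adult_booster(age: int,
                  years_since_last_td: float,
                  has_had_tdap: bool,
                  high_risk_or_infant_contact: bool = False) -> str:
    if has_had_tdap:
        # After a single Tdap, subsequent boosters revert to Td every 10 years.
        return "Td" if years_since_last_td >= 10 else "no booster due"
    if age >= 65:
        # 2010 ACIP update: a single Tdap may replace Td regardless of interval.
        return "Tdap"
    if years_since_last_td >= 10:
        return "Tdap"
    # Shorter intervals (ideally at least 2 years) are acceptable for those at
    # increased risk or with close contact with infants.
    if high_risk_or_infant_contact and years_since_last_td >= 2:
        return "Tdap"
    return "no booster due"

# Our patient: 49 years old, last Td 12 years ago, never received Tdap.
print(adult_booster(age=49, years_since_last_td=12, has_had_tdap=False))  # Tdap
```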
Tdap is not contraindicated in pregnant women. Ideally, women should be vaccinated with Tdap before becoming pregnant if they have not previously received it. If the pregnant woman is not at risk of acquiring or transmitting pertussis during pregnancy, the ACIP recommends deferring Tdap vaccination until the immediate postpartum period.
Adults who require a vaccine containing tetanus toxoid for wound management should receive Tdap instead of Td if they have never received Tdap. Adults who have never received vaccine containing tetanus and diphtheria toxoid should receive a series of three vaccinations. The preferred schedule is a dose of Tdap, followed by a dose of Td more than 4 weeks later, and a second dose of Td 6 to 12 months later, though Tdap can be substituted for Td for any one of the three doses in the series. Adults with a history of pertussis generally should receive Tdap according to routine recommendations.
Tdap is contraindicated in people with a history of serious allergic reaction to any component of the vaccine or a history of encephalopathy not attributable to an identifiable cause within 7 days of receiving a pertussis vaccine. Tdap is relatively contraindicated and should be deferred in people with a current moderate to severe acute illness, an unstable neurologic condition, a history of an Arthus hypersensitivity reaction to a tetanus-toxoid–containing vaccine within the past 10 years, or Guillain-Barré syndrome within 6 weeks of receiving a tetanus-toxoid–containing vaccine.
Tdap is generally well tolerated. Adverse effects are typically mild and may include localized pain, redness, and swelling; low-grade fever; headache; fatigue; and, less commonly, gastrointestinal upset, myalgia, arthralgia, rash, and swollen glands.
Whole-cell pertussis vaccine is no longer available in the United States
Whole-cell pertussis vaccine provides good protection against pertussis, with 70% to 90% efficacy after three doses. It is less expensive than acellular formulations and therefore is used in many parts of the world where cost is an issue. It is no longer available in the United States, however, because of high rates of local reactions such as redness, swelling, and pain at the injection site.
The importance of staying up-to-date with booster shots
Booster vaccination for pertussis in adolescents and adults is critical, since the largest recent outbreaks have occurred in these groups.21 The high rate of outbreaks is presumably the result of waning immunity from childhood immunizations and of high interpersonal contact rates. Vaccination has been shown to reduce the chance of contracting the disease and to reduce the severity and time course of the illness.21
Adolescents and adults are an important reservoir for potentially serious infections in infants who are either unvaccinated or whose vaccination schedule has not been completed. These infants are at risk of severe illness, including pneumonia, seizures, encephalopathy, and apnea, or even death. Adults and teens can also suffer complications from pertussis, although these tend to be less serious, especially in those who have been vaccinated. Complications in teens and adults are often caused by malaise and the cough itself, including weight loss (33%), urinary stress incontinence (28%), syncope (6%), rib fractures from severe coughing (4%), and pneumonia (2%).26 Thus, it is important that adolescents and adults stay up-to-date with pertussis vaccination.
CASE CONTINUED
Our patient was treated with a short (5-day) course of azithromycin 500 mg daily. It did not improve her symptoms very much, but this was not unexpected, given her late presentation and duration of symptoms. Her cough persisted for about 2 months afterwards, but it improved with time and with supportive care at home.
CONTINUED CHALLENGES
Pertussis is a reemerging disease with an increased incidence over the past 30 years, and even more so over the past 10 years. Unfortunately, treatments are not very effective, especially since the disease is often diagnosed late in the course.
We are fortunate to have a vaccine that can prevent pertussis, yet pertussis persists, in large part because of waning immunity from childhood vaccination. The duration of immunity from childhood vaccination is not yet clear. Many adolescents and adults do not follow up on these booster vaccines, thus increasing their susceptibility to pertussis. Consequently, they can transmit the disease to children who are not fully immunized. Prevention by maintaining active immunity is the key to controlling this terrible disease.
REFERENCES
1. Centers for Disease Control and Prevention. Pertussis. National Immunization Program, 2005. http://www.cdc.gov/vaccines/pubs/pinkbook/downloads/pert.pdf. Accessed July 6, 2011.
2. California Department of Public Health. Pertussis report. www.cdph.ca.gov/programs/immunize/Documents/PertussisReport2011-01-07.pdf. Accessed July 6, 2011.
3. Centers for Disease Control and Prevention. Pertussis (whooping cough). www.cdc.gov/pertussis/outbreaks.html. Accessed July 3, 2011.
4. Centers for Disease Control and Prevention. Notifiable diseases and mortality tables. MMWR Morb Mortal Wkly Rep 2010; 59:847–861. http://www.cdc.gov/mmwr/PDF/wk/mm5927.pdf. Accessed July 1, 2011.
5. Centers for Disease Control and Prevention. Pertussis. Vaccines and preventable diseases: pertussis (whooping cough) vaccination, 2010. http://www.cdc.gov/vaccines/vpd-vac/pertussis/default.htm. Accessed July 6, 2011.
6. Hewlett EL, Edwards KM. Clinical practice. Pertussis—not just for kids. N Engl J Med 2005; 352:1215–1222.
7. Hewlett E. Bordetella species. In: Mandell GL, Bennett JE, Dolin R, editors. Principles and Practice of Infectious Diseases. 5th ed. Philadelphia, PA: Churchill Livingstone; 2000:2701.
8. Viljanen MK, Ruuskanen O, Granberg C, Salmi TT. Serological diagnosis of pertussis: IgM, IgA and IgG antibodies against Bordetella pertussis measured by enzyme-linked immunosorbent assay (ELISA). Scand J Infect Dis 1982; 14:117–122.
9. Bejuk D, Begovac J, Bace A, Kuzmanovic-Sterk N, Aleraj B. Culture of Bordetella pertussis from three upper respiratory tract specimens. Pediatr Infect Dis J 1995; 14:64–65.
10. Hallander HO, Reizenstein E, Renemar B, Rasmuson G, Mardin L, Olin P. Comparison of nasopharyngeal aspirates with swabs for culture of Bordetella pertussis. J Clin Microbiol 1993; 31:50–52.
11. Regan J, Lowe F. Enrichment medium for the isolation of Bordetella. J Clin Microbiol 1977; 6:303–309.
12. World Health Organization. Laboratory manual for the diagnosis of whooping cough caused by Bordetella pertussis/Bordetella parapertussis. Department of Immunization, Vaccines and Biologicals. Printed 2004. Revised 2007. www.who.int/vaccines-documents/. Accessed July 6, 2011.
13. Meade BD, Bollen A. Recommendations for use of the polymerase chain reaction in the diagnosis of Bordetella pertussis infections. J Med Microbiol 1994; 41:51–55.
14. Wendelboe AM, Van Rie A. Diagnosis of pertussis: a historical review and recent developments. Expert Rev Mol Diagn 2006; 6:857–864.
15. Knorr L, Fox JD, Tilley PA, Ahmed-Bentley J. Evaluation of real-time PCR for diagnosis of Bordetella pertussis infection. BMC Infect Dis 2006; 6:62.
16. Sotir MJ, Cappozzo DL, Warshauer DM, et al. Evaluation of polymerase chain reaction and culture for diagnosis of pertussis in the control of a county-wide outbreak focused among adolescents and adults. Clin Infect Dis 2007; 44:1216–1219.
17. Centers for Disease Control and Prevention (CDC). Outbreaks of respiratory illness mistakenly attributed to pertussis—New Hampshire, Massachusetts, and Tennessee, 2004–2006. MMWR Morb Mortal Wkly Rep 2007; 56:837–842.
18. Ewanowich CA, Chui LW, Paranchych MG, Peppler MS, Marusyk RG, Albritton WL. Major outbreak of pertussis in northern Alberta, Canada: analysis of discrepant direct fluorescent-antibody and culture results by using polymerase chain reaction methodology. J Clin Microbiol 1993; 31:1715–1725.
19. Müller FM, Hoppe JE, Wirsing von König CH. Laboratory diagnosis of pertussis: state of the art in 1997. J Clin Microbiol 1997; 35:2435–2443.
20. Tiwari T, Murphy TV, Moran J; National Immunization Program, CDC. Recommended antimicrobial agents for the treatment and postexposure prophylaxis of pertussis: 2005 CDC Guidelines. MMWR Recomm Rep 2005; 54:1–16.
21. Wirsing von König CH, Postels-Multani S, Bock HL, Schmitt HJ. Pertussis in adults: frequency of transmission after household exposure. Lancet 1995; 346:1326–1329.
22. von König CH. Use of antibiotics in the prevention and treatment of pertussis. Pediatr Infect Dis J 2005; 24(suppl 5):S66–S68.
23. Trollfors B. Effect of erythromycin and amoxycillin on Bordetella pertussis in the nasopharynx. Infection 1978; 6:228–230.
24. Broder KR, Cortese MM, Iskander JK, et al; Advisory Committee on Immunization Practices (ACIP). Preventing tetanus, diphtheria, and pertussis among adolescents: use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccines: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep 2006; 55:1–34.
25. Centers for Disease Control and Prevention. Recommendations and guidelines. ACIP presentation slides: October 2010 meeting. http://www.cdc.gov/vaccines/recs/acip/slides-oct10.htm. Accessed July 6, 2011.
26. Cortese MM, Bisgard KM. Pertussis. In: Wallace RB, Kohatsu N, Last JM, editors. Wallace/Maxcy-Rosenau-Last Public Health & Preventive Medicine. 15th ed. New York, NY: McGraw-Hill Medical; 2008:111–114.
Pertussis is transmitted person-to-person, primarily through aerosolized droplets from coughing or sneezing or by direct contact with secretions from the respiratory tract of infected persons. It is highly contagious, with secondary attack rates of up to 80% in susceptible people.
A three-stage clinical course
The clinical definition of pertussis used by the US Centers for Disease Control and Prevention (CDC) and the Council of State and Territorial Epidemiologists is an acute cough illness lasting at least 2 weeks, with paroxysms of coughing, an inspiratory “whoop,” or posttussive vomiting without another apparent cause.5
The clinical course of the illness is traditionally divided into three stages:
The catarrhal phase typically lasts 1 to 2 weeks and is clinically indistinguishable from a viral upper respiratory infection. It is characterized by the insidious onset of malaise, coryza, sneezing, low-grade fever, and a mild cough that gradually becomes severe.6
The paroxysmal phase normally lasts 1 to 6 weeks but may persist for up to 10 weeks. The diagnosis of pertussis is usually suspected during this phase. The classic features of this phase are bursts or paroxysms of numerous, rapid coughs. These are followed by a long inspiratory effort usually accompanied by a characteristic high-pitched whoop, most notably observed in infants and children. Infants and children may appear very ill and distressed during this time and may become cyanotic, but cyanosis is uncommon in adults and adolescents. The paroxysms may also be followed by exhaustion and posttussive vomiting. In some cases, the cough is not paroxysmal, but rather simply persistent. The coughing attacks tend to occur more often at night, with an average of 15 attacks per 24 hours. During the first 1 to 2 weeks of this stage, the attacks generally increase in frequency, remain at the same intensity level for 2 to 3 weeks, and then gradually decrease over 1 to 2 weeks.1,7
The convalescent phase can have a variable course, ranging from weeks to months, with an average duration of 2 to 3 weeks. During this stage, the paroxysms of coughing become less frequent and gradually resolve. Paroxysms often recur with subsequent respiratory infections.
In infants and young children, pertussis tends to follow these stages in a predictable sequence. Adolescents and adults, however, tend to go through the stages without being as ill and typically do not exhibit the characteristic whoop.
TESTING FOR PERTUSSIS
2. Which would be the test of choice to confirm pertussis in this patient?
- Bacterial culture of nasopharyngeal secretions
- Polymerase chain reaction (PCR) testing of nasopharyngeal secretions
- Direct fluorescent antibody testing of nasopharyngeal secretions
- Enzyme-linked immunosorbent assay (ELISA) serologic testing
Establishing the diagnosis of pertussis is often rather challenging.
Bacterial culture: Very specific, but slow and not so sensitive
Bacterial culture is still the gold standard for diagnosing pertussis, as a positive culture for B pertussis is 100% specific.5
However, this test has drawbacks. Its sensitivity has a wide range (15% to 80%) and depends very much on the time from the onset of symptoms to the time the culture specimen is collected. The yield drops off significantly after 1 week, and after 3 weeks the test has a sensitivity of only 1% to 3%.8 Therefore, for our patient, who has had symptoms for 3 weeks already, bacterial culture would not be the best test. In addition, the results are usually not known for 7 to 14 days, which is too slow to be useful in managing acute cases.
For swabbing, a Dacron swab is inserted through the nostril to the posterior pharynx and is left in place for 10 seconds to maximize the yield of the specimen. Recovery rates for B pertussis are low if the throat or the anterior nasal passage is swabbed instead of the posterior pharynx.9
Nasopharyngeal aspiration is a more complicated procedure, requiring a suction device to trap the mucus, but it may provide higher yields than swabbing.10 In this method, the specimen is obtained by inserting a small tube (eg, an infant feeding tube) connected to a mucus trap into the nostril back to the posterior pharynx.
Often, direct inoculation of medium for B pertussis is not possible. In such cases, clinical specimens are placed in Regan Lowe transport medium (half-strength charcoal agar supplemented with horse blood and cephalexin).11,12
Polymerase chain reaction testing: Faster, more sensitive, but less specific
PCR testing of nasopharyngeal specimens is now being used instead of bacterial culture to diagnose pertussis in many situations. Alternatively, nasopharyngeal aspirate (or secretions collected with two Dacron swabs) can be obtained and divided at the time of collection and the specimens sent for both culture and PCR testing. Because bacterial culture is time-consuming and has poor sensitivity, the CDC states that a positive PCR test, along with the clinical symptoms and epidemiologic information, is sufficient for diagnosis.5
PCR testing can detect B pertussis with greater sensitivity and more rapidly than bacterial culture.12–14 Its sensitivity ranges from 61% to 99%, its specificity ranges from 88% to 98%,12,15,16 and its results can be available in 2 to 24 hours.12
PCR testing’s advantage in terms of sensitivity is especially pronounced in the later stages of the disease (as in our patient), when clinical suspicion of pertussis typically arises. It can be used effectively for up to 4 weeks from the onset of cough.14 Our patient, who presented nearly 3 weeks after the onset of symptoms, underwent nasopharyngeal sampling for PCR testing.
However, PCR testing is not as specific for B pertussis as is bacterial culture, since other Bordetella species can cause positive results on PCR testing. Also, as with culture, a negative test does not reliably rule out the disease, especially if the sample is collected late in the course.
Therefore, basing the diagnosis on PCR testing alone without the proper clinical context is not advised: pertussis outbreaks have been mistakenly declared on the basis of false-positive PCR test results. Three so-called “pertussis outbreaks” in three different states from 2004 to 200617 were largely the result of overdiagnosis based on equivocal or false-positive PCR test results without the appropriate clinical circumstances. Retrospective review of these pseudo-outbreaks revealed that few cases actually met the CDC’s diagnostic criteria.17 Many patients were not tested (by any method) for pertussis and were treated as having probable cases of pertussis on the basis of their symptoms. Patients who were tested and who had a positive PCR test did not meet the clinical definition of pertussis according to the Council of State and Territorial Epidemiologists.17
Since PCR testing varies in sensitivity and specificity, obtaining culture confirmation of pertussis for at least one suspicious case is recommended any time an outbreak is suspected. This is necessary for monitoring for continued presence of the agent among cases of disease, recruitment of isolates for epidemiologic studies, and surveillance for antibiotic resistance.
Direct fluorescence antibody testing
The CDC does not recommend direct fluorescence antibody testing to diagnose pertussis. This test is commercially available and is sometimes used to screen patients for B pertussis infection, but it lacks sensitivity and specificity for this organism. Cross-reaction with normal nasopharyngeal flora can lead to a false-positive result.18 In addition, the interpretation of the test is subjective, so the sensitivity and specificity are quite variable: the sensitivity is reported as 52% to 65%, while the specificity can vary from 15% to 99%.
Enzyme-linked immunosorbent assay
ELISA testing has been used in epidemiologic studies to measure serum antibodies to B pertussis. Many serologic tests exist, but none is commercially available. Many of these tests are used by the CDC and state health departments to help confirm the diagnosis, especially during outbreaks. Generally, serologic tests are more useful for diagnosis in later phases of the disease. Currently used ELISA tests use both paired and single serology techniques measuring elevated immunoglobulin G serum antibody concentrations against an array of antigens, including pertussis toxin, filamentous hemagglutinin, pertactin, and fimbrae. As a result, a range of sensitivities (33%–95%) and specificities (72%–100%) has been reported.12,14,19
TREATING PERTUSSIS
Our patient’s PCR test result comes back positive. In view of her symptoms and this result, we decide to treat her empirically for pertussis, even though she has had no known contact with anyone with the disease and there is currently no outbreak of it in the community.
3. According to the most recent evidence, which of the following would be the treatment of choice for pertussis in this patient?
- Azithromycin (Zithromax)
- Amoxicillin (Moxatag)
- Levofloxacin (Levaquin)
- Sulfamethoxazole-trimethoprim (Bactrim)
- Supportive measures (hydration, humidifier, antitussives, antihistamines, decongestants)
Azithromycin and the other macrolide antibiotics erythromycin and clarithromycin are first-line therapies for pertussis in adolescents and adults. If given during the catarrhal phase, they can reduce the duration and severity of symptoms and lessen the period of communicability.20,21 After the catarrhal phase, however, it is uncertain whether antibiotics change the clinical course of pertussis, as the data are conflicting.20–22
Factors to consider when selecting a macrolide antibiotic are tolerability, the potential for adverse events and drug interactions, ease of compliance, and cost. All three macrolides are equally effective against pertussis, but azithromycin and clarithromycin are generally better tolerated and are associated with milder and less frequent side effects than erythromycin, including lower rates of gastrointestinal side effects.
Erythromycin and clarithromycin inhibit the cytochrome P450 enzyme system, specifically CYP3A4, and can interact with a great many commonly prescribed drugs metabolized by this enzyme. Therefore, azithromycin may be a better choice for patients already taking other medications, like our patient.
Azithromycin and clarithromycin have longer half-lives and achieve higher tissue concentrations than erythromycin, allowing for less-frequent dosing (daily for azithromycin and twice daily for clarithromycin) and shorter treatment duration (5 days for azithromycin and 7 days for clarithromycin).
An advantage of erythromycin, though, is its lower cost. The cost of a recommended course of erythromycin treatment for pertussis (ie, 500 mg every 6 hours for 14 days) is roughly $20, compared with $75 for azithromycin.
Amoxicillin is not effective in clearing B pertussis from the nasopharynx and thus is not a reasonable option for the treatment of pertussis.23
Levofloxacin is also not recommended for the treatment of pertussis.
Sulfamethoxazole-trimethoprim is a second-line agent for pertussis. It is effective in eradicating B pertussis from the nasopharynx20 and is generally used as an alternative to the macrolide agents in patients who cannot tolerate or have contraindications to macrolides. Sulfamethoxazole-trimethoprim can also be an option for patients infected with rare macrolide-resistant strains of B pertussis.
Supportive measures by themselves are reasonable for patients with pertussis beyond the catarrhal phase, since antibiotics are typically not effective at that stage of the disease.
From 80% to 90% of patients with untreated pertussis spontaneously clear the bacteria from the nasopharynx within 3 to 4 weeks from the onset of cough symptoms.20 However, supportive measures, including antitussives (both over-the-counter and prescription), tend to have very little effect on the severity or duration of the illness, especially when used past the early stage of the illness.
POSTEXPOSURE CHEMOPROPHYLAXIS FOR CLOSE CONTACTS
Postexposure chemoprophylaxis should be given to close contacts of patients who have pertussis to help prevent secondary cases.22 The CDC defines a close contact as someone who has had face-to-face exposure within 3 feet of a symptomatic patient within 21 days after the onset of symptoms in the patient. Close contacts should be treated with antibiotic regimens similar to those used in confirmed cases of pertussis.
In our patient’s case, the diagnosis of pertussis was reported to the Ohio Department of Health. Shortly afterward, the department contacted the patient and obtained information about her close contacts. These people were then contacted and encouraged to complete a course of antibiotics for postexposure chemoprophylaxis, given the high secondary attack rates.
PERTUSSIS VACCINES
4. Which of the following vaccines could have reduced our patient’s chance of contracting the disease or reduced the severity or time course of the illness?
- DTaP
- Tdap
- Whole-cell pertussis vaccine
- No vaccine would have reduced her risk
It is important to prevent pertussis, given its associated morbidities and its generally poor response to drug therapy. Continued vigilance is imperative to maintain high levels of vaccine coverage, including the timely completion of the pertussis vaccination schedule.
The two vaccines in current use in the United States to produce immunity to pertussis—DTaP and Tdap—also confer immunity to diphtheria and tetanus. DTaP is used for children under 7 years of age, and Tdap is for ages 10 to 64. Thus, our patient should have received a series of DTaP injections as an infant and small child, and a Tdap booster at age 11 or 12 years and every 10 years after that.
The upper case “D,” “T,” and “P” in the abbreviations signifies full-strength doses and the lower case “d,” “t,” and “p” indicate that the doses of those components have been reduced. The “a” in both vaccines stands for “acellular”: ie, the pertussis component does not contain cellular elements.
DTaP for initial pertussis vaccination
The current recommendation for initial pertussis vaccination consists of a primary series of DTaP. DTaP vaccination is recommended for infants at 2 months of age, then again at 4 months of age, and again at 6 months of age. A fourth dose is given between the ages of 15 and 18 months, and a fifth dose is given between the ages of 4 to 6 years. If the fourth dose was given after age 4, then no fifth dose is needed.20
Tdap as a booster
The booster vaccine for adolescents and adults is Tdap. In 2005, two Tdap vaccines were licensed in the United States: Adacel for people ages 11 to 64 years, and Boostrix for people ages 10 to 18 years.
The CDC’s Advisory Committee on Immunization Practices (ACIP) recommends a booster dose of Tdap at age 11 or 12 years. Every 10 years thereafter, a booster of tetanus and diphtheria toxoid (Td) vaccine is recommended, except that one of the Td doses can be replaced by Tdap if the patient hasn’t received Tdap yet.
For adults ages 19 to 64, the ACIP currently recommends routine use of a single booster dose of Tdap to replace a single dose of Td if they received the last dose of toxoid vaccine 10 or more years earlier. If the previous dose of Td was given within the past 10 years, a single dose of Tdap is appropriate to protect patients against pertussis. This is especially true for patients at increased risk of pertussis or its complications, as well as for health care professionals and adults who have close contact with infants, such as new parents, grandparents, and child-care providers. The minimum interval since the last Td vaccination is ideally 2 years, although shorter intervals can be used for control of pertussis outbreaks and for those who have close contact with infants.24
In 2010, the ACIP decided that, for those ages 65 and older, a single dose of Tdap vaccine may be given in place of Td if the patient has not previously received Tdap, regardless of how much time has elapsed since the last vaccination with a Td-containing vaccine.25 Data from the Vaccine Adverse Event Reporting System suggest that Tdap vaccine in this age group is as safe as the Td vaccine.25
Subsequent tetanus vaccine doses, in the form of Td, should be given at 10-year intervals throughout adulthood. Administration of Tdap at 10-year intervals appears to be highly immunogenic and well tolerated,25 suggesting that it is possible that Tdap will become part of routine booster dosing instead of Td, pending further study.
Tdap is not contraindicated in pregnant women. Ideally, women should be vaccinated with Tdap before becoming pregnant if they have not previously received it. If the pregnant woman is not at risk of acquiring or transmitting pertussis during pregnancy, the ACIP recommends deferring Tdap vaccination until the immediate postpartum period.
Adults who require a vaccine containing tetanus toxoid for wound management should receive Tdap instead of Td if they have never received Tdap. Adults who have never received vaccine containing tetanus and diphtheria toxoid should receive a series of three vaccinations. The preferred schedule is a dose of Tdap, followed by a dose of Td more than 4 weeks later, and a second dose of Td 6 to 12 months later, though Tdap can be substituted for Td for any one of the three doses in the series. Adults with a history of pertussis generally should receive Tdap according to routine recommendations.
Tdap is contraindicated in people with a history of serious allergic reaction to any component of the Tdap vaccine or with a history of encephalopathy not attributable to an identifiable cause within 7 days of receiving a pertussis vaccine. Tdap is relatively contraindicated and should be deferred in people with current moderate to severe acute illness, current unstable neurologic condition, or a history of Arthus hypersensitivity reaction to a tetanus-toxoid-containing vaccine within the past 10 years, and in people who have developed Guillain-Barré syndrome, within 6 weeks of receiving a tetanus-toxoid–containing vaccine.
Tdap is generally well tolerated. Adverse effects are typically mild and may include localized pain, redness, and swelling; low-grade fever; headache; fatigue; and, less commonly, gastrointestinal upset, myalgia, arthralgia, rash, and swollen glands.
Whole-cell pertussis vaccine is no longer available in the United States
Whole-cell pertussis vaccine provides good protection against pertussis, with 70% to 90% efficacy after three doses. It is less expensive-than acellular formulations and therefore is used in many parts of the world where cost is an issue. It is no longer available in the United States, however, due to high rates of local reactions such as redness, swelling, and pain at the injection site.
The importance of staying up-to-date with booster shots
Booster vaccination for pertussis in adolescents and adults is critical, since the largest recent outbreaks have occurred in these groups.21 The high rate of outbreaks is presumably the result of waning immunity from childhood immunizations and of high interpersonal contact rates. Vaccination has been shown to reduce the chance of contracting the disease and to reduce the severity and time course of the illness.21
Adolescents and adults are an important reservoir for potentially serious infections in infants who are either unvaccinated or whose vaccination schedule has not been completed. These infants are at risk of severe illness, including pneumonia, seizures, encephalopathy, and apnea, or even death. Adults and teens can also suffer complications from pertussis, although these tend to be less serious, especially in those who have been vaccinated. Complications in teens and adults are often caused by malaise and the cough itself, including weight loss (33%), urinary stress incontinence (28%), syncope (6%), rib fractures from severe coughing (4%), and pneumonia (2%).26 Thus, it is important that adolescents and adults stay up-to-date with pertussis vaccination.
CASE CONTINUED
Our patient was treated with a short (5-day) course of azithromycin 500 mg daily. It did not improve her symptoms very much, but this was not unexpected, given her late presentation and duration of symptoms. Her cough persisted for about 2 months afterwards, but it improved with time and with supportive care at home.
CONTINUED CHALLENGES
Pertussis is a reemerging disease with an increased incidence over the past 30 years, and even more so over the past 10 years. Unfortunately, treatments are not very effective, especially since the disease is often diagnosed late in the course.
We are fortunate to have a vaccine that can prevent pertussis, yet pertussis persists, in large part because of waning immunity from childhood vaccination. The duration of immunity from childhood vaccination is not yet clear. Many adolescents and adults do not follow up on these booster vaccines, thus increasing their susceptibility to pertussis. Consequently, they can transmit the disease to children who are not fully immunized. Prevention by maintaining active immunity is the key to controlling this terrible disease.
A 49-year-old woman presents with a cough that has persisted for 3 weeks.
Two weeks ago, she was seen in the outpatient clinic for a nonproductive cough, rhinorrhea, sneezing, and a sore throat. At that time, she described coughing spells that were occasionally accompanied by posttussive chest pain and vomiting. The cough was worse at night and was occasionally associated with wheezing. She reported no fevers, chills, rigors, night sweats, or dyspnea. She said she has tried over-the-counter cough suppressants, antihistamines, and decongestants, but they provided no relief. Since she had a history of well-controlled asthma, she was diagnosed with an asthma exacerbation and was given prednisone 20 mg to take orally every day for 5 days, to be followed by an inhaled corticosteroid until her symptoms resolved.
Now, she has returned because her symptoms have persisted despite treatment, and she is seeking a second medical opinion. Her paroxysmal cough has become more frequent and more severe.
In addition to asthma, she has a history of allergic rhinitis. Her current medications include the over-the-counter histamine H1 antagonist cetirizine (Zyrtec), a fluticasone-salmeterol inhaler (Advair), and an albuterol inhaler (Proventil HFA). She reports having had mild asthma exacerbations in the past during the winter, which were managed well with her albuterol inhaler.
She has never smoked; she drinks alcohol socially. She has not traveled outside the United States during the past several months. She is married and has two children, ages 25 and 23. She lives at home with only her husband, and he has not been sick. However, she works at a greeting card store, and two of her coworkers have similar upper respiratory symptoms, although they have only a mild cough.
Her immunizations are not up-to-date. She last received the tetanus-diphtheria toxoid (Td) vaccine 12 years ago, and she never received the pediatric tetanus, diphtheria, and acellular pertussis (Tdap) vaccine. She generally receives the influenza vaccine annually, and she received it about 6 weeks before this presentation.
She is not in distress, but she has paroxysms of severe coughing throughout her examination. Her pulse is 100 beats per minute, respiratory rate 18, and blood pressure 130/86 mm Hg. Her oropharynx is clear. The pulmonary examination reveals poor inspiratory effort due to coughing but is otherwise normal. The rest of the examination is normal, as is her chest radiograph.
WHAT DOES SHE HAVE?
1. Which of the following would best explain her symptoms?
- Asthma
- Postviral cough
- Pertussis
- Chronic bronchitis
- Pneumonia
- Gastroesophageal reflux disease
Asthma is a reasonable consideration, given her medical history, her occasional wheezing, and her nonproductive cough that is worse at night. However, asthma typically responds well to corticosteroid therapy. She has already received a course of prednisone, but her symptoms have not improved.
Postviral cough could also be considered in this patient. However, postviral cough does not typically occur in paroxysms, nor does it lead to posttussive vomiting. It is also generally regarded as a diagnosis of exclusion.
Pertussis (whooping cough) should be suspected in this patient, given the time course of her symptoms, the paroxysmal cough, and the posttussive vomiting. In addition, at her job she interacts with hundreds of people a day, increasing her risk of exposure to respiratory tract pathogens, including Bordetella pertussis.
Chronic bronchitis is defined by cough (typically productive) lasting at least 3 months per year for at least 2 consecutive years, which does not fit the time course for this patient. It is vastly more common in smokers.
Pneumonia typically presents with a cough that can be productive or nonproductive, but also with fever, chills, and radiologic evidence of a pulmonary infiltrate or consolidation. This woman has none of these.
Gastroesophageal reflux disease is one of the most common causes of chronic cough, with symptoms typically worse at night. However, it is generally associated with symptoms such as heartburn, a sour taste in the mouth, or regurgitation, which our patient did not report.
Thus, pertussis is the most likely diagnosis.
PERTUSSIS IS ON THE RISE
Pertussis is an acute and highly contagious disease caused by infection of the respiratory tract by B pertussis, a small, aerobic, gramnegative, pleomorphic coccobacillus that produces a number of antigenic and biologically active products, including pertussis toxin, filamentous hemagglutinin, agglutinogens, and tracheal cytotoxin. Transmitted by aerosolized droplets, it attaches to the ciliated epithelial cells of the lower respiratory tract, paralyzes the cilia via toxins, and causes inflammation, thus interfering with the clearing of respiratory secretions.
The incidence of pertussis is on the rise. In 2005, 25,827 cases were reported in the United States, the highest number since 1959.1 Pertussis is now epidemic in California. At the time of this writing, the number of confirmed, probable, and suspected cases in California was 9,477 (including 10 infant deaths) for the year 2010—the most cases reported in the past 65 years.2,3
In 2010, outbreaks were also reported in Michigan, Texas, Ohio, upstate New York, and Arizona.4 The overall incidence of pertussis is likely even higher than what is reported, since many cases go unrecognized or unreported.
Highly contagious
Pertussis is transmitted person-to-person, primarily through aerosolized droplets from coughing or sneezing or by direct contact with secretions from the respiratory tract of infected persons. It is highly contagious, with secondary attack rates of up to 80% in susceptible people.
A three-stage clinical course
The clinical definition of pertussis used by the US Centers for Disease Control and Prevention (CDC) and the Council of State and Territorial Epidemiologists is an acute cough illness lasting at least 2 weeks, with paroxysms of coughing, an inspiratory “whoop,” or posttussive vomiting without another apparent cause.5
The clinical course of the illness is traditionally divided into three stages:
The catarrhal phase typically lasts 1 to 2 weeks and is clinically indistinguishable from a viral upper respiratory infection. It is characterized by the insidious onset of malaise, coryza, sneezing, low-grade fever, and a mild cough that gradually becomes severe.6
The paroxysmal phase normally lasts 1 to 6 weeks but may persist for up to 10 weeks. The diagnosis of pertussis is usually suspected during this phase. The classic features of this phase are bursts or paroxysms of numerous, rapid coughs. These are followed by a long inspiratory effort usually accompanied by a characteristic high-pitched whoop, most notably observed in infants and children. Infants and children may appear very ill and distressed during this time and may become cyanotic, but cyanosis is uncommon in adults and adolescents. The paroxysms may also be followed by exhaustion and posttussive vomiting. In some cases, the cough is not paroxysmal, but rather simply persistent. The coughing attacks tend to occur more often at night, with an average of 15 attacks per 24 hours. During the first 1 to 2 weeks of this stage, the attacks generally increase in frequency, remain at the same intensity level for 2 to 3 weeks, and then gradually decrease over 1 to 2 weeks.1,7
The convalescent phase can have a variable course, ranging from weeks to months, with an average duration of 2 to 3 weeks. During this stage, the paroxysms of coughing become less frequent and gradually resolve. Paroxysms often recur with subsequent respiratory infections.
In infants and young children, pertussis tends to follow these stages in a predictable sequence. Adolescents and adults, however, tend to go through the stages without being as ill and typically do not exhibit the characteristic whoop.
TESTING FOR PERTUSSIS
2. Which would be the test of choice to confirm pertussis in this patient?
- Bacterial culture of nasopharyngeal secretions
- Polymerase chain reaction (PCR) testing of nasopharyngeal secretions
- Direct fluorescent antibody testing of nasopharyngeal secretions
- Enzyme-linked immunosorbent assay (ELISA) serologic testing
Establishing the diagnosis of pertussis is often challenging.
Bacterial culture: Very specific, but slow and not so sensitive
Bacterial culture is still the gold standard for diagnosing pertussis, as a positive culture for B pertussis is 100% specific.5
However, this test has drawbacks. Its sensitivity has a wide range (15% to 80%) and depends very much on the time from the onset of symptoms to the time the culture specimen is collected. The yield drops off significantly after 1 week, and after 3 weeks the test has a sensitivity of only 1% to 3%.8 Therefore, for our patient, who has had symptoms for 3 weeks already, bacterial culture would not be the best test. In addition, the results are usually not known for 7 to 14 days, which is too slow to be useful in managing acute cases.
For swabbing, a Dacron swab is inserted through the nostril to the posterior pharynx and is left in place for 10 seconds to maximize the yield of the specimen. Recovery rates for B pertussis are low if the throat or the anterior nasal passage is swabbed instead of the posterior pharynx.9
Nasopharyngeal aspiration is a more complicated procedure, requiring a suction device to trap the mucus, but it may provide higher yields than swabbing.10 In this method, the specimen is obtained by inserting a small tube (eg, an infant feeding tube) connected to a mucus trap into the nostril back to the posterior pharynx.
Often, direct inoculation of medium for B pertussis is not possible. In such cases, clinical specimens are placed in Regan Lowe transport medium (half-strength charcoal agar supplemented with horse blood and cephalexin).11,12
Polymerase chain reaction testing: Faster, more sensitive, but less specific
PCR testing of nasopharyngeal specimens is now being used instead of bacterial culture to diagnose pertussis in many situations. Alternatively, nasopharyngeal aspirate (or secretions collected with two Dacron swabs) can be obtained and divided at the time of collection and the specimens sent for both culture and PCR testing. Because bacterial culture is time-consuming and has poor sensitivity, the CDC states that a positive PCR test, along with the clinical symptoms and epidemiologic information, is sufficient for diagnosis.5
PCR testing can detect B pertussis with greater sensitivity and more rapidly than bacterial culture.12–14 Its sensitivity ranges from 61% to 99%, its specificity ranges from 88% to 98%,12,15,16 and its results can be available in 2 to 24 hours.12
PCR testing’s advantage in terms of sensitivity is especially pronounced in the later stages of the disease (as in our patient), when clinical suspicion of pertussis typically arises. It can be used effectively for up to 4 weeks from the onset of cough.14 Our patient, who presented nearly 3 weeks after the onset of symptoms, underwent nasopharyngeal sampling for PCR testing.
However, PCR testing is not as specific for B pertussis as is bacterial culture, since other Bordetella species can cause positive results on PCR testing. Also, as with culture, a negative test does not reliably rule out the disease, especially if the sample is collected late in the course.
Therefore, basing the diagnosis on PCR testing alone without the proper clinical context is not advised: pertussis outbreaks have been mistakenly declared on the basis of false-positive PCR test results. Three so-called “pertussis outbreaks” in three different states from 2004 to 200617 were largely the result of overdiagnosis based on equivocal or false-positive PCR test results without the appropriate clinical circumstances. Retrospective review of these pseudo-outbreaks revealed that few cases actually met the CDC’s diagnostic criteria.17 Many patients were not tested (by any method) for pertussis and were treated as having probable cases of pertussis on the basis of their symptoms. Patients who were tested and who had a positive PCR test did not meet the clinical definition of pertussis according to the Council of State and Territorial Epidemiologists.17
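The pseudo-outbreak experience is easier to appreciate with some illustrative arithmetic. The sketch below (in Python) computes the positive predictive value of a PCR result from the sensitivity and specificity ranges quoted above; the pretest probabilities are hypothetical and are chosen only to contrast sporadic testing with a true outbreak setting.

```python
# Illustrative arithmetic only: positive predictive value of a pertussis PCR result.
# Sensitivity and specificity values are the ranges quoted above; the pretest
# probabilities (prevalence among those tested) are hypothetical.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive test reflects true infection."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for prevalence in (0.02, 0.40):  # sporadic cough testing vs a genuine outbreak (assumed values)
    for sens, spec in ((0.61, 0.88), (0.99, 0.98)):  # low and high ends of the reported ranges
        ppv = positive_predictive_value(sens, spec, prevalence)
        print(f"prevalence {prevalence:.0%}, sensitivity {sens:.0%}, specificity {spec:.0%}: PPV {ppv:.0%}")
```

At low pretest probability, even a reasonably specific assay yields mostly false-positive results, which is why the clinical case definition and culture confirmation remain important.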
Since PCR testing varies in sensitivity and specificity, obtaining culture confirmation of pertussis for at least one suspicious case is recommended any time an outbreak is suspected. This is necessary for monitoring for continued presence of the agent among cases of disease, recruitment of isolates for epidemiologic studies, and surveillance for antibiotic resistance.
Direct fluorescent antibody testing
The CDC does not recommend direct fluorescent antibody testing to diagnose pertussis. This test is commercially available and is sometimes used to screen patients for B pertussis infection, but it lacks sensitivity and specificity for this organism. Cross-reaction with normal nasopharyngeal flora can lead to a false-positive result.18 In addition, interpretation of the test is subjective, so its sensitivity and specificity are quite variable: the reported sensitivity is 52% to 65%, and the specificity ranges from 15% to 99%.
Enzyme-linked immunosorbent assay
ELISA testing has been used in epidemiologic studies to measure serum antibodies to B pertussis. Many serologic tests exist, but none is commercially available. Many of these tests are used by the CDC and state health departments to help confirm the diagnosis, especially during outbreaks. Generally, serologic tests are more useful for diagnosis in the later phases of the disease. Currently used ELISA tests measure elevated serum immunoglobulin G antibody concentrations, using either paired or single serologic specimens, against an array of antigens, including pertussis toxin, filamentous hemagglutinin, pertactin, and fimbriae. As a result, a range of sensitivities (33%–95%) and specificities (72%–100%) has been reported.12,14,19
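Taken together, the testing options sort themselves largely by time from cough onset. A minimal sketch, assuming the approximate windows described above (culture most useful in the first 2 weeks, PCR up to about 4 weeks, serology mainly in later phases), might look like this; it is illustrative only, not a clinical algorithm.

```python
# Minimal sketch of test selection by time from cough onset, based on the approximate
# windows described above. Thresholds are rough and illustrative, not clinical guidance.

def suggested_pertussis_tests(weeks_since_cough_onset: float) -> list[str]:
    tests = []
    if weeks_since_cough_onset <= 2:
        tests.append("nasopharyngeal culture")  # yield falls sharply after the first week
    if weeks_since_cough_onset <= 4:
        tests.append("PCR of nasopharyngeal secretions")
    if weeks_since_cough_onset > 2:
        tests.append("serology (eg, through the CDC or a state health department)")
    return tests

print(suggested_pertussis_tests(3))  # our patient: PCR, with serology as a possible adjunct
```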
TREATING PERTUSSIS
Our patient’s PCR test result comes back positive. In view of her symptoms and this result, we decide to treat her empirically for pertussis, even though she has had no known contact with anyone with the disease and there is currently no outbreak of it in the community.
3. According to the most recent evidence, which of the following would be the treatment of choice for pertussis in this patient?
- Azithromycin (Zithromax)
- Amoxicillin (Moxatag)
- Levofloxacin (Levaquin)
- Sulfamethoxazole-trimethoprim (Bactrim)
- Supportive measures (hydration, humidifier, antitussives, antihistamines, decongestants)
Azithromycin and the other macrolide antibiotics erythromycin and clarithromycin are first-line therapies for pertussis in adolescents and adults. If given during the catarrhal phase, they can reduce the duration and severity of symptoms and lessen the period of communicability.20,21 After the catarrhal phase, however, it is uncertain whether antibiotics change the clinical course of pertussis, as the data are conflicting.20–22
Factors to consider when selecting a macrolide antibiotic are tolerability, the potential for adverse events and drug interactions, ease of compliance, and cost. All three macrolides are equally effective against pertussis, but azithromycin and clarithromycin are generally better tolerated than erythromycin, with milder and less frequent side effects, particularly gastrointestinal ones.
Erythromycin and clarithromycin inhibit the cytochrome P450 enzyme system, specifically CYP3A4, and can interact with a great many commonly prescribed drugs metabolized by this enzyme. Therefore, azithromycin may be a better choice for patients already taking other medications, like our patient.
Azithromycin and clarithromycin have longer half-lives and achieve higher tissue concentrations than erythromycin, allowing for less-frequent dosing (daily for azithromycin and twice daily for clarithromycin) and shorter treatment duration (5 days for azithromycin and 7 days for clarithromycin).
An advantage of erythromycin, though, is its lower cost. The cost of a recommended course of erythromycin treatment for pertussis (ie, 500 mg every 6 hours for 14 days) is roughly $20, compared with $75 for azithromycin.
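The practical differences among the three macrolides can be collected as data for a side-by-side comparison. In the snippet below, the dosing, durations, and rough dollar figures are those mentioned above; the cost of clarithromycin is not given in the text, so it is left unspecified.

```python
# Adult pertussis regimens as described above, collected for comparison.
# Dosing, duration, and the rough dollar figures come from the text; None means
# the cost is not stated there.

macrolide_regimens = {
    "azithromycin":   {"dosing": "once daily",           "days": 5,  "approx_cost_usd": 75},
    "clarithromycin": {"dosing": "twice daily",          "days": 7,  "approx_cost_usd": None},
    "erythromycin":   {"dosing": "500 mg every 6 hours", "days": 14, "approx_cost_usd": 20},
}

for drug, regimen in macrolide_regimens.items():
    cost = f"about ${regimen['approx_cost_usd']}" if regimen["approx_cost_usd"] is not None else "not stated"
    print(f"{drug}: {regimen['dosing']} for {regimen['days']} days, cost {cost}")
```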
Amoxicillin is not effective in clearing B pertussis from the nasopharynx and thus is not a reasonable option for the treatment of pertussis.23
Levofloxacin is also not recommended for the treatment of pertussis.
Sulfamethoxazole-trimethoprim is a second-line agent for pertussis. It is effective in eradicating B pertussis from the nasopharynx20 and is generally used as an alternative to the macrolide agents in patients who cannot tolerate or have contraindications to macrolides. Sulfamethoxazole-trimethoprim can also be an option for patients infected with rare macrolide-resistant strains of B pertussis.
Supportive measures by themselves are reasonable for patients with pertussis beyond the catarrhal phase, since antibiotics are typically not effective at that stage of the disease.
From 80% to 90% of patients with untreated pertussis spontaneously clear the bacteria from the nasopharynx within 3 to 4 weeks of the onset of cough.20 Supportive measures, including over-the-counter and prescription antitussives, tend to have very little effect on the severity or duration of the illness, especially when started past its early stage.
POSTEXPOSURE CHEMOPROPHYLAXIS FOR CLOSE CONTACTS
Postexposure chemoprophylaxis should be given to close contacts of patients who have pertussis to help prevent secondary cases.22 The CDC defines a close contact as someone who has had face-to-face exposure within 3 feet of a symptomatic patient within 21 days after the onset of symptoms in the patient. Close contacts should be treated with antibiotic regimens similar to those used in confirmed cases of pertussis.
In our patient’s case, the diagnosis of pertussis was reported to the Ohio Department of Health. Shortly afterward, the department contacted the patient and obtained information about her close contacts. These people were then contacted and encouraged to complete a course of antibiotics for postexposure chemoprophylaxis, given the high secondary attack rates.
PERTUSSIS VACCINES
4. Which of the following vaccines could have reduced our patient’s chance of contracting the disease or reduced the severity or time course of the illness?
- DTaP
- Tdap
- Whole-cell pertussis vaccine
- No vaccine would have reduced her risk
It is important to prevent pertussis, given its associated morbidities and its generally poor response to drug therapy. Continued vigilance is imperative to maintain high levels of vaccine coverage, including the timely completion of the pertussis vaccination schedule.
The two vaccines in current use in the United States to produce immunity to pertussis—DTaP and Tdap—also confer immunity to diphtheria and tetanus. DTaP is used for children under 7 years of age, and Tdap is for ages 10 to 64. Thus, our patient should have received a series of DTaP injections as an infant and small child, a Tdap booster at age 11 or 12 years, and booster doses every 10 years after that.
The upper-case “D,” “T,” and “P” in the abbreviations signify full-strength doses, and the lower-case “d,” “t,” and “p” indicate that the doses of those components have been reduced. The “a” in both vaccines stands for “acellular”: ie, the pertussis component does not contain cellular elements.
DTaP for initial pertussis vaccination
The current recommendation for initial pertussis vaccination consists of a primary series of DTaP. DTaP vaccination is recommended for infants at 2 months of age, again at 4 months, and again at 6 months. A fourth dose is given between the ages of 15 and 18 months, and a fifth dose between the ages of 4 and 6 years. If the fourth dose was given after age 4, no fifth dose is needed.20
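As a rough sketch, the schedule and the fourth-dose exception can be restated in a few lines of code; the ages are in months and this is only a restatement of the recommendation above, not a substitute for it.

```python
# The childhood DTaP schedule described above, expressed as data, plus the stated
# exception: no fifth dose is needed if the fourth dose was given after age 4 years.
# Ages are in months; illustrative only.

DTAP_SCHEDULE_MONTHS = [2, 4, 6, (15, 18), (48, 72)]  # recommended ages for doses 1 through 5

def fifth_dose_needed(age_at_fourth_dose_months: int) -> bool:
    return age_at_fourth_dose_months < 48  # 48 months = 4 years

print(fifth_dose_needed(16))  # True: fourth dose at 16 months, so dose 5 is due at 4 to 6 years
print(fifth_dose_needed(50))  # False: fourth dose was given after the fourth birthday
```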
Tdap as a booster
The booster vaccine for adolescents and adults is Tdap. In 2005, two Tdap vaccines were licensed in the United States: Adacel for people ages 11 to 64 years, and Boostrix for people ages 10 to 18 years.
The CDC’s Advisory Committee on Immunization Practices (ACIP) recommends a booster dose of Tdap at age 11 or 12 years. Every 10 years thereafter, a booster of tetanus and diphtheria toxoid (Td) vaccine is recommended, except that one of the Td doses can be replaced by Tdap if the patient hasn’t received Tdap yet.
For adults ages 19 to 64, the ACIP currently recommends routine use of a single booster dose of Tdap to replace a single dose of Td if they received the last dose of toxoid vaccine 10 or more years earlier. If the previous dose of Td was given within the past 10 years, a single dose of Tdap is appropriate to protect patients against pertussis. This is especially true for patients at increased risk of pertussis or its complications, as well as for health care professionals and adults who have close contact with infants, such as new parents, grandparents, and child-care providers. The minimum interval since the last Td vaccination is ideally 2 years, although shorter intervals can be used for control of pertussis outbreaks and for those who have close contact with infants.24
In 2010, the ACIP decided that, for those ages 65 and older, a single dose of Tdap vaccine may be given in place of Td if the patient has not previously received Tdap, regardless of how much time has elapsed since the last vaccination with a Td-containing vaccine.25 Data from the Vaccine Adverse Event Reporting System suggest that Tdap vaccine in this age group is as safe as the Td vaccine.25
Subsequent tetanus vaccine doses, in the form of Td, should be given at 10-year intervals throughout adulthood. Administration of Tdap at 10-year intervals appears to be highly immunogenic and well tolerated,25 suggesting that it is possible that Tdap will become part of routine booster dosing instead of Td, pending further study.
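The adult booster logic in the preceding paragraphs can be sketched as a simple decision function. This is a deliberately simplified illustration: it ignores the special situations discussed below (pregnancy, wound management, outbreak control) and all contraindications, and it is not clinical guidance.

```python
# Simplified sketch of the adult Td/Tdap booster logic described above.
# It omits pregnancy, wound management, outbreak settings, and contraindications.

def adult_booster_suggestion(age_years: int, years_since_last_td: float,
                             has_received_tdap: bool) -> str:
    if has_received_tdap:
        # Once Tdap has been given, routine Td boosters continue every 10 years
        return "Td booster" if years_since_last_td >= 10 else "no booster due yet"
    if age_years >= 65:
        # Per the 2010 ACIP decision, Tdap may be given regardless of the interval
        return "Tdap (regardless of interval since last Td)"
    if years_since_last_td >= 10:
        return "Tdap in place of the routine Td booster"
    return "Tdap may still be considered, especially with infant contact or increased risk"

print(adult_booster_suggestion(age_years=32, years_since_last_td=12, has_received_tdap=False))
```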
Tdap is not contraindicated in pregnant women. Ideally, women should be vaccinated with Tdap before becoming pregnant if they have not previously received it. If the pregnant woman is not at risk of acquiring or transmitting pertussis during pregnancy, the ACIP recommends deferring Tdap vaccination until the immediate postpartum period.
Adults who require a vaccine containing tetanus toxoid for wound management should receive Tdap instead of Td if they have never received Tdap. Adults who have never received vaccine containing tetanus and diphtheria toxoid should receive a series of three vaccinations. The preferred schedule is a dose of Tdap, followed by a dose of Td more than 4 weeks later, and a second dose of Td 6 to 12 months later, though Tdap can be substituted for Td for any one of the three doses in the series. Adults with a history of pertussis generally should receive Tdap according to routine recommendations.
Tdap is contraindicated in people with a history of serious allergic reaction to any component of the Tdap vaccine or with a history of encephalopathy not attributable to an identifiable cause within 7 days of receiving a pertussis vaccine. Tdap is relatively contraindicated and should be deferred in people with current moderate to severe acute illness, a currently unstable neurologic condition, a history of Arthus hypersensitivity reaction to a tetanus-toxoid-containing vaccine within the past 10 years, or Guillain-Barré syndrome that developed within 6 weeks of receiving a tetanus-toxoid-containing vaccine.
Tdap is generally well tolerated. Adverse effects are typically mild and may include localized pain, redness, and swelling; low-grade fever; headache; fatigue; and, less commonly, gastrointestinal upset, myalgia, arthralgia, rash, and swollen glands.
Whole-cell pertussis vaccine is no longer available in the United States
Whole-cell pertussis vaccine provides good protection against pertussis, with 70% to 90% efficacy after three doses. It is less expensive than acellular formulations and therefore is used in many parts of the world where cost is an issue. It is no longer available in the United States, however, because of high rates of local reactions such as redness, swelling, and pain at the injection site.
The importance of staying up-to-date with booster shots
Booster vaccination for pertussis in adolescents and adults is critical, since the largest recent outbreaks have occurred in these groups.21 The high rate of outbreaks is presumably the result of waning immunity from childhood immunizations and of high interpersonal contact rates. Vaccination has been shown to reduce the chance of contracting the disease and to reduce the severity and time course of the illness.21
Adolescents and adults are an important reservoir for potentially serious infections in infants who are either unvaccinated or whose vaccination schedule has not been completed. These infants are at risk of severe illness, including pneumonia, seizures, encephalopathy, and apnea, or even death. Adults and teens can also suffer complications from pertussis, although these tend to be less serious, especially in those who have been vaccinated. Complications in teens and adults are often caused by malaise and the cough itself, including weight loss (33%), urinary stress incontinence (28%), syncope (6%), rib fractures from severe coughing (4%), and pneumonia (2%).26 Thus, it is important that adolescents and adults stay up-to-date with pertussis vaccination.
CASE CONTINUED
Our patient was treated with a short (5-day) course of azithromycin 500 mg daily. It did not improve her symptoms very much, but this was not unexpected, given her late presentation and duration of symptoms. Her cough persisted for about 2 months afterwards, but it improved with time and with supportive care at home.
CONTINUED CHALLENGES
Pertussis is a reemerging disease with an increased incidence over the past 30 years, and even more so over the past 10 years. Unfortunately, treatments are not very effective, especially since the disease is often diagnosed late in the course.
We are fortunate to have a vaccine that can prevent pertussis, yet pertussis persists, in large part because of waning immunity from childhood vaccination. The duration of immunity from childhood vaccination is not yet clear. Many adolescents and adults do not follow up on these booster vaccines, thus increasing their susceptibility to pertussis. Consequently, they can transmit the disease to children who are not fully immunized. Prevention by maintaining active immunity is the key to controlling this terrible disease.
- Centers for Disease Control and Prevention. Pertussis. National Immunization Program, 2005. http://www.cdc.gov/vaccines/pubs/pinkbook/downloads/pert.pdf. Accessed July 6, 2011.
- California Department of Public Health. Pertussis report. www.cdph.ca.gov/programs/immunize/Documents/PertussisReport2011-01-07.pdf. Accessed July 6, 2011.
- Centers for Disease Control and Prevention. Pertussis (whooping cough). www.cdc.gov/pertussis/outbreaks.html. Accessed July 3, 2011.
- Centers for Disease Control and Prevention. Notifiable diseases and mortality tables. MMWR Morb Mortal Wkly Rep 2010; 59:847–861. http://www.cdc.gov/mmwr/PDF/wk/mm5927.pdf. Accessed July 1, 2011.
- Centers for Disease Control and Prevention. Pertussis. Vaccines and preventable diseases: pertussis (whooping cough) vaccination, 2010. http://www.cdc.gov/vaccines/vpd-vac/pertussis/default.htm. Accessed July 6, 2011.
- Hewlett EL, Edwards KM. Clinical practice. Pertussis—not just for kids. N Engl J Med 2005; 352:1215–1222.
- Hewlett E. Bordetella species. In: Mandell GL, Bennett JE, Dolin R, editors. Principles and Practice of Infectious Diseases. 5th ed. Philadelphia, PA: Churchill Livingstone; 2000:2701.
- Viljanen MK, Ruuskanen O, Granberg C, Salmi TT. Serological diagnosis of pertussis: IgM, IgA and IgG antibodies against Bordetella pertussis measured by enzyme-linked immunosorbent assay (ELISA). Scand J Infect Dis 1982; 14:117–122.
- Bejuk D, Begovac J, Bace A, Kuzmanovic-Sterk N, Aleraj B. Culture of Bordetella pertussis from three upper respiratory tract specimens. Pediatr Infect Dis J 1995; 14:64–65.
- Hallander HO, Reizenstein E, Renemar B, Rasmuson G, Mardin L, Olin P. Comparison of nasopharyngeal aspirates with swabs for culture of Bordetella pertussis. J Clin Microbiol 1993; 31:50–52.
- Regan J, Lowe F. Enrichment medium for the isolation of Bordetella. J Clin Microbiol 1977; 6:303–309.
- World Health Organization. Laboratory manual for the diagnosis of whooping cough caused by Bordetella pertussis/Bordetella para-pertussis. Department of Immunization, Vaccines and Biologicals. Printed 2004. Revised 2007. www.who.int/vaccines-documents/. Accessed July 6, 2011.
- Meade BD, Bollen A. Recommendations for use of the polymerase chain reaction in the diagnosis of Bordetella pertussis infections. J Med Microbiol 1994; 41:51–55.
- Wendelboe AM, Van Rie A. Diagnosis of pertussis: a historical review and recent developments. Expert Rev Mol Diagn 2006; 6:857–864.
- Knorr L, Fox JD, Tilley PA, Ahmed-Bentley J. Evaluation of real-time PCR for diagnosis of Bordetella pertussis infection. BMC Infect Dis 2006; 6:62.
- Sotir MJ, Cappozzo DL, Warshauer DM, et al. Evaluation of polymerase chain reaction and culture for diagnosis of pertussis in the control of a county-wide outbreak focused among adolescents and adults. Clin Infect Dis 2007; 44:1216–1219.
- Centers for Disease Control and Prevention (CDC). Outbreaks of respiratory illness mistakenly attributed to pertussis—New Hampshire, Massachusetts, and Tennessee, 2004–2006. MMWR Morb Mortal Wkly Rep 2007; 56:837–842.
- Ewanowich CA, Chui LW, Paranchych MG, Peppler MS, Marusyk RG, Albritton WL. Major outbreak of pertussis in northern Alberta, Canada: analysis of discrepant direct fluorescent-antibody and culture results by using polymerase chain reaction methodology. J Clin Microbiol 1993; 31:1715–1725.
- Müller FM, Hoppe JE, Wirsing von König CH. Laboratory diagnosis of pertussis: state of the art in 1997. J Clin Microbiol 1997; 35:2435–2443.
- Tiwari T, Murphy TV, Moran J; National Immunization Program, CDC. Recommended antimicrobial agents for the treatment and postexposure prophylaxis of pertussis: 2005 CDC Guidelines. MMWR Recomm Rep 2005; 54:1–16.
- Wirsing von König CH, Postels-Multani S, Bock HL, Schmitt HJ. Pertussis in adults: frequency of transmission after household exposure. Lancet 1995; 346:1326–1329.
- von König CH. Use of antibiotics in the prevention and treatment of pertussis. Pediatr Infect Dis J 2005; 24(suppl 5):S66–S68.
- Trollfors B. Effect of erythromycin and amoxycillin on Bordetella pertussis in the nasopharynx. Infection 1978; 6:228–230.
- Broder KR, Cortese MM, Iskander JK, et al; Advisory Committee on Immunization Practices (ACIP). Preventing tetanus, diphtheria, and pertussis among adolescents: use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccines recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep 2006; 55:1–34.
- Centers for Disease Control and Prevention. Recommendations and Guidelines. ACIP presentation slides: October 2010 meeting. http://www.cdc.gov/vaccines/recs/acip/slides-oct10.htm. Accessed July 6, 2011.
- Cortese MM, Bisgard KM. Pertussis. In: Wallace RB, Kohatsu N, Last JM, editors. Wallace/Maxcy-Rosenau-Last Public Health & Preventive Medicine. 15th ed. New York, NY: McGraw-Hill Medical; 2008:111–114.
New fecal occult blood tests may improve adherence and mortality rates
New fecal occult blood tests hold promise for improving our detection of colorectal cancer and for lowering mortality rates. This is good news, because despite the proven benefit of being screened for colorectal cancer,1 only an average of 62% of eligible adults are screened,2 and colorectal cancer remains the third leading cause of cancer deaths in the United States.
Colonoscopy is often considered the gold-standard screening test for colorectal cancer. However, many patients do not undergo screening colonoscopy because it is invasive and uncomfortable, bowel preparation poses a challenge, the procedure has risks, and it is costly. Members of minority groups, people of lower socioeconomic status, and those who lack health insurance are less likely to undergo screening.
While fecal occult blood tests are cheaper and less invasive than colonoscopy, they do not allow us to prevent colorectal cancer by removing adenomatous polyps. Still, randomized controlled trials have proven that fecal occult blood testing is associated with a decrease in the rate of death from colorectal cancer,3 and it has been shown to be cost-effective.
The challenge is that all guaiac-based tests (gFOBTs), even the newest one, require strict dietary and medication restrictions to be accurate; lapses in these restrictions can produce false-positive results, and the difficulty of collecting stool specimens often results in failure to complete the test.
The newer tests—one guaiac-based test and several fecal immunochemical tests (FITs)—are more sensitive, and the FITs are more convenient for patients to use than the older guaiac-based tests, advantages that, we hope, will increase the rates of compliance with testing.
GUAIAC-BASED TESTS
Guaiac tests detect the peroxidase activity of hemoglobin. If hemoglobin is present in stool, it catalyzes the oxidation of the active compound in guaiac paper when a hydrogen peroxide developer is added. The resultant conjugated compound is blue.
The lower-sensitivity guaiac tests are commercially available as Hemoccult and Hemoccult II, and the higher-sensitivity guaiac test is Hemoccult Sensa, which has a lower threshold for detecting peroxidase. All are made by Beckman Coulter, Fullerton, CA.
Disadvantages of guaiac tests. Guaiac tests can give false-positive results by detecting pseudoperoxidases in fruits, vegetables, and nonhuman blood. In addition, they can give false-negative results in people who take excessive amounts of vitamin C, which can inhibit peroxidase activity. Therefore, patients need to follow certain dietary restrictions before testing.
Another disadvantage of guaiac tests is that they cannot differentiate between blood lost from the stomach, small bowel, or colon.
Moreover, the interpretation of guaiac tests is subject to observer variation.
Since testing involves dietary restrictions and obtaining two specimens each from three separate stools, patient compliance is poor.
Patient instructions. Patients undergoing guaiac-based fecal occult blood testing should not take nonsteroidal anti-inflammatory drugs (eg, > one adult aspirin per day) for 7 days before and during the stool collection period to avoid causing gastrointestinal bleeding. They should also not eat red meat or take vitamin C in excess of 250 mg/day for 3 days before testing and throughout the test period.
Two specimens are collected from three different stools with a wooden stick and are smeared onto the stool test card, which is then closed and returned to the physician’s office. The specimens must be collected before the stool comes into contact with the toilet water.
Efficacy of guaiac testing
Randomized, controlled trials of guaiac-based fecal occult blood testing have shown a decrease in colorectal cancer incidence.8–11
A Cochrane review12 involved more than 320,000 people in Denmark, Sweden, the United States, and the United Kingdom who underwent testing every year or every 2 years with Hemoccult or Hemoccult II. The primary analysis was by intention to treat, and it showed that participants allocated to screening had a 16% reduction in the relative risk of death from colorectal cancer, or 0.1 to 0.2 fewer colorectal cancer deaths per 1,000 patient-years. The secondary analysis was adjusted for whether the participants actually were screened; the risk reduction in death from colorectal cancer was 25% in participants who attended at least one round of screening.
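To put the Cochrane figures in more concrete terms, the back-of-the-envelope arithmetic below converts the absolute reduction of 0.1 to 0.2 deaths per 1,000 patient-years into an approximate number needed to screen; the 10-year screening horizon is an assumption chosen only for illustration.

```python
# Back-of-the-envelope arithmetic from the Cochrane figures quoted above.
# The 10-year screening horizon is an assumption chosen only for illustration.

for fewer_deaths_per_1000_patient_years in (0.1, 0.2):
    arr_per_person_year = fewer_deaths_per_1000_patient_years / 1000
    patient_years_per_death_averted = 1 / arr_per_person_year
    people_screened_10_years = patient_years_per_death_averted / 10
    print(f"{fewer_deaths_per_1000_patient_years} fewer deaths per 1,000 patient-years "
          f"is roughly {people_screened_10_years:,.0f} people screened for 10 years per death averted")
```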
FECAL IMMUNOCHEMICAL TESTS
Fecal immunochemical tests use monoclonal or polyclonal antibodies to human globin to detect human blood in stool.
Advantages of fecal immunochemical testing. The antibodies used do not cross-react with nonhuman globin or peroxidases from food sources. Therefore, these tests avoid the dietary and medication restrictions required for guaiac tests. In addition, the stool collection method is simpler, and only one stool specimen is needed instead of three. For these reasons, patient compliance may be better than with guaiac tests.
Additionally, because human globin does not survive passage through the upper gastrointestinal tract, fecal immunochemical testing is specific for bleeding from the colon and rectum.
Immunochemical tests can be read either visually or by machine. Automation allows the threshold for detection of globin to be modified to balance the test’s sensitivity and specificity for the population being served. Most studies have used a threshold of 75 ng/mL, but other studies have assessed thresholds as low as 50 ng/mL and as high as 100 ng/mL. A lower threshold of detection has been shown to increase the sensitivity and yet retain a high specificity.
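How the adjustable threshold shifts sensitivity and specificity can be illustrated with toy numbers. In the sketch below, the globin readings and disease labels are entirely hypothetical; only the cutoffs (50, 75, and 100 ng/mL) come from the studies discussed in this article, and the size of the specificity change in practice depends on the actual distribution of readings.

```python
# Toy illustration of threshold tuning for an automated (quantitative) fecal
# immunochemical test. The readings and labels are hypothetical; only the cutoffs
# correspond to values discussed in the text.

samples = [  # (fecal globin in ng/mL, advanced neoplasia present?)
    (5, False), (12, False), (18, False), (25, False), (31, False),
    (38, False), (44, False), (52, False), (60, False), (90, False),
    (48, True), (60, True), (85, True), (140, True), (220, True), (300, True),
]

for cutoff in (50, 75, 100):
    tp = sum(1 for globin, disease in samples if disease and globin >= cutoff)
    fn = sum(1 for globin, disease in samples if disease and globin < cutoff)
    fp = sum(1 for globin, disease in samples if not disease and globin >= cutoff)
    tn = sum(1 for globin, disease in samples if not disease and globin < cutoff)
    print(f"cutoff {cutoff} ng/mL: sensitivity {tp / (tp + fn):.0%}, specificity {tn / (tn + fp):.0%}")
```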
The immunochemical tests are slightly more expensive than the guaiac tests. However, they are covered by insurance, including Medicare.
Disadvantages of fecal immunochemical testing. A number of tests are available; they use different antibodies and therefore differ in their sensitivity. While most screening studies used automated interpretation of the tests, some studies used visual interpretation (but trained technicians were used to decrease potential interobserver variability). Therefore, the characteristics of fecal immunochemical tests are particular to the specific test kit used.
The antibodies and their epitopes used in some fecal immunochemical tests may be unstable, so that these tests may perform poorly without refrigeration in warm climates or if there are postal delays.
Patient instructions. In some of the tests, a special wand is inserted into six different places in the stool (before the stool is in contact with toilet bowl water) and then placed in the plastic container provided. Other tests use a brush for sample collection. The container may be sent to the laboratory for automated interpretation, or, if the interpretation is performed manually, the container is shaken and a few drops of the liquid in the specimen are added to the test cassette. The interpretation is made after 5 to 10 minutes.
GUAIAC VS IMMUNOCHEMICAL TESTING IN SCREENING
Allison et al13 performed one of the earliest studies to compare the different types of fecal occult blood tests as screening tests for colorectal cancer. More than 7,500 participants in the United States who were due for screening were advised to follow the dietary restrictions for guaiac tests mentioned above for 3 to 4 days before screening and were given three specially made test cards, each of which contained three tests: Hemoccult II, Hemoccult Sensa, and the fecal immunochemical test HemeSelect (SmithKline Diagnostics, San Jose, CA), which was visually read. The authors evaluated the performance of the tests by identifying screened patients found to have colorectal cancer or an adenoma larger than 10 mm in the 2 years after screening.
Sensitivities for detecting colorectal cancer:
- 37% with Hemoccult II
- 69% with HemeSelect
- 79% with Hemoccult Sensa.
Specificities:
- 98% with Hemoccult II
- 94% with HemeSelect
- 87% with Hemoccult Sensa.
Smith et al14 evaluated the performance of two tests in a mix of a screening population and a high-risk group. More than 2,300 Australians sampled two consecutive stools for an immunochemical test, InSure (Enterix, North Ryde, NSW, Australia), and three consecutive stools for Hemoccult Sensa. They were advised to adhere to the dietary and medication restrictions listed in Beckman Coulter’s instructions for the Hemoccult Sensa test. Both tests were read visually. The sensitivity and specificity were calculated from results of colonoscopy performed in participants with a positive stool test.
InSure had a higher sensitivity than Hemoccult Sensa for colorectal cancer (87.5% vs 54.2%) and for advanced adenomas (42.6% vs 23.0%). The false-positive rate for any neoplasia was slightly higher with InSure than with Hemoccult Sensa (3.4% vs 2.5%).
Guittet et al,15 in a French study in more than 10,000 people at average risk, compared a low-sensitivity guaiac test (Hemoccult II) and an immunochemical test, Immudia/RPHA (Fujirebio, Tokyo, Japan). No dietary restrictions were required. Three stool samples were taken for the Hemoccult II and three for the immunochemical test, which was read by machine with three different thresholds for detection of globin: 20, 50, and 75 ng/mL. Positive results were followed up with colonoscopy.
The immunochemical test had a higher sensitivity for both colorectal cancer and advanced adenomas, regardless of the cutoff values of globin. At a cutoff value of 75 ng/mL, the positivity rate was similar to that of the low-sensitivity guaiac test (2.4%), and the immunochemical test offered a gain in sensitivity of 90% and a decrease in the false-positive rate of 33% for advanced neoplasia.
van Rossum et al16 performed a randomized comparison of 10,993 tests of Hemoccult II and the fecal immunochemical test OC-Sensor (Eiken Chemical Co., Ltd, Tokyo, Japan) in a screening population in the Netherlands. The participants were not required to follow dietary or medication restrictions. They were asked to send in cards with two samples each from three consecutive bowel movements for the Hemoccult II test and a single sample for the OC-Sensor test, for which interpretation was automated and a cutoff of 100 ng/mL or higher was considered positive. All participants who had a positive Hemoccult II test or a positive OC-Sensor test at a globin cutoff of 50 ng/mL were advised to undergo colonoscopy.
The study found a screening participation rate about 13 percentage points higher with the immunochemical test than with the guaiac-based test, and the positivity rate was about 3 percentage points higher in the immunochemical testing group (5.5%).16 Cancer was found in 11 patients with the guaiac test and in 24 patients with the immunochemical test; advanced adenomas were found in 48 patients with the guaiac test and in 121 patients with the immunochemical test. The guaiac test was more specific, but the participation rate and the detection rates for advanced adenomas and cancer were significantly higher with immunochemical testing.
Park et al17 performed a study in nearly 800 patients undergoing screening colonoscopy in South Korea. Three stool samples were collected for a low-sensitivity guaiac test (Hemoccult II) and for a fecal immunochemical test (OC-Sensor) for detecting cancer and advanced neoplasms. No dietary changes were required. At all globin thresholds between 50 and 150 ng/mL, the immunochemical test was more sensitive than the guaiac-based test, with a similar specificity.
Hundt et al18 obtained a single stool specimen from each of 1,319 German patients before they underwent scheduled screening colonoscopy. Each specimen was tested with six automated immunochemical tests with globin detection thresholds set at 10 to 50 ng/mL. In addition, participants prepared a single Hemoccult card from the same stool sample at home. They were not told to follow any dietary restrictions.
For Hemoccult, the sensitivity for advanced adenoma (1 cm or more in diameter, villous changes, or high-grade dysplasia) was 9%, and the specificity was 96%. For the immunochemical tests, the sensitivity for advanced adenoma varied from 25% to 72%, and the specificity from 70% to 97%.
The reason for the variation in performance among different fecal immunochemical tests is not entirely clear. In some of these tests, the sensitivity can be adjusted when automated interpretation is used, and differences in the threshold used for globin detection partially explain the variation. Differences in collection methods also affect the results.
Itoh et al19 reported the results of a screening study done at a large Japanese corporation using a fecal immunochemical test, OC-Hemodia (Eiken Chemical Co., Ltd, Tokyo, Japan). A small sample of a single stool was placed in buffer and read by machine. At a cutoff of 200 ng/mL, the sensitivity was 77.5% and the specificity was 98.9%; at a cutoff of 50 ng/mL, the sensitivity was 86.5% and the specificity was 94.9%. In this study, positive tests were followed by colonoscopy, but false-negative tests were identified from insurance claims.
Cole et al20 assessed rates of participation in colorectal cancer screening in a study in Australia. Participants were randomized to receive by mail either Hemoccult Sensa or one of two fecal immunochemical tests, FlexSure OBT (Beckman Coulter, Fullerton, CA) or InSure. The Hemoccult Sensa group was instructed to follow dietary and medication restrictions during stool collection, while the immunochemical test groups were not. Three stool specimens were required for the Hemoccult Sensa and FlexSure OBT tests, and two were required for InSure.
The participation rate was 23.4% in the Hemoccult Sensa group, 30.5% in the FlexSure OBT group, and 39.6% in the InSure group (P < .001).
Hol et al21 found a participation rate of 50% in a group asked to undergo guaiac testing requiring three samples without dietary restriction and 62% in a group asked to undergo fecal immunochemical testing (OC-Sensor) requiring a single stool sample without restrictions. The higher participation rates consistently seen with fecal immunochemical testing are an important advantage over guaiac testing.
CLEVELAND CLINIC SWITCHES TO FECAL IMMUNOCHEMICAL TESTING FOR COLORECTAL CANCER SCREENING
Cleveland Clinic recently switched from Hemoccult Sensa to fecal immunochemical testing for colorectal cancer screening. The data on fecal occult blood tests show that the sensitivities of Hemoccult Sensa and the immunochemical tests are higher than those of Hemoccult and Hemoccult II for detecting colorectal cancer and advanced adenomas, with similar specificity. In most screening studies, fecal immunochemical tests have shown better sensitivity for advanced adenomas and colorectal cancer than guaiac-based tests, as well as better test adherence, likely because they require no dietary or medication restrictions and fewer stool samples. Increased compliance should improve participation in colorectal cancer screening and favorably affect colorectal cancer mortality rates.
- Edwards BK, Ward E, Kohler BA, et al. Annual report to the nation on the status of cancer, 1975–2006, featuring colorectal cancer trends and impact of interventions (risk factors, screening, and treatment) to reduce future rates. Cancer 2010; 116:544–573.
- Centers for Disease Control and Prevention (CDC). Vital signs: colorectal cancer screening among adults aged 50–75 years—United States, 2008. MMWR Morb Mortal Wkly Rep 2010; 59:808–812.
- Mandel JS, Church TR, Bond JH, et al. The effect of fecal occult-blood screening on the incidence of colorectal cancer. N Engl J Med 2000; 343:1603–1607.
- Levin B, Lieberman DA, McFarland B, et al; American Cancer Society Colorectal Cancer Advisory Group; US Multi-Society Task Force; American College of Radiology Colon Cancer Committee. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. Gastroenterology 2008; 134:1570–1595.
- US Preventive Services Task Force. Screening for colorectal cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2008; 149:627–637.
- Rex DK, Johnson DA, Anderson JC, Schoenfeld PS, Burke CA, Inadomi JM; American College of Gastroenterology. American College of Gastroenterology guidelines for colorectal cancer screening 2009 [corrected]. Am J Gastroenterol 2009; 104:739–750.
- Vu HT, Burke CA. Advances in colorectal cancer screening. Curr Gastroenterol Rep 2009; 11:406–412.
- Hardcastle JD, Chamberlain JO, Robinson MH, et al. Randomised controlled trial of faecal-occult-blood screening for colorectal cancer. Lancet 1996; 348:1472–1477.
- Kronborg O, Fenger C, Olsen J, Jørgensen OD, Søndergaard O. Randomised study of screening for colorectal cancer with faecal-occult-blood test. Lancet 1996; 348:1467–1471.
- Mandel JS, Bond JH, Church TR, et al. Reducing mortality from colorectal cancer by screening for fecal occult blood. Minnesota Colon Cancer Control Study. N Engl J Med 1993; 328:1365–1371.
- Kewenter J, Brevinge H, Engarås B, Haglind E, Ahrén C. Results of screening, rescreening, and follow-up in a prospective randomized study for detection of colorectal cancer by fecal occult blood testing. Results for 68,308 subjects. Scand J Gastroenterol 1994; 29:468–473.
- Hewitson P, Glasziou P, Irwig L, Towler B, Watson E. Screening for colorectal cancer using the faecal occult blood test, Hemoccult. Cochrane Database Syst Rev 2007; CD001216.
- Allison JE, Tekawa IS, Ransom LJ, Adrain AL. A comparison of fecal occult-blood tests for colorectal-cancer screening. N Engl J Med 1996; 334:155–159.
- Smith A, Young GP, Cole SR, Bampton P. Comparison of a brush-sampling fecal immunochemical test for hemoglobin with a sensitive guaiac-based fecal occult blood test in detection of colorectal neoplasia. Cancer 2006; 107:2152–2159.
- Guittet L, Bouvier V, Mariotte N, et al. Comparison of a guaiac based and an immunochemical faecal occult blood test in screening for colorectal cancer in a general average risk population. Gut 2007; 56:210–214.
- van Rossum LG, van Rijn AF, Laheij RJ, et al. Random comparison of guaiac and immunochemical fecal occult blood tests for colorectal cancer in a screening population. Gastroenterology 2008; 135:82–90.
- Park DI, Ryu S, Kim YH, et al. Comparison of guaiac-based and quantitative immunochemical fecal occult blood testing in a population at average risk undergoing colorectal cancer screening. Am J Gastroenterol 2010; 105:2017–2025.
- Hundt S, Haug U, Brenner H. Comparative evaluation of immunochemical fecal occult blood tests for colorectal adenoma detection. Ann Intern Med 2009; 150:162–169.
- Itoh M, Takahashi K, Nishida H, Sakagami K, Okubo T. Estimation of the optimal cut off point in a new immunological faecal occult blood test in a corporate colorectal cancer screening programme. J Med Screen 1996; 3:66–71.
- Cole SR, Young GP, Esterman A, Cadd B, Morcom J. A randomised trial of the impact of new faecal haemoglobin test technologies on population participation in screening for colorectal cancer. J Med Screen 2003; 10:117–122.
- Hol L, Wilschut JA, van Ballegooijen M, et al. Screening for colorectal cancer: random comparison of guaiac and immunochemical faecal occult blood testing at different cut-off levels. Br J Cancer 2009; 100:1103–1110.
The reason for the variation in performance of different fecal immunochemical tests is not clear. In some of these tests, the sensitivity can be adjusted when automated interpretation is used. It has been shown that different thresholds for the detection of globin partially explain this. Differences in collection methods also affect the result.
Itoh et al19 reported the results of a screening study done at a large Japanese corporation using a fecal immunochemical test, OC-Hemodia (Eiken Chemical Co., LTD, Tokyo, Japan). A small sample of a single stool was placed in buffer and read by machine. At a cutoff of 200 μg/mL, the sensitivity was 77.5% and the specificity was 98.9%. At a cutoff of 50 ng/mL, the sensitivity was 86.5% and the specificity was 94.9%. In this study, positive tests were followed by colonoscopy, but false-negative tests were identified from insurance claims.
Cole et al20 assessed the rates of participation in colorectal cancer screening in a study in Australia. Participants were randomized and received by mail either Hemoccult Sensa or one of two fecal immunochemical tests, Flex-Sure OBT (Beckman Coulter, Fullerton, CA) or InSure. The Hemoccult Sensa group was instructed to follow dietary and medication restrictions during stool collection, while the immunochemical test groups were not. Three stool specimens were required for the Hemoccult Sensa and FlexSure tests, while two stools were required for InSure.
The participation rate was 23.4% in the Hemoccult Sensa group, 30.5% in the Flex-Sure OBT group, and 39.6% in the InSure group (P < .001).
Hol et al21 found that the participation rate was 50% in a group asked to undergo guaiac testing requiring three samples without diet restriction and 62% in a group asked to undergo fecal immunochemical testing (OC-Sensor) requiring a single stool sample without restrictions. Higher participation rates are seen with fecal immunochemical testing than with guaiac testing and are an advantage of immunochemical testing.
CLEVELAND CLINIC SWITCHES TO FECAL IMMUNOCHEMICAL TESTING FOR COLORECTAL CANCER SCREENING
Cleveland Clinic recently switched to fecal immunochemical testing in place of Hemoccult Sensa for colorectal cancer screening. The data on fecal occult blood tests show that the sensitivities of Hemoccult Sensa and the immunochemical tests are higher than those of Hemoccult and Hemoccult II for the detection of colorectal cancer and advanced adenomas, with similar specificity. Fecal immunochemical tests have an advantage over guaiac-based tests in most screening studies by showing a superior sensitivity for advanced adenomas and colorectal cancer, as well as an increase in test adherence, likely because of the lack of dietary and medication restrictions and the lower number of stool samples required. Increased compliance should improve participation in colorectal cancer screening and positively affect colorectal cancer mortality rates.
New fecal occult blood tests hold promise for improving our detection of colorectal cancer and for lowering mortality rates. This is good news, because despite the proven benefit of being screened for colorectal cancer,1 only an average of 62% of eligible adults are screened,2 and colorectal cancer remains the third leading cause of cancer deaths in the United States.
Colonoscopy is often considered the gold-standard screening test for colorectal cancer. However, many patients do not undergo screening colonoscopy because it is invasive and uncomfortable, bowel preparation poses a challenge, the procedure has risks, and it is costly. Members of minority groups, people of lower socioeconomic status, and those who lack health insurance are less likely to undergo screening.
While fecal occult blood tests are cheaper and less invasive than colonoscopy, they do not themselves prevent colorectal cancer by allowing removal of adenomatous polyps. Still, randomized controlled trials have shown that fecal occult blood testing reduces the rate of death from colorectal cancer,3 and it has been shown to be cost-effective.
The challenge is that all guaiac-based tests (gFOBTs), even the newest one, require strict dietary and medication restrictions to be accurate; nonadherence to these restrictions can produce false-positive results, and the burden of collecting multiple stool specimens often leads to failure to complete the test.
The newer tests—one guaiac-based test and several fecal immunochemical tests (FITs)—are more sensitive, and the FITs are more convenient for patients to use than the older guaiac-based tests, advantages that, we hope, will increase the rates of compliance with testing.
GUAIAC-BASED TESTS
Guaiac tests detect the peroxidase activity of hemoglobin. If hemoglobin is present in stool, it catalyzes the oxidation of the active compound in guaiac paper when a hydrogen peroxide developer is added. The resultant conjugated compound is blue.
The lower-sensitivity guaiac tests are commercially available as Hemoccult and Hemoccult II, and the higher-sensitivity guaiac test is Hemoccult Sensa, which has a lower threshold for detecting peroxidase. All are made by Beckman Coulter, Fullerton, CA.
Disadvantages of guaiac tests. Guaiac tests can give false-positive results by detecting pseudoperoxidases in fruits, vegetables, and nonhuman blood. In addition, they can give false-negative results in people who take excessive amounts of vitamin C, which can inhibit peroxidase activity. Therefore, patients need to follow certain dietary restrictions before testing.
Another disadvantage of guaiac tests is that they cannot distinguish whether the blood was lost from the stomach, the small bowel, or the colon.
Moreover, the interpretation of guaiac tests is subject to observer variation.
Since testing involves dietary restrictions and obtaining two specimens each from three separate stools, patient compliance is poor.
Patient instructions. Patients undergoing guaiac-based fecal occult blood testing should not take nonsteroidal anti-inflammatory drugs (eg, more than one adult aspirin per day) for 7 days before and during the stool collection period, because drug-induced gastrointestinal bleeding can cause false-positive results. They should also not eat red meat or take more than 250 mg of vitamin C per day for 3 days before testing and throughout the test period.
Two specimens are collected from three different stools with a wooden stick and are smeared onto the stool test card, which is then closed and returned to the physician’s office. The specimens must be collected before the stool comes into contact with the toilet water.
Efficacy of guaiac testing
Randomized, controlled trials of guaiac-based fecal occult blood testing have shown a decrease in colorectal cancer incidence.8–11
A Cochrane review12 involved more than 320,000 people in Denmark, Sweden, the United States, and the United Kingdom who underwent testing every year or every 2 years with Hemoccult or Hemoccult II. The primary analysis was by intention to treat, and it showed that participants allocated to screening had a 16% reduction in the relative risk of death from colorectal cancer, or 0.1 to 0.2 fewer colorectal cancer deaths per 1,000 patient-years. The secondary analysis was adjusted for whether the participants actually were screened; the risk reduction in death from colorectal cancer was 25% in participants who attended at least one round of screening.
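To see how a relative reduction of this size maps onto absolute numbers, the short sketch below applies the 16% and 25% relative risk reductions reported above to an assumed baseline colorectal cancer mortality of 0.8 deaths per 1,000 person-years; the baseline figure is illustrative only and is not taken from the review.

```python
# Illustrative arithmetic only: the baseline rate below is an assumed figure,
# not a number reported in the Cochrane review.
baseline_crc_deaths_per_1000_py = 0.8  # assumed colorectal cancer deaths per 1,000 person-years

for rrr in (0.16, 0.25):  # relative risk reductions reported in the review
    fewer_deaths = baseline_crc_deaths_per_1000_py * rrr
    print(f"RRR {rrr:.0%}: roughly {fewer_deaths:.2f} fewer deaths per 1,000 person-years")
```

With this assumed baseline, the 16% relative reduction corresponds to about 0.13 fewer deaths per 1,000 person-years, in line with the absolute figures quoted above.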
FECAL IMMUNOCHEMICAL TESTS
Fecal immunochemical tests use monoclonal or polyclonal antibodies to human globin to detect human blood in stool.
Advantages of fecal immunochemical testing. The antibodies used do not cross-react with nonhuman globin or peroxidases from food sources. Therefore, these tests avoid the dietary and medication restrictions required for guaiac tests. In addition, the stool collection method is simpler, and only one stool specimen is needed instead of three. For these reasons, patient compliance may be better than with guaiac tests.
Additionally, because human globin does not survive passage through the upper gastrointestinal tract, fecal immunochemical testing is specific for bleeding from the colon and rectum.
Immunochemical tests can be read either visually or by machine. Automation allows the threshold for detection of globin to be modified to balance the test’s sensitivity and specificity for the population being served. Most studies have used a threshold of 75 ng/mL, but other studies have assessed thresholds as low as 50 ng/mL and as high as 100 ng/mL. A lower threshold of detection has been shown to increase the sensitivity and yet retain a high specificity.
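As a rough illustration of how the choice of cutoff trades sensitivity against specificity, the sketch below applies several globin cutoffs to a small set of invented quantitative readings; all values are hypothetical and are not drawn from any of the studies cited here.

```python
# Hypothetical quantitative FIT readings (globin, ng/mL) paired with colonoscopy findings.
# All numbers are invented for illustration only.
readings = [
    (420, True), (95, True), (60, True), (30, True),     # advanced neoplasia present
    (80, False), (55, False), (25, False), (15, False),  # no neoplasia
    (10, False), (5, False),
]

def sensitivity_specificity(cutoff_ng_ml):
    tp = sum(1 for value, diseased in readings if diseased and value >= cutoff_ng_ml)
    fn = sum(1 for value, diseased in readings if diseased and value < cutoff_ng_ml)
    tn = sum(1 for value, diseased in readings if not diseased and value < cutoff_ng_ml)
    fp = sum(1 for value, diseased in readings if not diseased and value >= cutoff_ng_ml)
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in (50, 75, 100):
    sens, spec = sensitivity_specificity(cutoff)
    print(f"cutoff {cutoff:>3} ng/mL: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Lowering the cutoff classifies more samples as positive, which raises sensitivity at some cost in specificity; in the screening studies cited here, specificity remained high at the lower cutoffs evaluated.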
The immunochemical tests are slightly more expensive than the guaiac tests. However, they are covered by insurance, including Medicare.
Disadvantages of fecal immunochemical testing. A number of tests are available; they use different antibodies and therefore differ in their sensitivity. While most screening studies used automated interpretation of the tests, some studies used visual interpretation (but trained technicians were used to decrease potential interobserver variability). Therefore, the characteristics of fecal immunochemical tests are particular to the specific test kit used.
The antibodies and their epitopes used in some fecal immunochemical tests may be unstable, so that these tests may perform poorly without refrigeration in warm climates or if there are postal delays.
Patient instructions. In some of the tests, a special wand is inserted into six different places in the stool (before the stool is in contact with toilet bowl water) and then placed in the plastic container provided. Other tests use a brush for sample collection. The container may be sent to the laboratory for automated interpretation, or, if the interpretation is performed manually, the container is shaken and a few drops of the liquid in the specimen are added to the test cassette. The interpretation is made after 5 to 10 minutes.
GUAIAC VS IMMUNOCHEMICAL TESTING IN SCREENING
Allison et al13 performed one of the earliest studies to compare the different types of fecal occult blood tests as screening tests for colorectal cancer. More than 7,500 participants in the United States who were due for screening were advised to follow the dietary restrictions for guaiac tests mentioned above for 3 to 4 days before screening and were given three specially made test cards, each of which contained three tests: Hemoccult II, Hemoccult Sensa, and the fecal immunochemical test HemeSelect (SmithKline Diagnostics, San Jose, CA), which was visually read. The authors evaluated the performance of the tests by identifying screened patients found to have colorectal cancer or an adenoma larger than 10 mm in the 2 years after screening.
Sensitivities for detecting colorectal cancer:
- 37% with Hemoccult II
- 69% with HemeSelect
- 79% with Hemoccult Sensa.
Specificities:
- 98% with Hemoccult II
- 94% with HemeSelect
- 87% with Hemoccult Sensa.
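Figures like these come from cross-tabulating screening results against the diagnoses made during follow-up. The sketch below builds that two-by-two table from hypothetical counts (not the actual counts from Allison et al) and derives sensitivity, specificity, and positive predictive value.

```python
# Hypothetical counts for one test; these are not the actual data from Allison et al.
true_positives = 30    # screen-positive, advanced neoplasia found on follow-up
false_negatives = 20   # screen-negative, advanced neoplasia found within 2 years
false_positives = 150  # screen-positive, no neoplasia found
true_negatives = 7300  # screen-negative, no neoplasia found

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
ppv = true_positives / (true_positives + false_positives)

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"positive predictive value {ppv:.0%}")
```

Because advanced neoplasia is uncommon in a screening population, even a highly specific test yields a modest positive predictive value, which is why positive stool tests are followed by colonoscopy.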
Smith et al14 evaluated the performance of two tests in a mix of a screening population and a high-risk group. More than 2,300 Australians sampled two consecutive stools for an immunochemical test, InSure (Enterix, North Ryde, NSW, Australia), and three consecutive stools for Hemoccult Sensa. They were advised to adhere to the dietary and medication restrictions listed in Beckman Coulter’s instructions for the Hemoccult Sensa test. Both tests were read visually. The sensitivity and specificity were calculated from results of colonoscopy performed in participants with a positive stool test.
InSure had a higher sensitivity than Hemoccult Sensa for colorectal cancer (87.5% vs 54.2%) and for advanced adenomas (42.6% vs 23.0%). The false-positive rate for any neoplasia was slightly higher with InSure than with Hemoccult Sensa (3.4% vs 2.5%).
Guittet et al,15 in a French study in more than 10,000 people at average risk, compared a low-sensitivity guaiac test (Hemoccult II) and an immunochemical test, Immudia/RPHA (Fujirebio, Tokyo, Japan). No dietary restrictions were required. Three stool samples were taken for the Hemoccult II and three for the immunochemical test, which was read by machine with three different thresholds for detection of globin: 20, 50, and 75 ng/mL. Positive results were followed up with colonoscopy.
The immunochemical test had a higher sensitivity for both colorectal cancer and advanced adenomas, regardless of the cutoff values of globin. At a cutoff value of 75 ng/mL, the positivity rate was similar to that of the low-sensitivity guaiac test (2.4%), and the immunochemical test offered a gain in sensitivity of 90% and a decrease in the false-positive rate of 33% for advanced neoplasia.
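Note that these are relative changes. The sketch below shows what a 90% relative gain in sensitivity and a 33% relative decrease in the false-positive rate would mean for an assumed starting point; the baseline numbers are illustrative and are not taken from the study.

```python
# Illustrative only: the baseline values below are assumed, not the actual
# figures from Guittet et al; they simply show what relative changes mean.
guaiac_sensitivity = 0.20           # assumed sensitivity for advanced neoplasia
guaiac_false_positive_rate = 0.024  # assumed false-positive rate

fit_sensitivity = guaiac_sensitivity * (1 + 0.90)                   # 90% relative gain
fit_false_positive_rate = guaiac_false_positive_rate * (1 - 0.33)   # 33% relative decrease

print(f"sensitivity: {guaiac_sensitivity:.0%} -> {fit_sensitivity:.0%}")
print(f"false-positive rate: {guaiac_false_positive_rate:.1%} -> {fit_false_positive_rate:.1%}")
```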
van Rossum et al16 performed a randomized comparison of more than 10,000 tests of Hemoccult II and the fecal immunochemical test OC-Sensor (Eiken Chemical Co., Ltd, Tokyo, Japan) in a screening population in the Netherlands. The participants were not required to follow dietary or medication restrictions. Those in the Hemoccult II group were asked to send in cards with two samples each from three consecutive bowel movements; those in the OC-Sensor group submitted a single sample, which was interpreted by machine, with values of 100 ng/mL or higher considered positive. All participants who had a positive Hemoccult II test or an OC-Sensor value of 50 ng/mL or higher were advised to undergo colonoscopy.
The rate of screening participation was 13 percentage points higher with the immunochemical test than with the guaiac-based test, and the positivity rate was 3 percentage points higher in the immunochemical testing group (5.5%).16 Cancer was found in 11 patients with the guaiac test and in 24 patients with the immunochemical test; advanced adenomas were found in 48 patients with the guaiac test and in 121 patients with the immunochemical test. The guaiac test was more specific, but the participation rate and the detection rates for advanced adenomas and cancer were significantly higher with immunochemical testing.
Park et al17 performed a study in nearly 800 patients undergoing screening colonoscopy in South Korea. Three stool samples were collected for a low-sensitivity guaiac test (Hemoccult II) and for a fecal immunochemical test (OC-Sensor) for detecting cancer and advanced neoplasms. No dietary changes were required. At all globin thresholds between 50 and 150 ng/mL, the immunochemical test was more sensitive than the guaiac-based test, with a similar specificity.
Hundt et al18 obtained a single stool specimen from each of 1,319 German patients before they underwent scheduled screening colonoscopy. Each specimen was tested with six automated immunochemical tests with globin detection thresholds set at 10 to 50 ng/mL. In addition, participants prepared a single Hemoccult card from the same stool sample at home. They were not told to follow any dietary restrictions.
For Hemoccult, the sensitivity for advanced adenoma (1 cm or more in diameter, villous changes, or high-grade dysplasia) was 9%, and the specificity was 96%. For the immunochemical tests, the sensitivity for advanced adenoma varied from 25% to 72%, and the specificity from 70% to 97%.
The reasons for the variation in performance among fecal immunochemical tests are not entirely clear. Differences in the globin detection threshold explain part of it; in tests with automated interpretation, the threshold, and therefore the sensitivity, can be adjusted. Differences in collection methods also affect the results.
Itoh et al19 reported the results of a screening study done at a large Japanese corporation using a fecal immunochemical test, OC-Hemodia (Eiken Chemical Co., Ltd, Tokyo, Japan). A small sample of a single stool was placed in buffer and read by machine. At a cutoff of 200 ng/mL, the sensitivity was 77.5% and the specificity was 98.9%; at a cutoff of 50 ng/mL, the sensitivity was 86.5% and the specificity was 94.9%. Positive tests were followed up with colonoscopy, while cancers missed by the test (false-negatives) were identified from insurance claims.
Cole et al20 assessed rates of participation in colorectal cancer screening in an Australian study. Participants were randomized to receive by mail either Hemoccult Sensa or one of two fecal immunochemical tests, FlexSure OBT (Beckman Coulter, Fullerton, CA) or InSure. The Hemoccult Sensa group was instructed to follow dietary and medication restrictions during stool collection, while the immunochemical test groups were not. Three stool specimens were required for the Hemoccult Sensa and FlexSure OBT tests, while two were required for InSure.
The participation rate was 23.4% in the Hemoccult Sensa group, 30.5% in the FlexSure OBT group, and 39.6% in the InSure group (P < .001).
Hol et al21 found a participation rate of 50% in a group asked to undergo guaiac testing, which required three samples without dietary restriction, and 62% in a group asked to undergo fecal immunochemical testing (OC-Sensor), which required a single stool sample without restrictions. The consistently higher participation rates seen with fecal immunochemical testing are a clear advantage over guaiac testing.
CLEVELAND CLINIC SWITCHES TO FECAL IMMUNOCHEMICAL TESTING FOR COLORECTAL CANCER SCREENING
Cleveland Clinic recently switched from Hemoccult Sensa to fecal immunochemical testing for colorectal cancer screening. The data on fecal occult blood tests show that Hemoccult Sensa and the immunochemical tests are more sensitive than Hemoccult and Hemoccult II for detecting colorectal cancer and advanced adenomas, with similar specificity. In most screening studies, fecal immunochemical tests have shown superior sensitivity for advanced adenomas and colorectal cancer compared with guaiac-based tests, as well as better adherence, likely because they require no dietary or medication restrictions and fewer stool samples. Better compliance should increase participation in colorectal cancer screening and ultimately reduce colorectal cancer mortality rates.
- Edwards BK, Ward E, Kohler BA, et al. Annual report to the nation on the status of cancer, 1975–2006, featuring colorectal cancer trends and impact of interventions (risk factors, screening, and treatment) to reduce future rates. Cancer 2010; 116:544–573.
- Centers for Disease Control and Prevention (CDC). Vital signs: colorectal cancer screening among adults aged 50–75 years—United States, 2008. MMWR Morb Mortal Wkly Rep 2010; 59:808–812.
- Mandel JS, Church TR, Bond JH, et al. The effect of fecal occult-blood screening on the incidence of colorectal cancer. N Engl J Med 2000; 343:1603–1607.
- Levin B, Lieberman DA, McFarland B, et al; American Cancer Society Colorectal Cancer Advisory Group; US Multi-Society Task Force; American College of Radiology Colon Cancer Committee. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. Gastroenterology 2008; 134:1570–1595.
- US Preventive Services Task Force. Screening for colorectal cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2008; 149:627–637.
- Rex DK, Johnson DA, Anderson JC, Schoenfeld PS, Burke CA, Inadomi JM; American College of Gastroenterology. American College of Gastroenterology guidelines for colorectal cancer screening 2009 [corrected]. Am J Gastroenterol 2009; 104:739–750.
- Vu HT, Burke CA. Advances in colorectal cancer screening. Curr Gastroenterol Rep 2009; 11:406–412.
- Hardcastle JD, Chamberlain JO, Robinson MH, et al. Randomised controlled trial of faecal-occult-blood screening for colorectal cancer. Lancet 1996; 348:1472–1477.
- Kronborg O, Fenger C, Olsen J, Jørgensen OD, Søndergaard O. Randomised study of screening for colorectal cancer with faecal-occult-blood test. Lancet 1996; 348:1467–1471.
- Mandel JS, Bond JH, Church TR, et al. Reducing mortality from colorectal cancer by screening for fecal occult blood. Minnesota Colon Cancer Control Study. N Engl J Med 1993; 328:1365–1371.
- Kewenter J, Brevinge H, Engarås B, Haglind E, Ahrén C. Results of screening, rescreening, and follow-up in a prospective randomized study for detection of colorectal cancer by fecal occult blood testing. Results for 68,308 subjects. Scand J Gastroenterol 1994; 29:468–473.
- Hewitson P, Glasziou P, Irwig L, Towler B, Watson E. Screening for colorectal cancer using the faecal occult blood test, Hemoccult. Cochrane Database Syst Rev 2007;CD001216.
- Allison JE, Tekawa IS, Ransom LJ, Adrain AL. A comparison of fecal occult-blood tests for colorectal-cancer screening. N Engl J Med 1996; 334:155–159.
- Smith A, Young GP, Cole SR, Bampton P. Comparison of a brush-sampling fecal immunochemical test for hemoglobin with a sensitive guaiac-based fecal occult blood test in detection of colorectal neoplasia. Cancer 2006; 107:2152–2159.
- Guittet L, Bouvier V, Mariotte N, et al. Comparison of a guaiac based and an immunochemical faecal occult blood test in screening for colorectal cancer in a general average risk population. Gut 2007; 56:210–214.
- van Rossum LG, van Rijn AF, Laheij RJ, et al. Random comparison of guaiac and immunochemical fecal occult blood tests for colorectal cancer in a screening population. Gastroenterology 2008; 135:82–90.
- Park DI, Ryu S, Kim YH, et al. Comparison of guaiac-based and quantitative immunochemical fecal occult blood testing in a population at average risk undergoing colorectal cancer screening. Am J Gastroenterol 2010; 105:2017–2025.
- Hundt S, Haug U, Brenner H. Comparative evaluation of immunochemical fecal occult blood tests for colorectal adenoma detection. Ann Intern Med 2009; 150:162–169.
- Itoh M, Takahashi K, Nishida H, Sakagami K, Okubo T. Estimation of the optimal cut off point in a new immunological faecal occult blood test in a corporate colorectal cancer screening programme. J Med Screen 1996; 3:66–71.
- Cole SR, Young GP, Esterman A, Cadd B, Morcom J. A randomised trial of the impact of new faecal haemoglobin test technologies on population participation in screening for colorectal cancer. J Med Screen 2003; 10:117–122.
- Hol L, Wilschut JA, van Ballegooijen M, et al. Screening for colorectal cancer: random comparison of guaiac and immunochemical faecal occult blood testing at different cut-off levels. Br J Cancer 2009; 100:1103–1110.
KEY POINTS
- Hemoccult Sensa and several fecal immunochemical tests are more sensitive than Hemoccult and Hemoccult II for detecting colorectal cancer and advanced adenomas, with similar specificity.
- In most screening studies, fecal immunochemical tests have been more sensitive than guaiac-based tests. In addition, rates of adherence were higher, likely because dietary and medication restrictions are not needed and fewer stool samples are required.
- Better compliance should improve participation in colorectal cancer screening and reduce colorectal cancer mortality rates.
Cardiovascular implantable electronic device infection: A complication of medical progress
The term cardiovascular implantable electronic device (CIED) includes both permanent pacemakers and implantable cardioverter-defibrillators. These devices are being implanted in more people every year.1 They have also become increasingly sophisticated, with newer devices capable of both pacing and cardioversion-defibrillation functions.2 Patients receiving these devices are also increasingly older and have more comorbid conditions.3,4 As more CIEDs are placed in older and sicker patients, infections of these devices can be expected to be encountered with increasing frequency.
In this issue of the Cleveland Clinic Journal of Medicine, Dababneh and Sohail5 review CIED infections and provide a stepwise approach to their diagnosis and treatment.
HOW THE DEVICES BECOME INFECTED
CIEDs can become infected during implantation, in which case the infection presents early on, usually with pocket manifestations, or by secondary hematogenous seeding, in which case the infection generally presents with endovascular manifestations. Dababneh and Sohail have elegantly outlined the risk factors that predispose to infection of these devices.
If there are no early complications, patients generally do well with these devices. However, many patients do fine with their first device but develop a pocket infection when the pulse generator is changed because of battery depletion or other reasons. When patients with a CIED develop bacteremia as a complication of a vascular catheter infection or other infection, particularly with Staphylococcus aureus, they are at increased risk of having the intravascular portion of their device seeded.
PATIENTS MAY NOT APPEAR VERY ILL AT PRESENTATION
Dababneh and Sohail divide the clinical presentations of CIED infection into two broad categories: pocket infection and endovascular infection with an intact pocket. This is a useful categorization, as it provides a clue to pathogenesis.
As the authors point out, most patients with CIED infection present first to their primary care physician when they develop symptoms. An understanding of this infection by primary care physicians will allow for early recognition and more timely treatment, thus avoiding unnecessary complications.
Patients with pocket infection may not appear ill, but this should not lead a clinician away from the diagnosis. A pocket hematoma is an important differential diagnosis in the early postoperative period after device implantation or pulse generator change, and it may be difficult to decide if pocket changes are from an uninfected hematoma or from an infection.
Patients with endovascular infection are more likely to have systemic symptoms such as fever, fatigue, and malaise. However, absence of systemic features does not necessarily exclude endovascular infection.
BLOOD CULTURES AND TEE ARE KEY DIAGNOSTIC TESTS
All patients with suspected CIED infection should have at least two sets of blood cultures checked, even if they appear to be reasonably well. If there is any suspicion of endovascular infection, echocardiography should be performed.
Transesophageal echocardiography (TEE) is far superior to transthoracic echocardiography (TTE) for detecting lead vegetations.6 TEE should be carefully performed whenever endovascular infection is suspected, including in all patients with positive blood cultures and in all those with systemic signs and symptoms.
Purulent drainage should be cultured, and when the device is removed, cultures of lead tips and pocket tissue should be done as well.
TREATMENT USUALLY REQUIRES COMPLETE DEVICE REMOVAL
A superficial infection in the early postoperative period may respond to antibiotic therapy alone. But in all other patients, the device must be removed to cure the infection. In referral centers, it is not unusual to see patients who have been referred after having been treated with antibiotics for weeks and sometimes months in the mistaken belief that the infection would be cured with antibiotics alone.
In some patients presenting with only pocket findings in the early postoperative period, it may indeed be difficult to tell whether there is pocket infection. In such patients, a hasty decision to remove the device is not necessary, but close monitoring is important until the presence or absence of infection becomes clear. Also, erosion of the device through the skin represents pocket infection, even if the patient appears otherwise healthy.
When removing the device, it is necessary to remove the generator and all leads to treat the infection effectively.
If patients are device-dependent, it is usually safe to place a new device with the new pulse generator pocket in a different location from the infected one a few days after the infected device is removed.
AREAS OF UNCERTAINTY AND CHALLENGE
Although there is no controversy about the need for complete removal of infected devices to effect a cure, the appropriate duration of antibiotic therapy after device removal is less clear. Dababneh and Sohail provide a useful algorithm to help with this decision. Patients usually need a new device to replace the infected one, and there is legitimate concern about undertreatment, since inadequate antibiotic therapy could allow the new device to become infected. When endovascular infection is suspected or documented, patients are probably best treated as they would be for infective endocarditis.
Difficulties arise when patients with a CIED develop bacteremia with no echocardiographic evidence of device infection. Finding the source of bacteremia is very important because a diagnosis of CIED infection indicates that the device has to be removed. When there is a clear alternative explanation for the bacteremia, the CIED does not have to be removed. The type of bacterium helps clinicians to gauge the likelihood of CIED infection and to decide on the appropriate course of action. These cases should always be managed in conjunction with an infectious disease specialist and a cardiac electrophysiologist.
Another concern is secondary seeding of an uninfected CIED caused by bacteremia from another source. This concern is particularly acute with S aureus bacteremia. When patients with a CIED and S aureus bacteremia have been studied, endovascular CIED infection was documented in about half, although only a few had evidence of pocket inflammation.7,8 This suggests that the devices were seeded via the endovascular route.
Medical procedures such as dialysis and total parenteral nutrition require frequent intravascular access—often facilitated by leaving an indwelling vascular catheter in place. Frequent entry into the intravascular compartment puts patients at substantial risk of bloodstream infection, and in patients with a CIED this can be complicated by device infection. In patients with a CIED and an indwelling vascular catheter who develop bacteremia, determining the source of the bacteremia is particularly challenging, as is the treatment. Thus, preventing endovascular infection in such patients is extremely desirable, but there are no easy solutions.
PLACING CIED INFECTIONS IN PERSPECTIVE
The vast majority of patients with a CIED never develop a device infection. Those unfortunate enough to have a CIED infection have little choice other than to have the device removed, but those diagnosed early and treated appropriately generally do well. The development of CIEDs has been an important advance in the practice of cardiac electrophysiology. An appropriate understanding of CIED infection and its treatment will help optimize the diagnosis and management of this complication when it does occur.
- Zhan C, Baine WB, Sedrakyan A, Steiner C. Cardiac device implantation in the United States from 1997 through 2004: a population-based analysis. J Gen Intern Med 2008; 23(suppl 1):13–19.
- Hayes DL, Furman S. Cardiac pacing: how it started, where we are, where we are going. J Cardiovasc Electrophysiol 2004; 15:619–627.
- Lin G, Meverden RA, Hodge DO, Uslan DZ, Hayes DL, Brady PA. Age and gender trends in implantable cardioverter defibrillator utilization: a population based study. J Interv Card Electrophysiol 2008; 22:65–70.
- Uslan DZ, Tleyjeh IM, Baddour LM, et al. Temporal trends in permanent pacemaker implantation: a population-based study. Am Heart J 2008; 155:896–903.
- Dababneh SR, Sohail MR. Cardiovascular implantable electronic device infection: a stepwise approach to diagnosis and management. Cleve Clin J Med 2011; 78:529–537.
- Victor F, De Place C, Camus C, et al. Pacemaker lead infection: echocardiographic features, management, and outcome. Heart 1999; 81:82–97.
- Chamis AL, Peterson GE, Cabell CH, et al. Staphylococcus aureus bacteremia in patients with permanent pacemakers or implantable cardioverter-defibrillators. Circulation 2001; 104:1029–1033.
- Uslan DZ, Sohail MR, St Sauver JL, et al. Permanent pacemaker and implantable cardioverter defibrillator infection: a population-based study. Arch Intern Med 2007; 167:669–675.
The term cardiovascular implantable electronic device (CIED) includes both permanent pacemakers and implantable cardioverter-defibrillators. These devices are being implanted in more people every year.1 They have also become increasingly sophisticated, with newer devices capable of both pacing and cardioversion-defibrillation functions.2 Patients receiving these devices are also increasingly older and have more comorbid conditions.3,4 As more CIEDs are placed in older and sicker patients, infections of these devices can be expected to be encountered with increasing frequency.
In this issue of the Cleveland Clinic Journal of Medicine, Dababneh and Sohail5 review CIED infections and provide a stepwise approach to their diagnosis and treatment.
HOW THE DEVICES BECOME INFECTED
CIEDs can become infected during implantation, in which case the infection presents early on, usually with pocket manifestations, or by secondary hematogenous seeding, in which case the infection generally presents with endovascular manifestations. Dababneh and Sohail have elegantly outlined the risk factors that predispose to infection of these devices.
If there are no early complications, patients generally do well with these devices. However, many patients do fine with their first device but develop a pocket infection when the pulse generator is changed because of battery depletion or other reasons. When patients with a CIED develop bacteremia as a complication of a vascular catheter infection or other infection, particularly with Staphylococcus aureus, they are at increased risk of having the intravascular portion of their device seeded.
PATIENTS MAY NOT APPEAR VERY ILL AT PRESENTATION
Dababneh and Sohail divide the clinical presentations of CIED infection into two broad categories: pocket infection and endovascular infection with an intact pocket. This is a useful categorization, as it provides a clue to pathogenesis.
As the authors point out, most patients with CIED infection present first to their primary care physician when they develop symptoms. An understanding of this infection by primary care physicians will allow for early recognition and more timely treatment, thus avoiding unnecessary complications.
Patients with pocket infection may not appear ill, but this should not lead a clinician away from the diagnosis. A pocket hematoma is an important differential diagnosis in the early postoperative period after device implantation or pulse generator change, and it may be difficult to decide if pocket changes are from an uninfected hematoma or from an infection.
Patients with endovascular infection are more likely to have systemic symptoms such as fever, fatigue, and malaise. However, absence of systemic features does not necessarily exclude endovascular infection.
BLOOD CULTURES AND TEE ARE KEY DIAGNOSTIC TESTS
All patients with suspected CIED infection should have at least two sets of blood cultures checked, even if they appear to be reasonably well. If there is any suspicion of endovascular infection, echocardiography should be performed.
Transesophageal echocardiography (TEE) is far superior to transthoracic echocardiography (TTE) for detecting lead vegetations.6 TEE should be carefully performed whenever endovascular infection is suspected, including all patients with positive blood cultures and all patients with systemic signs and symptoms.
Purulent drainage should be cultured, and when the device is removed, cultures of lead tips and pocket tissue should be done as well.
TREATMENT USUALLY REQUIRES COMPLETE DEVICE REMOVAL
A superficial infection in the early postoperative period may respond to antibiotic therapy alone. But in all other patients, the device must be removed to cure the infection. In referral centers, it is not unusual to see patients who have been referred after having been treated with antibiotics for weeks and sometimes months in the mistaken belief that the infection would be cured with antibiotics alone.
In some patients presenting with only pocket findings in the early postoperative period, it may be difficult indeed to tell if there is pocket infection. In such patients, it is not necessary to make a hasty decision to remove the device, but it is important to monitor them closely until the presence or absence of infection becomes clear. Also, erosion of the device through the skin represents pocket infection even if the patient appears otherwise healthy.
When removing the device, it is necessary to remove the generator and all leads to treat the infection effectively.
If patients are device-dependent, it is usually safe to place a new device with the new pulse generator pocket in a different location from the infected one a few days after the infected device is removed.
AREAS OF UNCERTAINTY AND CHALLENGE
Although there is no controversy about the need for complete removal of infected devices in order to effect a cure, the appropriate duration of antibiotic therapy after device removal is less clear. Dababneh and Sohail provide a useful algorithm to help with this decision. Patients usually need a new device to replace the infected one and there is a legitimate reason for concern about undertreating, since one would not want the new device to become infected because of inadequate antibiotic therapy. When endovascular infection is suspected or documented, patients are probably best treated as they would be for infective endocarditis.
Difficulties arise when patients with a CIED develop bacteremia with no echocardiographic evidence of device infection. Finding the source of bacteremia is very important because a diagnosis of CIED infection indicates that the device has to be removed. When there is a clear alternative explanation for the bacteremia, the CIED does not have to be removed. The type of bacterium helps clinicians to gauge the likelihood of CIED infection and to decide on the appropriate course of action. These cases should always be managed in conjunction with an infectious disease specialist and a cardiac electrophysiologist.
Another concern is secondary seeding of an uninfected CIED caused by bacteremia from another source. This concern is particularly acute with S aureus bacteremia. When patients with a CIED and S aureus bacteremia have been studied, endovascular CIED infection was documented in about half, although only a few had evidence of pocket inflammation.7,8 This suggests that the devices were seeded via the endovascular route.
Medical procedures such as dialysis and total parenteral nutrition require frequent intravascular access—often facilitated by leaving an indwelling vascular catheter in place. Frequent entry into the intravascular compartment puts patients at substantial risk of bloodstream infection, and in patients with a CIED this can be complicated by device infection. In patients with a CIED and an indwelling vascular catheter who develop bacteremia, determining the source of the bacteremia is particularly challenging, as is the treatment. Thus, preventing endovascular infection in such patients is extremely desirable, but there are no easy solutions.
PLACING CIED INFECTIONS IN PERSPECTIVE
The vast majority of patients with a CIED never develop a device infection. Those unfortunate enough to have a CIED infection have little choice other than to have the device removed, but those diagnosed early and treated appropriately generally do well. The development of CIEDs has been an important advance in the practice of cardiac electrophysiology. An appropriate understanding of CIED infection and its treatment will help optimize the diagnosis and management of this complication when it does occur.
The term cardiovascular implantable electronic device (CIED) includes both permanent pacemakers and implantable cardioverter-defibrillators. These devices are being implanted in more people every year.1 They have also become increasingly sophisticated, with newer devices capable of both pacing and cardioversion-defibrillation functions.2 Patients receiving these devices are also increasingly older and have more comorbid conditions.3,4 As more CIEDs are placed in older and sicker patients, infections of these devices can be expected to be encountered with increasing frequency.
In this issue of the Cleveland Clinic Journal of Medicine, Dababneh and Sohail5 review CIED infections and provide a stepwise approach to their diagnosis and treatment.
HOW THE DEVICES BECOME INFECTED
CIEDs can become infected during implantation, in which case the infection presents early on, usually with pocket manifestations, or by secondary hematogenous seeding, in which case the infection generally presents with endovascular manifestations. Dababneh and Sohail have elegantly outlined the risk factors that predispose to infection of these devices.
If there are no early complications, patients generally do well with these devices. However, many patients do fine with their first device but develop a pocket infection when the pulse generator is changed because of battery depletion or other reasons. When patients with a CIED develop bacteremia as a complication of a vascular catheter infection or other infection, particularly with Staphylococcus aureus, they are at increased risk of having the intravascular portion of their device seeded.
PATIENTS MAY NOT APPEAR VERY ILL AT PRESENTATION
Dababneh and Sohail divide the clinical presentations of CIED infection into two broad categories: pocket infection and endovascular infection with an intact pocket. This is a useful categorization, as it provides a clue to pathogenesis.
As the authors point out, most patients with CIED infection present first to their primary care physician when they develop symptoms. An understanding of this infection by primary care physicians will allow for early recognition and more timely treatment, thus avoiding unnecessary complications.
Patients with pocket infection may not appear ill, but this should not lead a clinician away from the diagnosis. A pocket hematoma is an important differential diagnosis in the early postoperative period after device implantation or pulse generator change, and it may be difficult to decide if pocket changes are from an uninfected hematoma or from an infection.
Patients with endovascular infection are more likely to have systemic symptoms such as fever, fatigue, and malaise. However, absence of systemic features does not necessarily exclude endovascular infection.
BLOOD CULTURES AND TEE ARE KEY DIAGNOSTIC TESTS
All patients with suspected CIED infection should have at least two sets of blood cultures checked, even if they appear to be reasonably well. If there is any suspicion of endovascular infection, echocardiography should be performed.
Transesophageal echocardiography (TEE) is far superior to transthoracic echocardiography (TTE) for detecting lead vegetations.6 TEE should be carefully performed whenever endovascular infection is suspected, including all patients with positive blood cultures and all patients with systemic signs and symptoms.
Purulent drainage should be cultured, and when the device is removed, cultures of lead tips and pocket tissue should be done as well.
TREATMENT USUALLY REQUIRES COMPLETE DEVICE REMOVAL
A superficial infection in the early postoperative period may respond to antibiotic therapy alone. But in all other patients, the device must be removed to cure the infection. In referral centers, it is not unusual to see patients who have been referred after having been treated with antibiotics for weeks and sometimes months in the mistaken belief that the infection would be cured with antibiotics alone.
In some patients presenting with only pocket findings in the early postoperative period, it may be difficult indeed to tell if there is pocket infection. In such patients, it is not necessary to make a hasty decision to remove the device, but it is important to monitor them closely until the presence or absence of infection becomes clear. Also, erosion of the device through the skin represents pocket infection even if the patient appears otherwise healthy.
When removing the device, it is necessary to remove the generator and all leads to treat the infection effectively.
If patients are device-dependent, it is usually safe to place a new device with the new pulse generator pocket in a different location from the infected one a few days after the infected device is removed.
AREAS OF UNCERTAINTY AND CHALLENGE
Although there is no controversy about the need for complete removal of infected devices in order to effect a cure, the appropriate duration of antibiotic therapy after device removal is less clear. Dababneh and Sohail provide a useful algorithm to help with this decision. Patients usually need a new device to replace the infected one and there is a legitimate reason for concern about undertreating, since one would not want the new device to become infected because of inadequate antibiotic therapy. When endovascular infection is suspected or documented, patients are probably best treated as they would be for infective endocarditis.
Difficulties arise when patients with a CIED develop bacteremia with no echocardiographic evidence of device infection. Finding the source of bacteremia is very important because a diagnosis of CIED infection indicates that the device has to be removed. When there is a clear alternative explanation for the bacteremia, the CIED does not have to be removed. The type of bacterium helps clinicians to gauge the likelihood of CIED infection and to decide on the appropriate course of action. These cases should always be managed in conjunction with an infectious disease specialist and a cardiac electrophysiologist.
Another concern is secondary seeding of an uninfected CIED caused by bacteremia from another source. This concern is particularly acute with S aureus bacteremia. When patients with a CIED and S aureus bacteremia have been studied, endovascular CIED infection was documented in about half, although only a few had evidence of pocket inflammation.7,8 This suggests that the devices were seeded via the endovascular route.
Medical procedures such as dialysis and total parenteral nutrition require frequent intravascular access—often facilitated by leaving an indwelling vascular catheter in place. Frequent entry into the intravascular compartment puts patients at substantial risk of bloodstream infection, and in patients with a CIED this can be complicated by device infection. In patients with a CIED and an indwelling vascular catheter who develop bacteremia, determining the source of the bacteremia is particularly challenging, as is the treatment. Thus, preventing endovascular infection in such patients is extremely desirable, but there are no easy solutions.
PLACING CIED INFECTIONS IN PERSPECTIVE
The vast majority of patients with a CIED never develop a device infection. Those unfortunate enough to have a CIED infection have little choice other than to have the device removed, but those diagnosed early and treated appropriately generally do well. The development of CIEDs has been an important advance in the practice of cardiac electrophysiology. An appropriate understanding of CIED infection and its treatment will help optimize the diagnosis and management of this complication when it does occur.
- Zhan C, Baine WB, Sedrakyan A, Steiner C. Cardiac device implantation in the United States from 1997 through 2004: a population-based analysis. J Gen Intern Med 2008; 23(suppl 1):13–19.
- Hayes DL, Furman S. Cardiac pacing: how it started, where we are, where we are going. J Cardiovasc Electrophysiol 2004; 15:619–627.
- Lin G, Meverden RA, Hodge DO, Uslan DZ, Hayes DL, Brady PA. Age and gender trends in implantable cardioverter defibrillator utilization: a population based study. J Interv Card Electrophysiol 2008; 22:65–70.
- Uslan DZ, Tleyjeh IM, Baddour LM, et al. Temporal trends in permanent pacemaker implantation: a population-based study. Am Heart J 2008; 155:896–903.
- Dababneh SR, Sohail MR. Cardiovascular implantable electronic device infection: a stepwise approach to diagnosis and management. Cleve Clin J Med 2011; 78:529–537.
- Victor F, De Place C, Camus C, et al. Pacemaker lead infection: echocardiographic features, management, and outcome. Heart 1999; 81:82–97.
- Chamis AL, Peterson GE, Cabell CH, et al. Staphylococcus aureus bacteremia in patients with permanent pacemakers or implantable cardioverter-defibrillators. Circulation 2001; 104:1029–1033.
- Uslan DZ, Sohail MR, St Sauver JL, et a.l Permanent pacemaker and implantable cardioverter defibrillator infection: a population-based study. Arch Intern Med 2007; 167:669–675.
Cardiovascular implantable electronic device infection: A stepwise approach to diagnosis and management
These days, an increasing number of people are receiving permanent pacemakers, implantable cardioverter-defibrillators, endovascular devices, and cardiac resynchronization therapy devices—collectively called cardiovascular implantable electronic devices (CIEDs). One reason for this upswing is that these devices have been approved for more indications, such as sick sinus syndrome, third-degree heart block, atrial fibrillation, life-threatening ventricular arrhythmias, survival after sudden cardiac death, and advanced congestive heart failure. Another reason is that the population is getting older, and therefore more people need these devices.
Although the use of a CIED is associated with a lower risk of death and a better quality of life, CIED-related infection can eclipse some of these benefits for recipients. Historically reported infection rates range from 0% to 19.9%.1 However, recent data point to a disturbing trend: infection rates are rising faster than implantation rates.2
Besides causing morbidity and even death, infection is also associated with significant financial cost for patients and third-party payers. The estimated average cost of combined medical and surgical treatment of CIED-related infection ranges from $25,000 for permanent pacemakers to $50,000 for implantable cardioverter-defibrillators.3,4
Although cardiologists and cardiac surgeons are the ones who implant these devices, most patients receive their routine outpatient care from a primary care physician, who may be a general internist, a family physician, or another specialist. Moreover, many patients with device infection are admitted to hospital internal medicine services for various diagnoses requiring inpatient care. Therefore, an internist, a family physician, or a hospitalist may be the first physician to respond to a suspected or confirmed device infection. Knowledge of the clinical manifestations and the initial steps in evaluation and management is essential for optimal care.
These complex infections pose challenges, which we will illustrate by presenting a case of CIED-related infection and reviewing key elements of diagnosis and management.
AN ILLUSTRATIVE CASE
A 60-year-old man had a permanent pacemaker implanted 3 months ago because of third-degree heart block; he now presents to his primary care physician with increasing pain, swelling, and erythema at the site of his pacemaker pocket. He has a history of type 2 diabetes mellitus, stage 3 chronic kidney disease, and coronary artery disease.
The symptoms started 2 weeks ago and have slowly progressed, prompting him to seek medical care. He is quite anxious and wants to know if he needs to arrange an emergency consultation with his cardiologist.
IMPORTANT CLINICAL QUESTIONS
This presentation raises several important questions:
- What should be the next step in his evaluation?
- Which laboratory tests should be done?
- Should he be admitted to the hospital, or can he be managed as an outpatient?
- Should he be started empirically on antibiotics? If so, which antibiotics? Or is it better to wait?
- When should an infectious disease specialist be consulted?
- Should the device be removed, and if so, all of it or which components?
- How long should antibiotics be given?
We will provide evidence-based answers to these questions in the discussions below.
PATHOGENESIS AND RISK FACTORS FOR DEVICE INFECTION
The first step in understanding the clinical manifestations of CIED-related infections is to grasp their pathogenesis. Risk factors for device infection have been evaluated in several studies.1
Several factors interact in the inception and evolution of these infections, some related to the care in the perioperative period, some to the device, some to the host, and some to the causative microorganism.5 Although any one of these may play a predominant role in a given patient, most patients have a combination.
Perioperative factors that may contribute to a higher risk of infection include device revision; use of temporary pacing leads before placement of the permanent device; lack of antibiotic prophylaxis before implantation; longer operative time; operative inexperience; development of postoperative pocket hematoma; and factors such as diabetes mellitus and long-term use of corticosteroids and other immunosuppressive drugs that impair wound healing at the generator pocket.6–11
Device factors. Abdominal generator placement, use of epicardial leads, and complexity of the device play a significant role.6,12,13 In general, implantable cardioverter-defibrillators and cardiac resynchronization therapy devices have higher rates of infection than permanent pacemakers.2,14
Host factors. Diseases and conditions that predispose to bloodstream infection may result in hematogenous seeding of the device and its leads and are associated with a higher risk of late-onset infection. These include an implanted central venous catheter (for hemodialysis or other long-term access), a distant focus of primary infection (such as pneumonia and skin and soft-tissue infections), and invasive procedures unrelated to the CIED.10,15
In general, contamination at the time of surgery leads to early-onset infection (ie, within weeks to months of implantation), whereas hematogenous seeding is a predominant factor in most patients with late-onset infection.16
STAPHYLOCOCCI ARE THE MOST COMMON CAUSE
A key to making an accurate diagnosis and determining the appropriate empiric antibiotic therapy is to understand the microbiology of device infections.
Regardless of the clinical presentation, staphylococci are the predominant organisms responsible for both early- and late-onset infections.17,18 These include Staphylococcus aureus and coagulase-negative staphylococci. Depending on where the implanting hospital is located and where the organism was acquired (in the community or in the hospital), up to 50% of these staphylococci may be methicillin-resistant,17,18 a fact that necessitates using vancomycin for empiric coverage until the pathogen is identified and its susceptibility is known.
Gram-negative or polymicrobial CIED infections are infrequent. However, empiric gram-negative coverage should be considered for patients who present with systemic signs of infection, in whom delaying adequate coverage could jeopardize treatment success.
Fungal and mycobacterial infections of cardiac devices are exceedingly uncommon, mainly occurring in immunocompromised patients.
CLINICAL MANIFESTATIONS OF CARDIOVASCULAR DEVICE INFECTION
The clinical presentations of CIED-related infection can be broadly categorized into two groups: generator pocket infection and endovascular infection with an intact pocket.17,18
Generator pocket infection
Most patients with a pocket infection present with inflammatory changes at the device generator site. Usual signs and symptoms include pain, erythema, swelling, and serosanguinous or purulent drainage from the pocket.
Patients with a pocket infection generally present within weeks to months of implantation, as the predominant mechanism of pocket infection is contamination of the generator or leads during implantation. However, occasionally, pocket infection caused by indolent organisms such as Propionibacterium, Corynebacterium, and certain species of coagulase-negative staphylococci can present more than 1 year after implantation. Hematogenous seeding of the device pocket, as a result of bacteremia from a distant primary focus, is infrequent except in cases of S aureus bloodstream infection.19
Endovascular infection with an intact pocket
A subset of patients with CIED-related infections, mostly late-onset infections, present only with systemic signs and symptoms without inflammatory changes at the generator pocket.16–18 Most of these patients have multiple comorbid conditions and likely acquire the infection via hematogenous seeding of transvenous device leads from a distant focus of primary infection, such as a skin or soft-tissue infection, pneumonia, bacteremia arising from an implanted long-term central venous catheter, or bloodstream infection secondary to an invasive procedure unrelated to the CIED.
Most patients with an endovascular device infection have positive blood cultures at presentation. However, occasionally, blood cultures may be negative. The main reason for negative blood cultures in this setting is the use of empiric antibiotic therapy before blood cultures are drawn.
Endovascular device infections are further complicated by the formation of infected vegetations on the leads or cardiac valves in up to one-fourth of cases.16–18,20,21 This complication poses additional challenges in management, such as choosing the appropriate lead extraction technique, the waiting time before implanting a replacement device, and the optimal length of parenteral antimicrobial therapy. Many of these decisions are beyond the realm of internal medicine practice and are best managed by consultation with an infectious disease specialist and a cardiologist.
DIAGNOSIS OF INFECTION AND ASSOCIATED COMPLICATIONS
The clinical diagnosis of pocket infection is usually quite straightforward. However, occasionally, an early postoperative pocket hematoma can mimic pocket infection, and distinguishing these two may be difficult. Close collaboration between an internist, cardiologist, and infectious disease specialist and careful observation of the patient may help to avoid a premature and incorrect diagnosis of pocket infection and unnecessary removal of the device in this scenario.
While diagnosing a pocket infection may be simple, an accurate and timely diagnosis of endovascular infection with an intact pocket can be challenging, especially if echocardiography shows no conclusive evidence of involvement of the device leads. Even when the infection is limited to the generator pocket, attempts to isolate causative pathogens may be hampered if empiric antibiotic therapy is started before culture samples are obtained from the pocket and from the blood.
The initial laboratory evaluation in a patient with suspected device infection should include:
- Complete blood count with differential cell count
- Electrolyte and serum creatinine concentrations
- Inflammatory markers, including erythrocyte sedimentation rate and C-reactive protein concentration
Swabs for bacterial cultures should be sent if there is purulent drainage from the generator pocket. This can be done in the office before referral to the emergency department or a tertiary care center for inpatient admission. If the pocket appears swollen or fluctuant, needle aspiration should be avoided, as it can introduce organisms and cause contamination.5
Two sets of peripheral blood cultures should be obtained. If the patient has an implanted central venous catheter, blood cultures via each catheter port should also be obtained, as they may help to pinpoint the source of bloodstream infection in cases in which blood culture results are positive.
Transesophageal echocardiography (TEE) should also be performed in patients with systemic signs and symptoms (such as fever, chills, malaise, dyspnea, hypotension, or peripheral stigmata of endocarditis) or abnormal test results (leukocytosis, elevated inflammatory markers, or evidence of pulmonary emboli on imaging), even if blood cultures are negative. TEE should likewise be considered in patients in whom blood cultures may be negative as a result of previous antimicrobial therapy.
If a decision is made to remove the device (see below), intraoperative pocket tissue and lead-tip cultures should be sent for Gram staining and bacterial culture. Fungal and mycobacterial cultures may be necessary in immunocompromised hosts, or if Gram staining and bacterial cultures from pocket tissue samples are negative. Caution must be exercised when interpreting the results of lead-tip cultures, as lead tips may become contaminated while being pulled through an infected pocket during removal.20,22
This approach should lead to an accurate diagnosis of CIED-related infection and associated complications in most patients. However, the diagnosis may remain elusive if results of blood cultures are positive but the pocket is intact and there is no echocardiographic evidence of lead or valve involvement. This is especially true in cases of S aureus bacteremia, in which positive blood cultures may be the sole manifestation of underlying device infection.19,23 Factors associated with higher odds of underlying device infection in this scenario include bacteremia lasting more than 24 hours, prosthetic valves, bacteremia within 3 months of device implantation, and no alternative focus of bacteremia.12
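Because these criteria lend themselves to a simple checklist, a minimal decision-aid sketch follows. It is purely illustrative: the function name, parameter names, and thresholds are our own encoding of the four factors just listed, not a validated scoring system, and it does not replace specialist consultation.

```python
def occult_cied_infection_suspected(
    bacteremia_duration_hours: float,
    has_prosthetic_valve: bool,
    months_since_implantation: float,
    alternative_focus_identified: bool,
) -> bool:
    """Illustrative checklist (not a validated score) for S aureus bacteremia
    with an intact pocket and no echocardiographic lead or valve involvement.
    Returns True if any factor associated with higher odds of underlying
    device infection is present."""
    risk_factors = [
        bacteremia_duration_hours > 24,     # bacteremia lasting more than 24 hours
        has_prosthetic_valve,               # prosthetic heart valve present
        months_since_implantation <= 3,     # bacteremia within 3 months of implantation
        not alternative_focus_identified,   # no alternative focus of bacteremia found
    ]
    return any(risk_factors)


# Example: 36 hours of bacteremia, no prosthetic valve, device placed 8 months ago,
# alternative focus identified -> still flagged because of the prolonged bacteremia.
print(occult_cied_infection_suspected(36, False, 8, True))  # True
```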
Evidence is emerging that underlying device infection should also be considered in patients with bloodstream infection with coagulase-negative staphylococci in the setting of an implanted device.24 On the other hand, seeding of device leads with gram-negative organisms is infrequent, and routine imaging of intracardiac leads is not necessary in cases of gram-negative bacteremia.25
In our opinion, cases of bacteremia in which underlying occult device infection is a concern are best managed by consultation with an infectious disease specialist.
A STEPWISE APPROACH TO MANAGING DEVICE INFECTION
Should antibiotics be started empirically?
The first step in managing CIED-related infection is to decide whether empiric antibiotic therapy should be started immediately once infection is suspected or if it is prudent to wait until the culture results are available.
In our opinion, if the infection is limited to the generator pocket, it is reasonable to wait until immediately before surgery to maximize the culture yield from pocket tissue samples. An exception to this rule is when systemic signs or symptoms are present, in which case delaying antibiotic therapy could jeopardize the outcome (Figure 2). In such cases, empiric antibiotic therapy can be started once two sets of peripheral blood samples for cultures have been obtained.
Which antibiotics should be given empirically?
Because gram-positive organisms, namely coagulase-negative staphylococci and S aureus, are the causative pathogens in most cases of CIED-related infection, empiric antibiotic therapy should provide adequate coverage for these organisms. Because methicillin resistance is quite prevalent in staphylococci, we routinely use vancomycin (Vancocin) for empiric coverage. In patients who are allergic to vancomycin or cannot tolerate it, daptomycin (Cubicin) is an alternative.
Empiric gram-negative coverage is generally reserved for patients who present with systemic signs and symptoms, in whom delaying adequate coverage could have untoward consequences. We routinely use cefepime (Maxipime) for empiric gram-negative coverage in our institution. Other beta-lactam agents that provide coverage for gram-negative bacilli, especially Pseudomonas, are also appropriate in this setting.
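As a compact summary of the empiric choices described above, the sketch below encodes the regimen-selection logic in Python; the function and its inputs are illustrative assumptions only, and actual selection and dosing must account for allergies, renal function, local resistance patterns, and subsequent culture results.

```python
def empiric_regimen(systemic_signs: bool, vancomycin_intolerant: bool) -> list[str]:
    """Illustrative sketch of empiric antibiotic selection for suspected CIED infection.

    Gram-positive coverage is always included (vancomycin, or daptomycin when
    vancomycin cannot be used); gram-negative coverage (eg, cefepime or another
    antipseudomonal beta-lactam) is added only when systemic signs are present.
    """
    regimen = ["daptomycin" if vancomycin_intolerant else "vancomycin"]
    if systemic_signs:
        regimen.append("cefepime")  # empiric gram-negative (including Pseudomonas) coverage
    return regimen


print(empiric_regimen(systemic_signs=True, vancomycin_intolerant=False))
# ['vancomycin', 'cefepime']
```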
Should the device be removed?
Superficial infection of the wound or incision site (eg, stitch abscess) early after implantation can be managed by conservative antibiotic therapy without removing the device. However, complete removal of the device system, including intracardiac leads, is necessary in all other presentations of device infection, even if the infection appears limited to the generator pocket.5,12 Leaving the device in place or removing parts of the device is associated with persistent or relapsed infection and is not advisable.17,26
Leaving the device in place may be necessary in extenuating circumstances, eg, if surgery would be too risky for the patient or if the patient refuses device removal or has a short life expectancy. In these cases, lifelong suppressive antibiotic therapy should be prescribed after an initial course of parenteral antibiotics.27 Antibiotic choices for long-term suppressive therapy should be guided by antimicrobial susceptibility testing and consultation with an infectious disease specialist.
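The device-retention decision described in the two preceding paragraphs reduces to a short branching rule, sketched below. The labels and function are our own illustrative shorthand, not a published algorithm; the actual decision is made with the treating cardiologist and, where relevant, an infectious disease specialist.

```python
def removal_plan(presentation: str, extraction_contraindicated: bool = False) -> str:
    """Illustrative summary of the removal decision described in the text.

    presentation: one of "superficial_incision_site", "pocket_infection",
    or "endovascular_infection" (labels are illustrative shorthand).
    """
    if presentation == "superficial_incision_site":
        # Early superficial wound or stitch-abscess infection only.
        return "conservative antibiotic therapy; device left in place"
    if extraction_contraindicated:
        # Extenuating circumstances (prohibitive surgical risk, patient refusal,
        # short life expectancy): initial parenteral course, then lifelong
        # suppressive antibiotics guided by susceptibility testing.
        return "device retained; long-term suppressive antibiotic therapy"
    # All other presentations: complete system removal, including intracardiac leads.
    return "complete removal of the generator and all leads"
```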
How should the leads be removed?
Leads are extracted percutaneously in most cases. Percutaneous extraction is generally considered safe even when infection is complicated by lead vegetations, despite the concern that detached vegetation fragments could embolize to the lungs during extraction.5,20
Thoracotomy is generally reserved for patients who have cardiac complications (such as a cardiac abscess or the need to replace cardiac valves) or in whom attempts to extract the leads percutaneously are unsuccessful.
Details of the removal procedure and choice of extraction technique are beyond the scope of this paper and are best left to the discretion of the treating cardiologist or cardiac surgeon. Because of the potential for complications during percutaneous device removal, such as laceration of the superior vena cava or cardiac tamponade, the patient should be referred to a high-volume center where cardiothoracic intervention can be provided on an emergency basis if needed.
How long should antibiotic therapy continue?
An algorithm for deciding the duration of antibiotic therapy is shown in Figure 3. These guidelines, first published in 2007,17 were adopted by the American Heart Association in its updated statement on the management of CIED-related infections.5 However, it should be noted that these guidelines are not based on randomized clinical trials; rather, they represent expert opinion based on published series of patients with CIED-related infections.
In general, cases of device erosion or pocket infection can be treated with 1 to 2 weeks of appropriate antibiotic therapy based on antimicrobial susceptibility testing. However, cases of bloodstream infection require 2 to 4 weeks of antibiotic therapy—or sometimes even longer if associated complications are present, such as septic thrombosis, endocarditis, or osteomyelitis.
We favor parenteral antibiotics for the entire course of treatment. However, patients can be discharged from the hospital once the bloodstream infection has cleared, and the antibiotic course can be completed on an outpatient basis.
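The duration guidance summarized above (and laid out in Figure 3) can be expressed as a small lookup, sketched below. The labels and return strings are our own illustrative encoding of the published expert-opinion ranges; they are not a substitute for the cited guidelines or for individualized, specialist-guided decisions.

```python
def antibiotic_duration(presentation: str, complicated: bool = False) -> str:
    """Illustrative encoding of the duration ranges summarized in the text.

    presentation: "device_erosion", "pocket_infection", or "bloodstream_infection"
    (labels are illustrative shorthand). 'complicated' refers to septic thrombosis,
    endocarditis, or osteomyelitis accompanying a bloodstream infection.
    """
    if presentation in ("device_erosion", "pocket_infection"):
        return "1 to 2 weeks of pathogen-directed therapy"
    if presentation == "bloodstream_infection":
        if complicated:
            return "longer than 2 to 4 weeks, individualized with specialist input"
        return "2 to 4 weeks of pathogen-directed therapy"
    raise ValueError("unrecognized presentation label (illustrative sketch only)")


print(antibiotic_duration("pocket_infection"))  # '1 to 2 weeks of pathogen-directed therapy'
```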
Outpatient antimicrobial monitoring
We recommend adherence to the Infectious Diseases Society of America’s guidelines for monitoring outpatient parenteral antimicrobial therapy.28
At discharge from the hospital, patients should be instructed to promptly call their primary care physician if they have a fever or notice inflammatory changes at the pocket site. If the patient reports such symptoms, repeat blood cultures should be ordered, and the patient should be monitored closely for signs of a relapse of infection.
A routine follow-up visit should be arranged at 2 weeks and at the end of parenteral antibiotic therapy (for patients receiving therapy for 4 weeks or longer) to make sure the infection has resolved.
When should a new device be implanted?
Before deciding when a new device should be implanted, one should carefully assess whether the patient still needs one. Studies indicate that up to 30% of patients may no longer require a cardiac device.17,18
If a device is still needed, we believe that removal of drains and closure of the old pocket need not be completed before a new device is implanted in a different location (usually the contralateral pectoral area). An exception to this general principle is valvular endocarditis, in which a minimum of 2 weeks is recommended between removal of the infected device (plus clearance of the bloodstream infection) and implantation of a new device.
OUTCOMES OF INFECTION
Despite improvements in our understanding of how to manage CIED-related infection, the rates of morbidity and death remain significant.
The outcome, in part, depends on the clinical presentation and the patient’s comorbid conditions. In general, the death rate in patients with a pocket infection is less than 5%. However, in patients with endovascular infection, it may be as high as 20%.16–18 Other factors that affect the outcome include complications such as septic thrombosis, valvular endocarditis, or osteomyelitis; complications during device extraction; the need for open heart surgery; and the overall health of the patient.
Complete removal of the device system is a requisite for successful outcome, and the risk of death tends to be higher if only part of the infected CIED system is extracted.26
STRATEGIES TO PREVENT DEVICE INFECTION
Preventive efforts should focus on strategies to minimize the chances of contamination of the generator, leads, and pocket during implantation.29 Patients who are known to be colonized with methicillin-resistant S aureus may benefit from decolonization programs, which should include nasal application of mupirocin (Bactroban) ointment preoperatively.30 In addition, use of chlorhexidine for surgical-site antisepsis has been shown to reduce the risk of surgical site infection.31
Moreover, all patients should receive antibiotic prophylaxis before implantation of a CIED.32,33 Most institutions use a first-generation cephalosporin, such as cefazolin (Ancef), for this purpose.34 However, the increasing rate of methicillin resistance in staphylococci has led to the routine use of vancomycin for preoperative prophylaxis at some centers.18
Regardless of the antibiotic chosen for prophylaxis, protocols that ensure that all patients receive an appropriate antibiotic at the appropriate time are a key determinant in the success of these infection-control programs.
- Sohail MR, Wilson WR, Baddour LM. Infections of nonvalvular cardiovascular devices. In: Mandell GL, Bennett JE, Dolin R, editors. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. Philadelphia: Churchill Livingstone/Elsevier; 2010:1127–1142.
- Voigt A, Shalaby A, Saba S. Rising rates of cardiac rhythm management device infections in the United States: 1996 through 2003. J Am Coll Cardiol 2006; 48:590–591.
- Darouiche RO. Treatment of infections associated with surgical implants. N Engl J Med 2004; 350:1422–1429.
- Ferguson TB, Ferguson CL, Crites K, Crimmins-Reda P. The additional hospital costs generated in the management of complications of pacemaker and defibrillator implantations. J Thorac Cardiovasc Surg 1996; 111:742–751.
- Baddour LM, Epstein AE, Erickson CC, et al. Update on cardiovascular implantable electronic device infections and their management: a scientific statement from the American Heart Association. Circulation 2010; 121:458–477.
- Klug D, Balde M, Pavin D, et al; PEOPLE Study Group. Risk factors related to infections of implanted pacemakers and cardioverter-defibrillators: results of a large prospective study. Circulation 2007; 116:1349–1355.
- Sohail MR, Hussain S, Dib C, et al. Risk factor analysis of implantable cardioverter-defibrillator infections. Interscience Conference on Antimicrobial Agents and Chemotherapy (ICAAC). Boston, MA, Sept. 12–15, 2010.
- Lai KK, Fontecchio SA. Infections associated with implantable cardioverter defibrillators placed transvenously and via thoracotomies: epidemiology, infection control, and management. Clin Infect Dis 1998; 27:265–269.
- Mela T, McGovern BA, Garan H, et al. Long-term infection rates associated with the pectoral versus abdominal approach to cardioverter-defibrillator implants. Am J Cardiol 2001; 88:750–753.
- Al-Khatib SM, Lucas FL, Jollis JG, Malenka DJ, Wennberg DE. The relation between patients’ outcomes and the volume of cardioverter-defibrillator implantation procedures performed by physicians treating Medicare beneficiaries. J Am Coll Cardiol 2005; 46:1536–1540.
- Lekkerkerker JC, van Nieuwkoop C, Trines SA, et al. Risk factors and time delay associated with cardiac device infections: Leiden device registry. Heart 2009; 95:715–720.
- Sohail MR, Sultan OW, Raza SS. Contemporary management of cardiovascular implantable electronic device infections. Expert Rev Anti Infect Ther 2010; 8:831–839.
- Sohail MR, Uslan DZ, Khan AH, et al. Risk factor analysis of permanent pacemaker infection. Clin Infect Dis 2007; 45:166–173.
- Uslan DZ, Sohail MR, St Sauver JL, et al. Permanent pacemaker and implantable cardioverter defibrillator infection: a population-based study. Arch Intern Med 2007; 167:669–675.
- Bloom H, Heeke B, Leon A, et al. Renal insufficiency and the risk of infection from pacemaker or defibrillator surgery. Pacing Clin Electrophysiol 2006; 29:142–145.
- Le KY, Sohail MR, Friedman PA, et al; Mayo Cardiovascular Infections Study Group. Clinical predictors of cardiovascular implantable electronic device-related infective endocarditis. Pacing Clin Electrophysiol 2011; 34:450–459.
- Sohail MR, Uslan DZ, Khan AH, et al. Management and outcome of permanent pacemaker and implantable cardioverter-defibrillator infections. J Am Coll Cardiol 2007; 49:1851–1859.
- Tarakji KG, Chan EJ, Cantillon DJ, et al. Cardiac implantable electronic device infections: presentation, management, and patient outcomes. Heart Rhythm 2010; 7:1043–1047.
- Chamis AL, Peterson GE, Cabell CH, et al. Staphylococcus aureus bacteremia in patients with permanent pacemakers or implantable cardioverter-defibrillators. Circulation 2001; 104:1029–1033.
- Sohail MR, Uslan DZ, Khan AH, et al. Infective endocarditis complicating permanent pacemaker and implantable cardioverter-defibrillator infection. Mayo Clin Proc 2008; 83:46–53.
- Arber N, Pras E, Copperman Y, et al. Pacemaker endocarditis. Report of 44 cases and review of the literature. Medicine (Baltimore) 1994; 73:299–305.
- Sohail MR. Concerning diagnosis and management of pacemaker endocarditis [letter]. Pacing Clin Electrophysiol 2007; 30:829.
- Uslan DZ, Dowsley TF, Sohail MR, et al. Cardiovascular implantable electronic device infection in patients with Staphylococcus aureus bacteremia. Pacing Clin Electrophysiol 2009; 33:407–413.
- Madhavan M, Sohail MR, Friedman PA, et al. Outcomes in patients with cardiovascular implantable electronic devices and bacteremia due to Gram-positive cocci other than Staphylococcus aureus. Circ Arrhythm Electrophysiol 2010; 3:639–645.
- Uslan DZ, Sohail MR, Friedman PA, et al. Frequency of permanent pacemaker or implantable cardioverter-defibrillator infection in patients with gram-negative bacteremia. Clin Infect Dis 2006; 43:731–736.
- Margey R, McCann H, Blake G, et al. Contemporary management of and outcomes from cardiac device related infections. Europace 2010; 12:64–70.
- Baddour LM. Long-term suppressive antimicrobial therapy for intravascular device-related infections. Am J Med Sci 2001; 322:209–212.
- Tice AD, Rehm SJ, Dalovisio JR, et al. Practice guidelines for outpatient parenteral antimicrobial therapy. IDSA guidelines. Clin Infect Dis 2004; 38:1651–1672.
- Wenzel RP. Minimizing surgical-site infections. N Engl J Med 2010; 362:75–77.
- Bode LGM, Kluytmans JAJW, Wertheim HFL, et al. Preventing surgical-site infections in nasal carriers of Staphylococcus aureus. N Engl J Med 2010; 362:9–17.
- Darouiche RO, Wall MJ, Itani KMF, et al. Chlorhexidine-alcohol versus povidone-iodine for surgical-site antisepsis. N Engl J Med 2010; 362:18–26.
- Da Costa A, Kirkorian G, Cucherat M, et al. Antibiotic prophylaxis for permanent pacemaker implantation: a meta-analysis. Circulation 1998; 97:1796–1801.
- de Oliveira JC, Martinelli M, Nishioka SA, et al. Efficacy of antibiotic prophylaxis before the implantation of pacemakers and cardioverter-defibrillators: results of a large, prospective, randomized, double-blinded, placebo-controlled trial. Circ Arrhythm Electrophysiol 2009; 2:29–34.
- Bertaglia E, Zerbo F, Zardo S, Barzan D, Zoppo F, Pascotto P. Antibiotic prophylaxis with a single dose of cefazolin during pacemaker implantation: incidence of long-term infective complications. Pacing Clin Electrophysiol 2006; 29:29–33.
These days, an increasing number of people are receiving permanent pacemakers, implantable cardioverter-defibrillators, endovascular devices, and cardiac resynchronization therapy devices—collectively called cardiovascular implantable electronic devices (CIEDs). One reason for this upswing is that these devices have been approved for more indications, such as sick sinus syndrome, third-degree heart block, atrial fibrillation, life-threatening ventricular arrhythmias, survival of sudden cardiac death, and advanced congestive heart failure. Another reason is that the population is getting older, and therefore more people need these devices.
Although the use of a CIED is associated with a lower risk of death and a better quality of life, CIED-related infection can eclipse some of these benefits for their recipients. Historically reported rates of infections range from 0% to 19.9%.1 However, recent data point to a disturbing trend: infection rates are rising faster than implantation rates.2
Besides causing morbidity and even death, infection is also associated with significant financial cost for patients and third-party payers. The estimated average cost of combined medical and surgical treatment of CIED-related infection ranges from $25,000 for permanent pacemakers to $50,000 for implantable cardioverter-defibrillators.3,4
Although cardiologists and cardiac surgeons are the ones who implant these devices, most patients receive their routine outpatient care from a primary care physician, who can be a general internist, a family physician, or other specialist. Moreover, many patients with device infection are admitted to hospital internal medicine services for various diagnoses requiring inpatient care. Therefore, an internist, a family physician, or a hospitalist may be the first physician to respond to a suspected or confirmed device infection. Knowledge of the clinical manifestations and the initial steps in evaluation and management is essential for optimal care.
These complex infections pose challenges, which we will illustrate by presenting a case of CIED-related infection and reviewing key elements of diagnosis and management.
AN ILLUSTRATIVE CASE
A 60-year-old man had a permanent pacemaker implanted 3 months ago because of third-degree heart block; he now presents to his primary care physician with increasing pain, swelling, and erythema at the site of his pacemaker pocket. He has a history of type 2 diabetes mellitus, stage 3 chronic kidney disease, and coronary artery disease.
The symptoms started 2 weeks ago and have slowly progressed, prompting him to seek medical care. He is quite anxious and wants to know if he needs to arrange an emergency consultation with his cardiologist.
IMPORTANT CLINICAL QUESTIONS
This presentation raises several important questions:
- What should be the next step in his evaluation?
- Which laboratory tests should be done?
- Should he be admitted to the hospital, or can he be managed as an outpatient?
- Should he be started empirically on antibiotics? If so, which antibiotics? Or is it better to wait?
- When should an infectious disease specialist be consulted?
- Should the device be removed, and if so, all of it or which components?
- How long should antibiotics be given?
We will provide evidence-based answers to these questions in the discussions below.
PATHOGENESIS AND RISK FACTORS FOR DEVICE INFECTION
The first step in understanding the clinical manifestations of CIED-related infections is to grasp their pathogenesis. Risk factors for device infection have been evaluated in several studies.1
Several factors interact in the inception and evolution of these infections, some related to the care in the perioperative period, some to the device, some to the host, and some to the causative microorganism.5 Although any one of these may play a predominant role in a given patient, most patients have a combination.
Perioperative factors that may contribute to a higher risk of infection include device revision; use of temporary pacing leads before placement of the permanent device; lack of antibiotic prophylaxis before implantation; longer operative time; operative inexperience; development of postoperative pocket hematoma; and factors such as diabetes mellitus and long-term use of corticosteroids and other immunosuppressive drugs that impair wound healing at the generator pocket.6–11
Device factors. Abdominal generator placement, use of epicardial leads, and complexity of the device play a significant role.6,12,13 In general, implantable cardioverter-defibrillators and cardiac resynchronization therapy devices have higher rates of infection than permanent pacemakers.2,14
Host factors. Diseases and conditions that predispose to bloodstream infection may result in hematogenous seeding of the device and its leads and are associated with a higher risk of late-onset infection. These include an implanted central venous catheter (for hemodialysis or other long-term access), a distant focus of primary infection (such as pneumonia and skin and soft-tissue infections), and invasive procedures unrelated to the CIED.10,15
In general, contamination at the time of surgery leads to early-onset infection (ie, within weeks to months of implantation), whereas hematogenous seeding is a predominant factor in most patients with late-onset infection.16
STAPHYLOCOCCI ARE THE MOST COMMON CAUSE
A key to making an accurate diagnosis and determining the appropriate empiric antibiotic therapy is to understand the microbiology of device infections.
Regardless of the clinical presentation, staphylococci are the predominant organisms responsible for both early- and late-onset infections.17,18 These include Staphylococcus aureus and coagulase-negative staphylococci. Depending on where the implanting hospital is located and where the organism was acquired (in the community or in the hospital), up to 50% of these staphylococci may be methicillin-resistant,17,18 a fact that necessitates using vancomycin for empiric coverage until the pathogen is identified and its susceptibility is known.
Gram-negative or polymicrobial CIED infections are infrequent. However, empiric gram-negative coverage should be considered for patients who present with systemic signs of infection, in whom delaying adequate coverage could jeopardize the successful outcome of infection treatment.
Fungal and mycobacterial infections of cardiac devices are exceedingly uncommon, mainly occurring in immunocompromised patients.
CLINICAL MANIFESTATIONS OF CARDIOVASCULAR DEVICE INFECTION
The clinical presentations of CIED-related infection can be broadly categorized into two groups: generator pocket infection and endovascular infection with an intact pocket.17,18
Generator pocket infection
Most patients with a pocket infection present with inflammatory changes at the device generator site. Usual signs and symptoms include pain, erythema, swelling, and serosanguinous or purulent drainage from the pocket.
Patients with a pocket infection generally present within weeks to months of implantation, as the predominant mechanism of pocket infection is contamination of the generator or leads during implantation. However, occasionally, pocket infection caused by indolent organisms such as Propionibacterium, Corynebacterium, and certain species of coagulase-negative staphylococci can present more than 1 year after implantation. Hematogenous seeding of the device pocket, as a result of bacteremia from a distant primary focus, is infrequent except in cases of S aureus bloodstream infection.19
Endovascular infection with an intact pocket
A subset of patients with CIED-related infections, mostly late-onset infections, present only with systemic signs and symptoms without inflammatory changes at the generator pocket.16–18 Most of these patients have multiple comorbid conditions and likely acquire the infection via hematogenous seeding of transvenous device leads from a distant focus of primary infection, such as a skin or soft-tissue infection, pneumonia, bacteremia arising from an implanted long-term central venous catheter, or bloodstream infection secondary to an invasive procedure unrelated to the CIED.
Most patients with an endovascular device infection have positive blood cultures at presentation. However, occasionally, blood cultures may be negative. The main reason for negative blood cultures in this setting is the use of empiric antibiotic therapy before blood cultures are drawn.
Endovascular device infections are further complicated by the formation of infected vegetations on the leads or cardiac valves in up to one-fourth of cases.16–18,20,21 This complication poses additional challenges in management, such as choosing the appropriate lead extraction technique, the waiting time before implanting a replacement device, and the optimal length of parenteral antimicrobial therapy. Many of these decisions are beyond the realm of internal medicine practice and are best managed by consultation with an infectious disease specialist and a cardiologist.
DIAGNOSIS OF INFECTION AND ASSOCIATED COMPLICATIONS
The clinical diagnosis of pocket infection is usually quite straightforward. However, occasionally, an early postoperative pocket hematoma can mimic pocket infection, and distinguishing these two may be difficult. Close collaboration between an internist, cardiologist, and infectious-disease specialist and careful observation of the patient may help to avoid a premature and incorrect diagnosis of pocket infection and unnecessary removal of the device in this scenario.
While diagnosing a pocket infection may be simple, an accurate and timely diagnosis of endovascular infection with an intact pocket can be challenging, especially if echocardiography shows no conclusive evidence of involvement of the device leads. Even when the infection is limited to the generator pocket, attempts to isolate causative pathogens may be hampered if empiric antibiotic therapy is started before culture samples are obtained from the pocket and from the blood.
Complete blood count with differential cell count.
Electrolyte and serum creatinine concentrations.
Inflammatory markers, including erythrocyte sedimentation rate and C-reactive protein concentration.
Swabs for bacterial cultures should be sent if there is purulent drainage from the generator pocket. This can be done in the office before referral to the emergency department or a tertiary care center for inpatient admission. If the pocket appears swollen or fluctuant, needle aspiration should be avoided, as it can introduce organisms and cause contamination.5
Two sets of peripheral blood cultures should be obtained. If the patient has an implanted central venous catheter, blood cultures via each catheter port should also be obtained, as they may help to pinpoint the source of bloodstream infection in cases in which blood culture results are positive.
TEE should also be performed in patients with systemic signs and symptoms (such as fever, chills, malaise, dyspnea, hypotension, or peripheral stigmata of endocarditis) or abnormal test results (leukocytosis, elevated inflammatory markers, or evidence of pulmonary emboli on imaging), even if blood cultures are negative. Similarly, TEE should also be considered in patients in whom blood cultures may be negative as a result of previous antimicrobial therapy.
If a decision is made to remove the device (see below), intraoperative pocket tissue and lead-tip cultures should be sent for Gram staining and bacterial culture. Fungal and mycobacterial cultures may be necessary in immunocompromised hosts, or if Gram staining and bacterial cultures from pocket tissue samples are negative. Caution must be exercised when interpreting the results of lead-tip cultures, as lead tips may become contaminated while being pulled through an infected pocket during removal.20,22
This approach should lead to an accurate diagnosis of CIED-related infection and associated complications in most patients. However, the diagnosis may remain elusive if results of blood cultures are positive but the pocket is intact and there is no echocardiographic evidence of lead or valve involvement. This is especially true in cases of S aureus bacteremia, in which positive blood cultures may be the sole manifestation of underlying device infection.19,23 Factors associated with higher odds of underlying device infection in this scenario include bacteremia lasting more than 24 hours, prosthetic valves, bacteremia within 3 months of device implantation, and no alternative focus of bacteremia.12
Evidence is emerging that underlying device infection should also be considered in patients with bloodstream infection with coagulase-negative staphylococci in the setting of an implanted device.24 On the other hand, seeding of device leads with gram-negative organisms is infrequent, and routine imaging of intracardiac leads is not necessary in cases of gram-negative bacteremia.25
In our opinion, cases of bacteremia in which underlying occult device infection is a concern are best managed by consultation with an infectious disease specialist.
A STEPWISE APPROACH TO MANAGING DEVICE INFECTION
Should antibiotics be started empirically?
The first step in managing CIED-related infection is to decide whether empiric antibiotic therapy should be started immediately once infection is suspected or if it is prudent to wait until the culture results are available.
In our opinion, if the infection is limited to the generator pocket, it is reasonable to wait until immediately before surgery to maximize the culture yield from pocket tissue samples. An exception to this rule is when systemic signs or symptoms are present, in which case delaying antibiotic therapy could jeopardize the outcome (FIGURE 2). In such cases, empiric antibiotic therapy can be started once two sets of peripheral blood samples for cultures have been obtained.
Which antibiotics should be given empirically?
Because gram-positive organisms, namely coagulase-negative staphylococci and S aureus, are the causative pathogens in most cases of CIED-related infection, empiric antibiotic therapy should provide adequate coverage for these organisms. Because methicillin resistance is quite prevalent in staphylococci, we routinely use vancomycin (Vancocin) for empiric coverage. In patients who are allergic to vancomycin or cannot tolerate it, daptomycin (Cubicin) is an alternative.
Empiric gram-negative coverage is generally reserved for patients who present with systemic signs and symptoms, in whom delaying adequate coverage could have untoward consequences. We routinely use cefepime (Maxipime) for empiric gram-negative coverage in our institution. Other beta-lactam agents that provide coverage for gram-negative bacilli, especially Pseudomonas, are also appropriate in this setting.
Should the device be removed?
Superficial infection of the wound or incision site (eg, stitch abscess) early after implantation can be managed by conservative antibiotic therapy without removing the device. However, complete removal of the device system, including intracardiac leads, is necessary in all other presentations of device infection, even if the infection appears limited to the generator pocket.5,12 Leaving the device in place or removing parts of the device is associated with persistent or relapsed infection and is not advisable.17,26
Leaving the device in place may be necessary in extenuating circumstances, eg, if surgery would be too risky for the patient or if the patient refuses device removal or has a short life expectancy. In these cases, lifelong suppressive antibiotic therapy should be prescribed after an initial course of parenteral antibiotics.27 Antibiotic choices for long-term suppressive therapy should be guided by antimicrobial susceptibility testing and consultation with an infectious disease specialist.
How should the leads be removed?
Leads are extracted percutaneously in most cases. Percutaneous extraction is generally considered safe even in cases in which infection is complicated by lead vegetations, which raises concern about pulmonary embolization of detached vegetation fragments during extraction.5,20
Thoracotomy is generally reserved for patients who have cardiac complications (such as a cardiac abscess or the need to replace cardiac valves) or in whom attempts to extract the leads percutaneously are unsuccessful.
Details of the removal procedure and choice of extraction technique are beyond the scope of this paper and are best left to the discretion of the treating cardiologist or cardiac surgeon. Because of the potential for complications during percutaneous device removal, such as laceration of the superior vena cava or cardiac tamponade, the patient should be referred to a high-volume center where cardiothoracic intervention can be provided on an emergency basis if needed.
How long should antibiotic therapy go on?
An algorithm for deciding the duration of antibiotic therapy is shown in Figure 3. These guidelines, first published in 2007,17 were adopted by the American Heart Association in its updated statement on the management of CIED-related infections.5 However, it should be noted that these guidelines are not based on randomized clinical trials; rather, they represent expert opinion based on published series of patients with CIED-related infections.
In general, cases of device erosion or pocket infection can be treated with 1 to 2 weeks of appropriate antibiotic therapy based on antimicrobial susceptibility testing. However, cases of bloodstream infection require 2 to 4 weeks of antibiotic therapy—or sometimes even longer if associated complications are present, such as septic thrombosis, endocarditis, or osteomyelitis.
We favor parenteral antibiotics for the entire course of treatment. However, patients can be discharged from the hospital once the bloodstream infection has cleared, and the antibiotic course can be completed on an outpatient basis.
Outpatient antimicrobial monitoring
We recommend adherence to the Infectious Diseases Society of America’s guidelines for monitoring outpatient parenteral antimicrobial therapy.28
At discharge from the hospital, patients should be instructed to promptly call their primary care physician if they have a fever or notice inflammatory changes at the pocket site. If the patient reports such symptoms, repeat blood cultures should be ordered, and the patient should be monitored closely for signs of a relapse of infection.
A routine follow-up visit should be arranged at 2 weeks and at the end of parenteral antibiotic therapy (for patients receiving therapy for 4 weeks or longer) to make sure the infection has resolved.
When should a new device be implanted?
Before deciding when a new device should be implanted, one should carefully assess whether the patient still needs one. Studies indicate that up to 30% of patients may no longer require a cardiac device.17,18
However, we believe that removal of drains and closure of the old pocket are not necessary before implanting a new device in a different location (usually the contralateral pectoral area). Exceptions to this general principle are cases of valvular endocarditis, in which a minimum of 2 weeks is recommended between removal of an infected device (plus clearance of bloodstream infection) and implantation of a new device.
OUTCOMES OF INFECTION
Despite improvements in our understanding of how to manage CIED-related infection, the rates of morbidity and death remain significant.
The outcome, in part, depends on the clinical presentation and the patient’s comorbid conditions. In general, the death rate in patients with a pocket infection is less than 5%. However, in patients with endovascular infection, it may be as high as 20%.16–18 Other factors that affect the outcome include complications such as septic thrombosis, valvular endocarditis, or osteomyelitis; complications during device extraction; the need for open heart surgery; and the overall health of the patient.
Complete removal of the device system is a requisite for successful outcome, and the risk of death tends to be higher if only part of the infected CIED system is extracted.26
STRATEGIES TO PREVENT DEVICE INFECTION
Preventive efforts should focus on strategies to minimize the chances of contamination of the generator, leads, and pocket during implantation.29 Patients who are known to be colonized with methicillin-resistant S aureus may benefit from decolonization programs, which should include nasal application of mupirocin (Bactroban) ointment preoperatively.30 In addition, use of chlorhexidine for surgical-site antisepsis has been shown to reduce the risk of surgical site infection.31
Moreover, all patients should receive antibiotic prophylaxis before implantation of a CIED.32,33 Most institutions use a first-generation cephalosporin, such as cefazolin (Ancef), for this purpose.34 However, the increasing rate of methicillin resistance in staphylococci has led to the routine use of vancomycin for preoperative prophylaxis at some centers.18
Regardless of the antibiotic chosen for prophylaxis, protocols that ensure that all patients receive an appropriate antibiotic at the appropriate time are a key determinant in the success of these infection-control programs.
These days, an increasing number of people are receiving permanent pacemakers, implantable cardioverter-defibrillators, endovascular devices, and cardiac resynchronization therapy devices—collectively called cardiovascular implantable electronic devices (CIEDs). One reason for this upswing is that these devices have been approved for more indications, such as sick sinus syndrome, third-degree heart block, atrial fibrillation, life-threatening ventricular arrhythmias, survival of sudden cardiac death, and advanced congestive heart failure. Another reason is that the population is getting older, and therefore more people need these devices.
Although the use of a CIED is associated with a lower risk of death and a better quality of life, CIED-related infection can eclipse some of these benefits for their recipients. Historically reported rates of infections range from 0% to 19.9%.1 However, recent data point to a disturbing trend: infection rates are rising faster than implantation rates.2
Besides causing morbidity and even death, infection is also associated with significant financial cost for patients and third-party payers. The estimated average cost of combined medical and surgical treatment of CIED-related infection ranges from $25,000 for permanent pacemakers to $50,000 for implantable cardioverter-defibrillators.3,4
Although cardiologists and cardiac surgeons are the ones who implant these devices, most patients receive their routine outpatient care from a primary care physician, who can be a general internist, a family physician, or other specialist. Moreover, many patients with device infection are admitted to hospital internal medicine services for various diagnoses requiring inpatient care. Therefore, an internist, a family physician, or a hospitalist may be the first physician to respond to a suspected or confirmed device infection. Knowledge of the clinical manifestations and the initial steps in evaluation and management is essential for optimal care.
These complex infections pose challenges, which we will illustrate by presenting a case of CIED-related infection and reviewing key elements of diagnosis and management.
AN ILLUSTRATIVE CASE
A 60-year-old man had a permanent pacemaker implanted 3 months ago because of third-degree heart block; he now presents to his primary care physician with increasing pain, swelling, and erythema at the site of his pacemaker pocket. He has a history of type 2 diabetes mellitus, stage 3 chronic kidney disease, and coronary artery disease.
The symptoms started 2 weeks ago and have slowly progressed, prompting him to seek medical care. He is quite anxious and wants to know if he needs to arrange an emergency consultation with his cardiologist.
IMPORTANT CLINICAL QUESTIONS
This presentation raises several important questions:
- What should be the next step in his evaluation?
- Which laboratory tests should be done?
- Should he be admitted to the hospital, or can he be managed as an outpatient?
- Should he be started empirically on antibiotics? If so, which antibiotics? Or is it better to wait?
- When should an infectious disease specialist be consulted?
- Should the device be removed, and if so, all of it or which components?
- How long should antibiotics be given?
We will provide evidence-based answers to these questions in the discussions below.
PATHOGENESIS AND RISK FACTORS FOR DEVICE INFECTION
The first step in understanding the clinical manifestations of CIED-related infections is to grasp their pathogenesis. Risk factors for device infection have been evaluated in several studies.1
Several factors interact in the inception and evolution of these infections, some related to the care in the perioperative period, some to the device, some to the host, and some to the causative microorganism.5 Although any one of these may play a predominant role in a given patient, most patients have a combination.
Perioperative factors that may contribute to a higher risk of infection include device revision; use of temporary pacing leads before placement of the permanent device; lack of antibiotic prophylaxis before implantation; longer operative time; operative inexperience; development of postoperative pocket hematoma; and factors such as diabetes mellitus and long-term use of corticosteroids and other immunosuppressive drugs that impair wound healing at the generator pocket.6–11
Device factors. Abdominal generator placement, use of epicardial leads, and complexity of the device play a significant role.6,12,13 In general, implantable cardioverter-defibrillators and cardiac resynchronization therapy devices have higher rates of infection than permanent pacemakers.2,14
Host factors. Diseases and conditions that predispose to bloodstream infection may result in hematogenous seeding of the device and its leads and are associated with a higher risk of late-onset infection. These include an implanted central venous catheter (for hemodialysis or other long-term access), a distant focus of primary infection (such as pneumonia and skin and soft-tissue infections), and invasive procedures unrelated to the CIED.10,15
In general, contamination at the time of surgery leads to early-onset infection (ie, within weeks to months of implantation), whereas hematogenous seeding is a predominant factor in most patients with late-onset infection.16
STAPHYLOCOCCI ARE THE MOST COMMON CAUSE
A key to making an accurate diagnosis and determining the appropriate empiric antibiotic therapy is to understand the microbiology of device infections.
Regardless of the clinical presentation, staphylococci are the predominant organisms responsible for both early- and late-onset infections.17,18 These include Staphylococcus aureus and coagulase-negative staphylococci. Depending on where the implanting hospital is located and where the organism was acquired (in the community or in the hospital), up to 50% of these staphylococci may be methicillin-resistant,17,18 a fact that necessitates using vancomycin for empiric coverage until the pathogen is identified and its susceptibility is known.
Gram-negative or polymicrobial CIED infections are infrequent. However, empiric gram-negative coverage should be considered for patients who present with systemic signs of infection, in whom delaying adequate coverage could jeopardize the successful outcome of infection treatment.
Fungal and mycobacterial infections of cardiac devices are exceedingly uncommon, mainly occurring in immunocompromised patients.
CLINICAL MANIFESTATIONS OF CARDIOVASCULAR DEVICE INFECTION
The clinical presentations of CIED-related infection can be broadly categorized into two groups: generator pocket infection and endovascular infection with an intact pocket.17,18
Generator pocket infection
Most patients with a pocket infection present with inflammatory changes at the device generator site. Usual signs and symptoms include pain, erythema, swelling, and serosanguinous or purulent drainage from the pocket.
Patients with a pocket infection generally present within weeks to months of implantation, as the predominant mechanism of pocket infection is contamination of the generator or leads during implantation. However, occasionally, pocket infection caused by indolent organisms such as Propionibacterium, Corynebacterium, and certain species of coagulase-negative staphylococci can present more than 1 year after implantation. Hematogenous seeding of the device pocket, as a result of bacteremia from a distant primary focus, is infrequent except in cases of S aureus bloodstream infection.19
Endovascular infection with an intact pocket
A subset of patients with CIED-related infections, mostly late-onset infections, present only with systemic signs and symptoms without inflammatory changes at the generator pocket.16–18 Most of these patients have multiple comorbid conditions and likely acquire the infection via hematogenous seeding of transvenous device leads from a distant focus of primary infection, such as a skin or soft-tissue infection, pneumonia, bacteremia arising from an implanted long-term central venous catheter, or bloodstream infection secondary to an invasive procedure unrelated to the CIED.
Most patients with an endovascular device infection have positive blood cultures at presentation. However, occasionally, blood cultures may be negative. The main reason for negative blood cultures in this setting is the use of empiric antibiotic therapy before blood cultures are drawn.
Endovascular device infections are further complicated by the formation of infected vegetations on the leads or cardiac valves in up to one-fourth of cases.16–18,20,21 This complication poses additional challenges in management, such as choosing the appropriate lead extraction technique, the waiting time before implanting a replacement device, and the optimal length of parenteral antimicrobial therapy. Many of these decisions are beyond the realm of internal medicine practice and are best managed by consultation with an infectious disease specialist and a cardiologist.
DIAGNOSIS OF INFECTION AND ASSOCIATED COMPLICATIONS
The clinical diagnosis of pocket infection is usually quite straightforward. However, occasionally, an early postoperative pocket hematoma can mimic pocket infection, and distinguishing the two may be difficult. Close collaboration among the internist, cardiologist, and infectious disease specialist, together with careful observation of the patient, may help to avoid a premature and incorrect diagnosis of pocket infection and unnecessary removal of the device in this scenario.
While diagnosing a pocket infection may be simple, an accurate and timely diagnosis of endovascular infection with an intact pocket can be challenging, especially if echocardiography shows no conclusive evidence of involvement of the device leads. Even when the infection is limited to the generator pocket, attempts to isolate causative pathogens may be hampered if empiric antibiotic therapy is started before culture samples are obtained from the pocket and from the blood.
In patients with suspected CIED-related infection, the initial workup should include the following:
- Complete blood count with differential cell count.
- Electrolyte and serum creatinine concentrations.
- Inflammatory markers, including the erythrocyte sedimentation rate and C-reactive protein concentration.
Swabs for bacterial cultures should be sent if there is purulent drainage from the generator pocket. This can be done in the office before referral to the emergency department or a tertiary care center for inpatient admission. If the pocket appears swollen or fluctuant, needle aspiration should be avoided, as it can introduce organisms and cause contamination.5
Two sets of peripheral blood cultures should be obtained. If the patient has an implanted central venous catheter, blood cultures via each catheter port should also be obtained, as they may help to pinpoint the source of bloodstream infection in cases in which blood culture results are positive.
Transesophageal echocardiography (TEE) should be ordered in all patients with suspected device infection who have positive blood cultures, as it can identify intracardiac complications of infection and cardiac valve involvement. TEE should also be performed in patients with systemic signs and symptoms (such as fever, chills, malaise, dyspnea, hypotension, or peripheral stigmata of endocarditis) or abnormal test results (leukocytosis, elevated inflammatory markers, or evidence of pulmonary emboli on imaging), even if blood cultures are negative. TEE should likewise be considered in patients whose blood cultures may be negative as a result of previous antimicrobial therapy.
If a decision is made to remove the device (see below), intraoperative pocket tissue and lead-tip cultures should be sent for Gram staining and bacterial culture. Fungal and mycobacterial cultures may be necessary in immunocompromised hosts, or if Gram staining and bacterial cultures from pocket tissue samples are negative. Caution must be exercised when interpreting the results of lead-tip cultures, as lead tips may become contaminated while being pulled through an infected pocket during removal.20,22
This approach should lead to an accurate diagnosis of CIED-related infection and associated complications in most patients. However, the diagnosis may remain elusive if results of blood cultures are positive but the pocket is intact and there is no echocardiographic evidence of lead or valve involvement. This is especially true in cases of S aureus bacteremia, in which positive blood cultures may be the sole manifestation of underlying device infection.19,23 Factors associated with higher odds of underlying device infection in this scenario include bacteremia lasting more than 24 hours, prosthetic valves, bacteremia within 3 months of device implantation, and no alternative focus of bacteremia.12
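To make the preceding factors concrete, the short sketch below encodes the four predictors just listed as a simple screening check. The function name, parameter names, and the any-factor-present threshold are our own illustrative assumptions, not a validated rule from the cited studies.

```python
def occult_cied_infection_suspected(
    bacteremia_over_24_hours: bool,
    prosthetic_valve: bool,
    bacteremia_within_3_months_of_implantation: bool,
    no_alternative_focus_of_bacteremia: bool,
) -> bool:
    """Return True if any factor associated with higher odds of underlying
    device infection (as listed above) is present in a patient with S aureus
    bacteremia, an intact pocket, and no echocardiographic evidence of lead
    or valve involvement.
    """
    return any([
        bacteremia_over_24_hours,
        prosthetic_valve,
        bacteremia_within_3_months_of_implantation,
        no_alternative_focus_of_bacteremia,
    ])
```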
Evidence is emerging that underlying device infection should also be considered in patients with bloodstream infection with coagulase-negative staphylococci in the setting of an implanted device.24 On the other hand, seeding of device leads with gram-negative organisms is infrequent, and routine imaging of intracardiac leads is not necessary in cases of gram-negative bacteremia.25
In our opinion, cases of bacteremia in which underlying occult device infection is a concern are best managed by consultation with an infectious disease specialist.
A STEPWISE APPROACH TO MANAGING DEVICE INFECTION
Should antibiotics be started empirically?
The first step in managing CIED-related infection is to decide whether empiric antibiotic therapy should be started immediately once infection is suspected or if it is prudent to wait until the culture results are available.
In our opinion, if the infection is limited to the generator pocket, it is reasonable to wait until immediately before surgery to maximize the culture yield from pocket tissue samples. An exception to this rule is when systemic signs or symptoms are present, in which case delaying antibiotic therapy could jeopardize the outcome (Figure 2). In such cases, empiric antibiotic therapy can be started once two sets of peripheral blood samples for cultures have been obtained.
Which antibiotics should be given empirically?
Because gram-positive organisms, namely coagulase-negative staphylococci and S aureus, are the causative pathogens in most cases of CIED-related infection, empiric antibiotic therapy should provide adequate coverage for these organisms. Because methicillin resistance is quite prevalent in staphylococci, we routinely use vancomycin (Vancocin) for empiric coverage. In patients who are allergic to vancomycin or cannot tolerate it, daptomycin (Cubicin) is an alternative.
Empiric gram-negative coverage is generally reserved for patients who present with systemic signs and symptoms, in whom delaying adequate coverage could have untoward consequences. We routinely use cefepime (Maxipime) for empiric gram-negative coverage in our institution. Other beta-lactam agents that provide coverage for gram-negative bacilli, especially Pseudomonas, are also appropriate in this setting.
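The empiric-therapy logic of the two questions above can be condensed into a small decision sketch. The agents named come from the preceding paragraphs; the function itself, its parameters, and the empty-regimen convention for "wait for cultures" are illustrative assumptions.

```python
def empiric_regimen(systemic_signs: bool, vancomycin_intolerant: bool) -> list[str]:
    """Sketch of the empiric antibiotic choices described above.

    Infection limited to the generator pocket: hold antibiotics until
    intraoperative pocket-tissue and blood cultures are obtained.
    Systemic signs or symptoms: start therapy once two sets of peripheral
    blood cultures have been drawn, covering gram-positive organisms and
    adding gram-negative (including Pseudomonas) coverage.
    """
    if not systemic_signs:
        return []  # wait, to maximize culture yield from pocket tissue

    # Methicillin resistance is prevalent in staphylococci, so vancomycin is
    # used for gram-positive coverage, with daptomycin as the alternative.
    gram_positive = "daptomycin" if vancomycin_intolerant else "vancomycin"

    # Cefepime (or another antipseudomonal beta-lactam) for gram-negative coverage.
    return [gram_positive, "cefepime"]
```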
Should the device be removed?
Superficial infection of the wound or incision site (eg, stitch abscess) early after implantation can be managed conservatively with antibiotic therapy, without removing the device. However, complete removal of the device system, including intracardiac leads, is necessary in all other presentations of device infection, even if the infection appears limited to the generator pocket.5,12 Leaving the device in place or removing only part of it is associated with persistent or relapsed infection and is not advisable.17,26
Leaving the device in place may be necessary in extenuating circumstances, eg, if surgery would be too risky for the patient or if the patient refuses device removal or has a short life expectancy. In these cases, lifelong suppressive antibiotic therapy should be prescribed after an initial course of parenteral antibiotics.27 Antibiotic choices for long-term suppressive therapy should be guided by antimicrobial susceptibility testing and consultation with an infectious disease specialist.
How should the leads be removed?
Leads are extracted percutaneously in most cases. Percutaneous extraction is generally considered safe even when the infection is complicated by lead vegetations, although vegetations raise concern about pulmonary embolization of detached fragments during extraction.5,20
Thoracotomy is generally reserved for patients who have cardiac complications (such as a cardiac abscess or the need to replace cardiac valves) or in whom attempts to extract the leads percutaneously are unsuccessful.
Details of the removal procedure and choice of extraction technique are beyond the scope of this paper and are best left to the discretion of the treating cardiologist or cardiac surgeon. Because of the potential for complications during percutaneous device removal, such as laceration of the superior vena cava or cardiac tamponade, the patient should be referred to a high-volume center where cardiothoracic intervention can be provided on an emergency basis if needed.
How long should antibiotic therapy continue?
An algorithm for deciding the duration of antibiotic therapy is shown in Figure 3. These guidelines, first published in 2007,17 were adopted by the American Heart Association in its updated statement on the management of CIED-related infections.5 However, it should be noted that these guidelines are not based on randomized clinical trials; rather, they represent expert opinion based on published series of patients with CIED-related infections.
In general, cases of device erosion or pocket infection can be treated with 1 to 2 weeks of appropriate antibiotic therapy based on antimicrobial susceptibility testing. However, cases of bloodstream infection require 2 to 4 weeks of antibiotic therapy—or sometimes even longer if associated complications are present, such as septic thrombosis, endocarditis, or osteomyelitis.
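As a rough summary, the mapping from presentation to treatment duration can be sketched as follows. The week ranges are those given above; the open-ended upper bound for complicated bloodstream infection reflects the "sometimes even longer" wording, and the function is only an illustration of this expert-opinion guidance, not a substitute for the algorithm in Figure 3.

```python
from typing import Optional, Tuple

def antibiotic_duration_weeks(
    presentation: str,
    complicated: bool = False,
) -> Tuple[int, Optional[int]]:
    """Return (minimum, maximum) weeks of therapy per the expert-opinion
    ranges summarized above; max is None when no fixed upper bound is given.
    """
    if presentation in ("device erosion", "pocket infection"):
        return (1, 2)          # susceptibility-guided therapy
    if presentation == "bloodstream infection":
        if complicated:        # septic thrombosis, endocarditis, osteomyelitis
            return (2, None)   # "sometimes even longer" -- no fixed upper bound
        return (2, 4)
    raise ValueError(f"unrecognized presentation: {presentation!r}")
```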
We favor parenteral antibiotics for the entire course of treatment. However, patients can be discharged from the hospital once the bloodstream infection has cleared, and the antibiotic course can be completed on an outpatient basis.
Outpatient antimicrobial monitoring
We recommend adherence to the Infectious Diseases Society of America’s guidelines for monitoring outpatient parenteral antimicrobial therapy.28
At discharge from the hospital, patients should be instructed to promptly call their primary care physician if they have a fever or notice inflammatory changes at the pocket site. If the patient reports such symptoms, repeat blood cultures should be ordered, and the patient should be monitored closely for signs of a relapse of infection.
A routine follow-up visit should be arranged at 2 weeks and at the end of parenteral antibiotic therapy (for patients receiving therapy for 4 weeks or longer) to make sure the infection has resolved.
When should a new device be implanted?
Before deciding when a new device should be implanted, one should carefully assess whether the patient still needs one. Studies indicate that up to 30% of patients may no longer require a cardiac device.17,18
If a device is still needed, we believe that removal of drains and closure of the old pocket need not be completed before a new device is implanted at a different site (usually the contralateral pectoral area). An exception to this general principle is valvular endocarditis, in which a minimum of 2 weeks is recommended between removal of the infected device (and clearance of the bloodstream infection) and implantation of a new device.
OUTCOMES OF INFECTION
Despite improvements in our understanding of how to manage CIED-related infection, the rates of morbidity and death remain significant.
The outcome, in part, depends on the clinical presentation and the patient’s comorbid conditions. In general, the death rate in patients with a pocket infection is less than 5%. However, in patients with endovascular infection, it may be as high as 20%.16–18 Other factors that affect the outcome include complications such as septic thrombosis, valvular endocarditis, or osteomyelitis; complications during device extraction; the need for open heart surgery; and the overall health of the patient.
Complete removal of the device system is a requisite for successful outcome, and the risk of death tends to be higher if only part of the infected CIED system is extracted.26
STRATEGIES TO PREVENT DEVICE INFECTION
Preventive efforts should focus on strategies to minimize the chances of contamination of the generator, leads, and pocket during implantation.29 Patients who are known to be colonized with methicillin-resistant S aureus may benefit from decolonization programs, which should include nasal application of mupirocin (Bactroban) ointment preoperatively.30 In addition, use of chlorhexidine for surgical-site antisepsis has been shown to reduce the risk of surgical site infection.31
Moreover, all patients should receive antibiotic prophylaxis before implantation of a CIED.32,33 Most institutions use a first-generation cephalosporin, such as cefazolin (Ancef), for this purpose.34 However, the increasing rate of methicillin resistance in staphylococci has led to the routine use of vancomycin for preoperative prophylaxis at some centers.18
Regardless of the antibiotic chosen for prophylaxis, protocols that ensure that all patients receive an appropriate antibiotic at the appropriate time are a key determinant in the success of these infection-control programs.
- Sohail MR, Wilson WR, Baddour LM. Infections of nonvalvular cardiovascular devices. In: Mandell GL, Bennett JE, Dolin R, editors. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. Philadelphia: Churchill Livingstone/Elsevier; 2010:1127–1142.
- Voigt A, Shalaby A, Saba S. Rising rates of cardiac rhythm management device infections in the United States: 1996 through 2003. J Am Coll Cardiol 2006; 48:590–591.
- Darouiche RO. Treatment of infections associated with surgical implants. N Engl J Med 2004; 350:1422–1429.
- Ferguson TB, Ferguson CL, Crites K, Crimmins-Reda P. The additional hospital costs generated in the management of complications of pacemaker and defibrillator implantations. J Thorac Cardiovasc Surg 1996; 111:742–751.
- Baddour LM, Epstein AE, Erickson CC, et al. Update on cardiovascular implantable electronic device infections and their management: a scientific statement from the American Heart Association. Circulation 2010; 121:458–477.
- Klug D, Balde M, Pavin D, et al; PEOPLE Study Group. Risk factors related to infections of implanted pacemakers and cardioverter-defibrillators: results of a large prospective study. Circulation 2007; 116:1349–1355.
- Sohail MR, Hussain S, Dib C, et al. Risk factor analysis of implantable cardioverter-defibrillator infections. Interscience Conference on Antimicrobial Agents and Chemotherapy (ICAAC). Boston, MA, Sept. 12–15, 2010.
- Lai KK, Fontecchio SA. Infections associated with implantable cardioverter defibrillators placed transvenously and via thoracotomies: epidemiology, infection control, and management. Clin Infect Dis 1998; 27:265–269.
- Mela T, McGovern BA, Garan H, et al. Long-term infection rates associated with the pectoral versus abdominal approach to cardioverter-defibrillator implants. Am J Cardiol 2001; 88:750–753.
- Al-Khatib SM, Lucas FL, Jollis JG, Malenka DJ, Wennberg DE. The relation between patients’ outcomes and the volume of cardioverter-defibrillator implantation procedures performed by physicians treating Medicare beneficiaries. J Am Coll Cardiol 2005; 46:1536–1540.
- Lekkerkerker JC, van Nieuwkoop C, Trines SA, et al. Risk factors and time delay associated with cardiac device infections: Leiden device registry. Heart 2009; 95:715–720.
- Sohail MR, Sultan OW, Raza SS. Contemporary management of cardiovascular implantable electronic device infections. Expert Rev Anti Infect Ther 2010; 8:831–839.
- Sohail MR, Uslan DZ, Khan AH, et al. Risk factor analysis of permanent pacemaker infection. Clin Infect Dis 2007; 45:166–173.
- Uslan DZ, Sohail MR, St Sauver JL, et al. Permanent pacemaker and implantable cardioverter defibrillator infection: a population-based study. Arch Intern Med 2007; 167:669–675.
- Bloom H, Heeke B, Leon A, et al. Renal insufficiency and the risk of infection from pacemaker or defibrillator surgery. Pacing Clin Electrophysiol 2006; 29:142–145.
- Le KY, Sohail MR, Friedman PA, et al; Mayo Cardiovascular Infections Study Group. Clinical predictors of cardiovascular implantable electronic device-related infective endocarditis. Pacing Clin Electrophysiol 2011; 34:450–459.
- Sohail MR, Uslan DZ, Khan AH, et al. Management and outcome of permanent pacemaker and implantable cardioverter-defibrillator infections. J Am Coll Cardiol 2007; 49:1851–1859.
- Tarakji KG, Chan EJ, Cantillon DJ, et al. Cardiac implantable electronic device infections: presentation, management, and patient outcomes. Heart Rhythm 2010; 7:1043–1047.
- Chamis AL, Peterson GE, Cabell CH, et al. Staphylococcus aureus bacteremia in patients with permanent pacemakers or implantable cardioverter-defibrillators. Circulation 2001; 104:1029–1033.
- Sohail MR, Uslan DZ, Khan AH, et al. Infective endocarditis complicating permanent pacemaker and implantable cardioverter-defibrillator infection. Mayo Clin Proc 2008; 83:46–53.
- Arber N, Pras E, Copperman Y, et al. Pacemaker endocarditis. Report of 44 cases and review of the literature. Medicine (Baltimore) 1994; 73:299–305.
- Sohail MR. Concerning diagnosis and management of pacemaker endocarditis [letter]. Pacing Clin Electrophysiol 2007; 30:829.
- Uslan DZ, Dowsley TF, Sohail MR, et al. Cardiovascular implantable electronic device infection in patients with Staphylococcus aureus bacteremia. Pacing Clin Electrophysiol 2009; 33:407–413.
- Madhavan M, Sohail MR, Friedman PA, et al. Outcomes in patients with cardiovascular implantable electronic devices and bacteremia due to Gram-positive cocci other than Staphylococcus aureus. Circ Arrhythm Electrophysiol 2010; 3:639–645.
- Uslan DZ, Sohail MR, Friedman PA, et al. Frequency of permanent pacemaker or implantable cardioverter-defibrillator infection in patients with gram-negative bacteremia. Clin Infect Dis 2006; 43:731–736.
- Margey R, McCann H, Blake G, et al. Contemporary management of and outcomes from cardiac device related infections. Europace 2010; 12:64–70.
- Baddour LM. Long-term suppressive antimicrobial therapy for intravascular device-related infections. Am J Med Sci 2001; 322:209–212.
- Tice AD, Rehm SJ, Dalovisio JR, et al. Practice guidelines for outpatient parenteral antimicrobial therapy. IDSA guidelines. Clin Infect Dis 2004; 38:1651–1672.
- Wenzel RP. Minimizing surgical-site infections. N Engl J Med 2010; 362:75–77.
- Bode LGM, Kluytmans JAJW, Wertheim HFL, et al. Preventing surgical-site infections in nasal carriers of Staphylococcus aureus. N Engl J Med 2010; 362:9–17.
- Darouiche RO, Wall MJ, Itani KMF, et al. Chlorhexidine-alcohol versus povidone-iodine for surgical-site antisepsis. N Engl J Med 2010; 362:18–26.
- Da Costa A, Kirkorian G, Cucherat M, et al. Antibiotic prophylaxis for permanent pacemaker implantation: a meta-analysis. Circulation 1998; 97:1796–1801.
- de Oliveira JC, Martinelli M, Nishioka SA, et al. Efficacy of antibiotic prophylaxis before the implantation of pacemakers and cardioverter-defibrillators: results of a large, prospective, randomized, double-blinded, placebo-controlled trial. Circ Arrhythm Electrophysiol 2009; 2:29–34.
- Bertaglia E, Zerbo F, Zardo S, Barzan D, Zoppo F, Pascotto P. Antibiotic prophylaxis with a single dose of cefazolin during pacemaker implantation: incidence of long-term infective complications. Pacing Clin Electrophysiol 2006; 29:29–33.
KEY POINTS
- Although inflammatory signs at the generator pocket are the most common presentation of an infection occurring soon after the device is implanted, positive blood cultures may be the sole manifestation of a late-onset endovascular infection.
- Staphylococci are the most common pathogens in both pocket infections and endovascular infections.
- Two sets of blood cultures should be obtained in all patients suspected of having a cardiac device infection.
- Transesophageal echocardiography should be ordered in all patients with suspected cardiac device infection who have positive blood cultures, as it can identify intracardiac complications of infection and assess for evidence of cardiac valve involvement.
Out of Morpheus’ embrace
Much of the data are cross-sectional and epidemiologic, so the direction of causation (if causation exists) cannot be established with certainty. There is a host of interwoven confounders, and many of these intersect around the patient’s weight and the presence of sleep apnea. Nevertheless, the authors explore some provocative associations.
Over the years, clinicians have increasingly recognized the myriad of comorbidities that accompany sleep apnea. We have discussed this in the Journal on several occasions since 2005. Naïvely, I have attributed many of these, particularly the cardiac complications, to downstream effects of repetitive hypoxic and hypercarbic insults, but there may be more fundamental physiologic principles in play, some linked to the affected sleep cycle and not to the apnea.
Drs. Touma and Pannain discuss some of the physiologic consequences of altered or decreased sleep cycles. Some of these are a result of disrupting the circadian release of hormones such as glucocorticoids and growth hormone, both of which can influence the body’s sensitivity to insulin’s hypoglycemic effects. The same can be said for disruption of normal sympathetic-parasympathetic nerve flow. In addition, sleep disruption affects appetite. Thinking back to residency, I recall the need to follow the admonition of one of my peers: in order to survive nights on call, never miss a meal. I still remember the (leptin-linked?) cravings after being up all night for a heavy carbohydrate-laden breakfast. Given these effects, coupled with the fatigue of sleep deprivation resulting in decreased exercise, it is easy to construct innumerable positive feedback loops contributing to the development of insulin resistance and type 2 diabetes.
So while it is a truism that sleep is good and that we all need to “recharge our batteries,” we still lack a full understanding of the complex physiology of sleep and the effects of sleep deprivation on a number of clinical conditions, from diabetes to fibromyalgia.
Recognizing the associations is a beginning. Knowing what to do about defective sleep in terms of preventing or ameliorating disease awaits appropriately controlled interventional trials—and the definition of appropriate interventions to evaluate.