Kidneys have a lot of nerve
Wearing my rheumatologist hat, I know that patients are not sent to me for management of their hypertension. Certainly, I play an active role in dictating aggressive blood pressure control in patients with renal vasculitis and lupus nephritis as an integral part of their therapy, and conversely, I contribute to the difficulty in controlling blood pressures of those relatively few patients to whom I recommend full-dose nonsteroidal anti-inflammatory drugs. But for the most part, I am an (occasionally silent) voyeur, observing the blood pressure management of patients who are managed by others.
It is striking how many patients show up in my office with blood pressures outside the range advocated by current guidelines. Some pressures “normalize” when I recheck them after quiet conversation, sometimes using a larger, more appropriately sized cuff. But most do not.
Many explanations are offered. The usual is that their pressure is “just up in the doctor’s office” (when else are they carefully checked?), but few of these patients have undergone 24-hour ambulatory monitoring to diagnose “white coat hypertension” or to assess whether a normal physiologic pattern of nocturnal “dipping” is present. Some are already taking one or more antihypertensive drugs, yet their blood pressure is above the recommended target. Infrequently are the drugs pushed to their maximally tolerated dose.
From my practice experience, it seems that most patients with imperfectly controlled blood pressure do not fit the definition of resistant hypertension (inadequate response to three appropriate drugs in maximally tolerated doses). But resistant hypertension is also a problem affecting many patients and is in need of a solution.
In this issue, Thomas et al describe a novel approach undergoing clinical testing—catheter-based renal denervation. Early results are encouraging. But hypertension is a heterogeneous condition, and in a physiologically based therapy, the underlying pathophysiology may dictate the response and side effects of denervation in specific patients.
A recent study showed that denervation was effective in a few patients with chronic kidney disease, normalizing nocturnal dipping without further reducing renal function.1 But careful attention will need to be focused on patients who are likely reliant on interorgan neural communication. What will be the systemic effect if a patient who has undergone renal denervation develops severe cirrhosis and is in need of hepatorenal reflexes, or if a treated patient develops new severe congestive heart failure or sleep apnea? As appropriately stated in this issue by Thomas et al and by Bhatt, some optimism for the promise of this technique is justifiable, but we really will need studies large enough to include appropriate subsets for the analysis of both safety and efficacy.
- Hering D, Mahfoud F, Walton AS, et al. Renal denervation in moderate to severe CKD. J Am Soc Nephrol 2012 [Epub ahead of print].
Fire, skin, and fat: Inflammation, psoriasis, and cardiovascular disease
Perhaps 3% of the population has psoriasis. Thus, it is impossible to practice any aspect of internal medicine without encountering patients with this disease.
In this issue of the Journal, Dr. Jennifer Villaseñor-Park and her colleagues discuss the clinical patterns and management of psoriasis and the links between psoriasis and cardiovascular disease—links that should bind the internist and dermatologist in a shared mission of comanagement.
The connection between inflammation and atherosclerosis is now well known. Many of the same cellular and biochemical players have active roles in the inflammation of rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and atherosclerosis. The observation that patients with inflammatory diseases have a higher prevalence of cardiovascular disease seems to strengthen this apparent link and supports the concept that drugs used to treat inflammation in the joints and skin might also reduce the burden of cardiovascular disease.
But addressing this risk is not so straightforward. Since the increased cardiovascular risk in rheumatoid arthritis and systemic lupus erythematosus is not completely explained by traditional risk factors, research is ongoing to identify the potential mechanisms of this risk, such as high-density lipoprotein particles modified by inflammation and high circulating levels of interferon, both of which may be atherogenic. It remains to be seen whether these and other potential nonclassic mediators of atherosclerosis can be targeted and cardiovascular events reduced.
But psoriasis is a little different. Compared with patients with rheumatoid arthritis and lupus (if they have not been affected by corticosteroid treatment), patients with psoriasis tend to be heavier and to have a higher prevalence of fatty liver disease and the metabolic syndrome. A debate continues as to whether psoriasis per se is a unique risk factor for cardiovascular disease or whether in fact these comorbidities constitute the major risk for cardiovascular events in patients with psoriasis.
The epidemiologists can continue to crunch the data in attempts to attribute the relative risks of poor outcome. But in the office, we should be vigilant and, in patients with psoriasis, should not ignore the traditional cardiovascular risk factors included in the metabolic syndrome, which is more prevalent in these patients.
Lung cancer screening: One step forward
Screening seems to be such an easy concept: look for cancer before it is symptomatic, find it at an early stage, and treat it. We should be more able to cure cancer if it is found during screening, or at least to significantly prolong the patient’s survival by slowing the cancer’s growth and metastasis. But exactly which screening strategies save lives (and what level of efficacy is cost-effective and risk-acceptable to society and individuals) has turned out to be difficult to prove in clinical trials.
For screening to be efficacious, the test must be able to detect cancer at a stage at which early treatment makes a difference. Herein lie two challenges. A person with a cancer that grows so slowly that early treatment may not make a survival difference will not benefit from screening, and neither will someone with a cancer so aggressive that early treatment will not significantly alter its malignant course. The first scenario is called “overdiagnosis”—a diagnosis made during screening that may not affect the prognosis but can lead to significant anxiety as well as additional testing and treatments, with associated costs. This has yet to be fully addressed in lung cancer screening using repeated CT imaging, but it has been discussed in breast and prostate screening.
Other challenges include how individual physicians will implement a successful lung screening program, which is more complex than yearly mammography, requiring consecutive yearly CT screening with tracking of specific results and incidental findings. How will screening be limited to appropriate patients, as dictated by trial results? Will CT review be as successful in the community as it was in trial centers of excellence? Since smoking (an act of personal choice) is the major risk factor that warrants screening, who should bear the cost?
Then there are potential unintended consequences. What if lung cancer screening makes current smokers more complacent about continuing to smoke? We must increase our educational efforts on smoking cessation, efforts that I sense are having a disappointingly limited impact on the younger generation.
Examine before ordering: An algorithm unchanged by new tests
We rheumatologists may have inadvertently encouraged this practice. We teach about the prevalence of specific autoantibodies in patients with specific, accurately diagnosed autoimmune disorders as opposed to that in the general population (ie, the test’s sensitivity and specificity). But that is different from using a test to diagnose a specific disease in an ill patient with a heretofore undiagnosed condition (ie, the test’s predictive value). When I ask trainees or nonrheumatologists, “Why order all those tests?” the response I often get is that they thought the rheumatologist would want them when he or she was consulted. The fact that I also see our rheumatology fellows requesting the same tests before fully evaluating the patient clinically suggests that we have not done a great job of explaining the clinical utility and limitations of these tests. A serologic test should be used to strengthen or refute the clinician’s preliminary diagnosis, depending on the test’s specificity and sensitivity. It should not be used to generate a diagnosis.
So with these concerns, why would we invite a paper encouraging the use of the relatively new anti-cyclic citrullinated peptide (anti-CCP) test to evaluate patients with possible rheumatoid arthritis (Bose and Calabrese)?
As discussed in that paper, this test has characteristics that are useful when evaluating patients with polyarthritis compatible with the diagnosis of rheumatoid arthritis. Specifically, this test, unlike the traditional test for rheumatoid factor, can help discern whether the arthritis is a reaction to an infection like hepatitis C or endocarditis. Like rheumatoid factor, anti-CCP may precede the appearance of clinically meaningful arthritis and helps to predict prognosis in established rheumatoid arthritis. But, like other serologic tests, the anti-CCP test cannot supplant the listening ears and examining fingers of the clinician in establishing the pretest likelihood of the diagnosis. Clinical evaluation must precede laboratory testing.
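To make the point about predictive value concrete, consider a brief sketch (with hypothetical round numbers, not figures drawn from the anti-CCP literature). Applying Bayes’ rule to a test that is 95% sensitive and 95% specific shows that, ordered indiscriminately in a low-prevalence setting, most positive results are false positives, whereas the same positive result in a patient whose history and examination already suggest rheumatoid arthritis is highly informative.

```python
# Minimal sketch: positive predictive value depends on pretest probability,
# not just on a test's sensitivity and specificity. All numbers are
# hypothetical round figures used only for illustration.

def positive_predictive_value(sensitivity, specificity, pretest_probability):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * pretest_probability
    false_pos = (1 - specificity) * (1 - pretest_probability)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test (95% sensitive, 95% specific) in two settings:
scenarios = [
    ("ordered without clinical suspicion (~1% pretest probability)", 0.01),
    ("inflammatory polyarthritis on examination (~50% pretest probability)", 0.50),
]
for label, pretest in scenarios:
    ppv = positive_predictive_value(0.95, 0.95, pretest)
    print(f"{label}: PPV = {ppv:.0%}")
# Prints roughly 16% versus 95%: the identical positive result means very
# different things, which is why clinical evaluation must precede testing.
```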
Exploring the human genome, and relearning genetics by necessity
The ability to scan the entire human genome and to recognize variations in specific nucleotides within recognized genes is more than a technologic feat. It is now possible to assess the risk of some genetic diseases before they are phenotypically expressed. We are increasingly able to predict whether specific drugs will be effective or pose higher risks of adverse effects in individual patients, a field called pharmacogenomics. How much pharmacogenomics can and should be incorporated into our practice as part of personalized medicine remains to be determined.
Genome-wide association studies can answer certain research questions, but also raise additional ones. In some ways, these studies are like molecular epidemiology—they can demonstrate a statistical association between a risk factor and a clinical event such as a heart attack, but just as in traditional epidemiologic studies, association does not always equate with causation.
As discussed by Drs. Manace and Babyatsky in this issue of the Journal, additional techniques can be used to try to sort out the issue of association vs causation—in this case, whether C-reactive protein (CRP) is merely associated with cardiovascular events or is a cause of them. Using the tools of traditional clinical research, it would be ideal to demonstrate that the use of a highly specific inhibitor of the risk factor (CRP) prevents the disease. CRP levels can be lowered with statins, but these drugs also reduce levels of low-density lipoprotein cholesterol, which will lower the risk of cardiac events. Thus, statins do not have the specificity to prove that CRP causes myocardial infarction.
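A toy simulation can make the logic of that limitation explicit. In the sketch below (an illustrative model, not data from any trial), myocardial infarction risk is driven entirely by LDL, CRP merely tracks LDL, and a “statin” lowers LDL. Event rates and CRP both fall on treatment even though CRP is causally inert in the model, which is exactly why a nonspecific intervention cannot settle the causation question.

```python
# Illustrative-only simulation of confounded intervention effects.
# Assumptions (not trial data): MI risk depends solely on LDL; CRP is a
# passive marker that correlates with LDL; the "statin" lowers LDL by 35%.
import random

random.seed(0)

def simulate(statin, n=100_000):
    events = 0
    mean_crp = 0.0
    for _ in range(n):
        ldl = random.gauss(160, 30)              # mg/dL, hypothetical distribution
        if statin:
            ldl *= 0.65                          # assumed LDL reduction on treatment
        crp = 0.02 * ldl + random.gauss(0, 1)    # CRP tracks LDL; no causal role
        mean_crp += crp / n
        # Event probability depends on LDL alone in this toy model
        if random.random() < min(1.0, max(0.0, (ldl - 70) / 1000)):
            events += 1
    return events / n, mean_crp

for arm in (False, True):
    rate, crp = simulate(arm)
    print(f"statin={arm}: event rate {rate:.3f}, mean CRP {crp:.2f}")
# Both the event rate and CRP fall on the statin arm, yet lowering CRP per se
# accomplished nothing here: the observed association is not causation.
```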
This paper is one of the first in the Journal to discuss advances in genomics that may affect our practice. Beginning in May, the Journal will begin a new series on personalized medicine to highlight the role that genetics and molecular medicine can play in our clinical practice and in our understanding of pathophysiology.
Talking to patients: Barriers to overcome
Cultural diversity is indeed a barrier we need to clear to provide good health care to all. But the challenge of physician-patient communication goes beyond differences in sex, race, ethnicity, age, and level of literacy. Dialogue between physicians and patients is not always easy. There are barriers everywhere that can obstruct our best plans and impede a successful clinical outcome. And we may not even realize that the patient has hit a barrier until long after the visit, when we discover that medication has been taken “the wrong way” or not at all, that studies were not obtained, or that follow-up visits were not arranged.
Communication barriers include use of medical terms that we assume patients understand, lack of attention to clues of anxiety in our patients or their families that will adversely affect their memory of the visit, not finding out the patient’s actual concerns, and loss of the human connection in our rush to finish charting and to stay on time. But it is this connection that often drives the action plan to a successful conclusion.
What can we do in this era of one patient every 15 minutes? Try to make a genuine connection with every patient. This will enhance engagement and the retention of knowledge. Address the patient’s concerns, not just our own. Write legibly, or type in the patient instruction section of the electronic medical record, the key messages from the visit—diagnosis, plan, tests yet to be done—and give this to the patient at every visit. It is not insulting to do this, nor is it insulting to explain the details of what may seem like an intuitively obvious procedure or therapy. Ask the patient what his or her major concern is, and be sure to address it.
Often, the biggest barrier is that we physicians forget that each patient comes to us with a unique set of fears, rationalizations, and biases that we need to address (even if initially unspoken), just as we address the challenges of diagnosis and therapy. Patients don’t all think like doctors, but we need to be able to think like patients.
Bugs, pundits, evolution, and the New Year
The article on soft-tissue infections in this issue of the Journal by Dr. Sabitha Rajan made me reflect on the relentless march of biology. Pathogens continue to evolve, influenced by human behavior but untouched by self-promoting and partisan dialogue and undaunted by doubting politicians. Several years ago, we could assume that most skin pathogens would readily be controlled by normal body defenses, a few requiring cephalosporin therapy and even fewer needing surgical intervention. But now, environmental pressures, including the zealous use of antibiotics, have altered the microbiology of skin infections. This requires new choices for empiric antibiotic therapy of these infections. With more than just altered susceptibility profiles, these bugs exhibit biologic behaviors distinct from their historic predecessors. The “spider bite” lesion of MRSA and the scarily rapid advance of certain streptococcal infections across tissue planes mandate prompt recognition by astute clinicians—the physical examination still matters.
The brisk evolutionary pace of this new range of infections stokes the urgent need to rapidly develop novel antibiotics, a process caught smack in the middle of our pundits’ political debates. Will the development of drugs for uncommon but serious infections be underwritten by the government, or will companies be required to bear the full expense of developing drugs under the scrutiny of the FDA? Will they then be pressed to price them “affordably” or price them to recoup estimated development costs, only to have payors list them as “third-tier” on the formulary, thus making them unaffordable to many patients? Our ability to medically confront this evolution will be directly affected by the outcome of the current political debate. Will all patients be able to easily access medical care so that early significant infections are recognized for what they are, and will the new antibiotics required for appropriate treatment be affordable? This year is going to be an interesting one.
So, as empiric therapy with cephalexin changes to clindamycin and 2011 rolls into 2012, I and our editorial staff offer our sincere wishes for a healthy, happy, and especially a peaceful New Year.
Quality, frailty, and common sense
Congestive heart failure, as noted in the review by Samala et al in this issue of the Journal, is more prevalent in the elderly. Particularly in the frail elderly, managing severe congestive heart failure poses ethical, socioeconomic, and medical challenges. The presence of even subtle cognitive impairment requires detailed dialogue with family and caregivers about medications and about symptoms that warrant a trip to the emergency room. Patients on a fixed income may not be able to afford their medications and thus may use them sporadically. And the preprepared foods they often eat are laden with sodium.
The symptoms of congestive heart failure may easily go unrecognized or be attributed to other common problems. Sorting out the reasons for exertional fatigue, especially a generalized sense of fatigue, can be particularly vexing. Anemia and sarcopenia can directly cause exertional fatigue or “weakness” but may also exacerbate heart failure and cause similar symptoms. Pharmacologic and dietary causes for volume overload must be sought. Even intermittent use of over-the-counter nonsteroidal anti-inflammatory drugs can be problematic.
Severe congestive heart failure is a lethal disease. Current quality guidelines for its treatment emphasize the use of multiple drugs and devices. Yet vasoactive drugs may not be well tolerated in frail patients, who are particularly vulnerable to orthostatic hypotension and cerebral hypoperfusion. Digoxin, of marginal benefit in younger patients without tachyarrhythmias, has an even more tenuous risk-benefit ratio in the frail elderly. Beta-blockers may cause fatigue and depression, and even low-dose diuretics can exacerbate symptoms of bladder dysfunction. Previously implanted defibrillators may be inconsistent with the patient’s current end-of-life desires.
Ideal management of the genuinely frail elderly patient with severe congestive heart failure is not always a matter of ventricular assist devices, biventricular pacers, or angiotensin-converting enzyme inhibitors. At some point, referral to palliative care resources, guided by informed input from the patient, family members, and caregivers, may be the most appropriate high-quality care that we can (and should) offer.
The bittersweet of steroid therapy
For many of those long-term effects, such as osteoporosis, cushingoid features, skin fragility, and cataracts, all we can do is hope that they don’t occur, since there is little we can do to screen for or prevent them. We have previously discussed steroid-associated osteoporosis in the Journal,1 and strategies for preventing it have been proposed by specialty societies.2 For other complications such as hypertension, weight gain, and glucose intolerance, we can offer common-sense protective suggestions, monitor for them, and intervene if they occur.
In this issue, Dr. M. Cecilia Lansang and Ms. Leighanne Kramer Hustak3 discuss the management of steroid-induced adrenal suppression and diabetes. They offer practical management suggestions but also point out that the evidence base for our treatment decisions is surprisingly limited.
Nearly all patients chronically receiving high-dose glucocorticoid therapy develop glucose intolerance, but knowing when that is happening is not always easy. In patients destined to develop type 2 diabetes, the laboratory or clinical signs of hyperglycemia appear only when the pancreas can no longer maintain the insulin production necessary to overcome peripheral insulin resistance. Steroid-induced diabetes is characterized by increased gluconeogenesis, insulin resistance, and excessive postprandial surges, so fasting glucose levels are not sensitive for this clinical syndrome.
The degree and duration of the chronic hyperinsulinemia and hyperglycemia dictate the risk of microvascular complications and thus will be linked to the duration of steroid therapy (unless the steroid is unmasking preexisting mild diabetes). Although issues surrounding tight control of blood glucose levels in the acute setting remain unresolved, I believe that even short-term significant steroid-induced hyperglycemia should be prevented when reasonably possible, keeping in mind at the least the additive ill effects of hyperglycemia and steroid therapy on the risk of nuisance infections such as oral and vaginal candidiasis and urinary tract infections that, in the setting of high-dose steroid therapy, can rapidly turn nasty.
- Dore RK. How to prevent glucocorticoid-induced osteoporosis. Cleve Clin J Med 2010; 77:529–536.
- American College of Rheumatology Ad Hoc Committee on Glucocorticoid-Induced Osteoporosis. Recommendations for the prevention and treatment of glucocorticoid-induced osteoporosis: 2001 update. Arthritis Rheum 2001; 44:1496–1503.
- Lansang MC, Hustak LK. Glucocorticoid-induced diabetes and adrenal suppression: how to detect and manage them. Cleve Clin J Med 2011; 78:748–756.
A discussion of dissection
Dr. Alan C. Braverman, in this issue of the Journal, discusses thoracic aortic dissection. To most of us who do not routinely treat aortic disease, it may not seem that much has changed since that Thanksgiving in Philadelphia. Atherosclerosis is still a common risk, surgery is the treatment for ascending dissection, beta-blockers are useful for chronic descending dissections, and the mortality rate is enormously high when dissections bleed.
As internists, we consider the possibility of genetic disorders in patients with a family history of dissection or aneurysm, but we don’t really expect to find many, and most of us don’t often track advances in the understanding of these disorders at the molecular level. At the time I was working in that emergency room, Marfan syndrome was viewed as a connective tissue disorder, with a structurally weak aortic wall and variable other morphologic features. When the molecular defect was defined as fibrillin-1 deficiency, I didn’t think much more than that the weak link of the aorta’s fibrous belt was identified.
But it turns out that fibrillin is not just an aortic girdle; it lowers the concentration of the cytokine transforming growth factor (TGF)-beta in the aorta (and other organs) by promoting its sequestration in the extracellular matrix. Absence of fibrillin enhances TGF-beta activity, and excess TGF-beta can produce Marfan syndrome in young mice. In perhaps the most striking consequence of this line of research, Dietz and colleagues1 have demonstrated that specific antagonism of the angiotensin II type 1 receptor by the drug losartan (Cozaar) also blocks the effects of TGF-beta and thereby prevents the development of murine Marfan syndrome. And in a preliminary study, it slowed aneurysm progression in a small group of children with Marfan syndrome.
This does not imply that the same pathophysiology is at play in all aortic aneurysms. But at a time of new guidelines for screening for abdominal aneurysm, these observations offer a novel paradigm for developing drug therapies as an alternative to the mad rush for the vascular operating suite.
- Brooke BS, Habashi JP, Judge DP, Patel N, Loeys B, Dietz HC. Angiotensin II blockade and aortic-root dilation in Marfan’s syndrome. N Engl J Med 2008; 358:2787–2795.