Reflux redux
Symptoms compatible with gastroesophageal reflux disease (GERD) are incredibly prevalent. The typical ones are common, and the atypical ones are so often attributed to GERD that they too are extremely common. It seems that few patients in my clinic are not taking a proton pump inhibitor (PPI).
Drs. Alzubaidi and Gabbard, in their review of GERD in this issue, note that up to 40% of people experience symptoms of GERD at least once monthly. Since these symptoms can be intermittent, diagnosis poses a problem when the diagnostic algorithm includes a trial of a PPI. It is sometimes unclear whether PPI therapy relieved the symptoms or whether the symptoms abated for other reasons. I suspect that many patients remain on PPI therapy longer than needed (and often longer than initially intended) because of a false sense of improvement and continued need. When patients are diagnosed on clinical grounds, we need to intermittently reassess the continued need for PPI therapy. The authors discuss and place in reasonable perspective a few of the potential complications of chronic PPI use, but not the effects on absorption of iron, calcium, and micronutrients, or PPI-associated gastric polyposis. These can be clinically significant in some patients.
I believe that some atypical symptoms such as cough and hoarseness are too readily attributed to GERD, so that PPI therapy is started, continued, and escalated because of premature diagnostic closure. In patients who never had a firm physiologic diagnosis, the diagnosis should be reassessed at least once with observed withdrawal of PPI therapy; asking the patient to keep a symptom diary may help.
Lack of a significant response to PPI therapy should cast doubt on the diagnosis of GERD and warrant exploration for an alternative cause of the symptoms (eg, eosinophilic esophagitis, bile reflux, sinus disease, dysmotility). The possibility that the patient was not given an optimal trial of a PPI must also be considered: eg, the dose may have been inadequate, the timing of administration may have been suboptimal (not preprandial), or the patient may have been taking over-the-counter NSAIDs.
GERD is so prevalent in the general population that we must train ourselves to consider the possibility that, even if totally relieved by PPI therapy, the symptoms might be associated with aggravating comorbid conditions such as obstructive sleep apnea, Raynaud phenomenon, drugs that can decrease the tone of the lower esophageal sphincter, or even scleroderma.
Finally, in patients who have had a less-than-total response to full-dose PPI therapy and have had other diagnoses excluded, we shouldn’t forget the value of adding appropriately timed histamine 2 receptor antagonist therapy (and asking the patient about use of medications that can exacerbate symptoms).
Even the diseases we deal with every day sometimes warrant a second look.
Perioperative MI: Data, practice, and questions
Except in emergency or specific high-risk surgery, or for extremely fragile high-risk patients, we anticipate a successful outcome from noncardiac surgery. The skills and tools of our anesthesiology colleagues have advanced to the point that severe intraoperative and immediate postoperative complications are rare.
Preoperative risk assessment and perioperative medical management in large medical centers are now largely done by hospital-based physicians with interest and expertise in this subspecialty, and are integrated into the care of the surgical patient. This has likely contributed to improved patient outcomes. Yet postoperative cardiovascular events still cause significant morbidity (although they generally occur in less than 10% of patients).
The entity of perioperative myocardial infarction (MI) has an interesting history. We have recognized for several decades that its presentation often differs from that of the typical MI: perioperative MI is often painless and may manifest as unexplained sinus tachycardia, subtle changes in mental status, or mild dyspnea. These symptoms, had they occurred while the patient was at home, would often be mild enough that the patient would not seek immediate medical attention. Autopsy studies suggested that many of these MIs result from a different pathophysiology than the garden-variety MI: plaque rupture with or without secondary thrombosis may be less common than myocardial injury resulting from an imbalance between cardiac demand and blood flow. Studies initially suggested that postoperative MI occurred many days after surgery. But as tests to diagnose myocyte injury became more sensitive (electrocardiography, creatine kinase, creatine kinase-MB, and now troponin), it was recognized that cardiac injury actually occurred very soon after or even during surgery.
With the advent of highly sensitive and fairly specific troponin assays, it seems that perioperative cardiac injury is extremely common, perhaps occurring in up to 20% of patients (if we include patients at high risk based on traditional criteria). This has led to the newly described entity of “myocardial injury after noncardiac surgery” (MINS). MINS patients, diagnosed by troponin elevations, usually are asymptomatic, and many do not meet criteria for any type of MI. But strikingly, as discussed in this issue of the Journal by Horr et al, simply having a postoperative troponin elevation predicts an increased risk of clinical cardiovascular events and a decreased 30-day survival rate.
Adding postoperative troponin measurement to the usual preoperative screening protocol significantly increases our ability to predict delayed cardiovascular events and mortality. As pointed out by Cohn in his accompanying editorial, the benefit, if any, of screening low-risk patients remains to be defined. But an even more important issue, as commented upon in both papers, is what to do when an elevated troponin is detected in a postoperative patient who is otherwise doing perfectly well. Given our current knowledge of the pathophysiology of postoperative MI and the still overall low mortality, it seems unreasonable to immediately take all of these patients to the catheterization suite. Yet with current knowledge of the prognostic significance of troponin elevation, this can’t be ignored. Should all patients receive immediate high-intensity statin therapy, antiplatelet therapy if safe in the specific perioperative setting, and postdischarge physiologic stress studies, or should we “just” take it as a potential high-impact teaching moment and advise patients of their increased cardiovascular risk and offer our usual heart-healthy admonitions?
The confirmed observation that postoperative troponin elevation predicts morbidity and mortality over the subsequent 30 days, and perhaps even longer, has triggered the start of several interventional trials. The results of these will, hopefully, help us to further improve perioperative outcomes.
Remembering that old dogs can still do tricks
More and more we are realizing that we need trials that use hard clinical end points to inform our clinical practice. Several things we used to do based on observational studies have fallen from grace after being evaluated in interventional trials. And faced with the US Food and Drug Administration’s mandate to demonstrate clinical impact, pharmaceutical companies can rarely count on using even well-accepted biomarkers instead of clinical outcomes when trying to bring new drugs to market.
This atmosphere often makes us a bit uncomfortable when prescribing older drugs that have passed the test of time and collective anecdotal experience but not rigorous clinical testing. In some cases this is good, and robust evaluation provides greater confidence in our choice of therapy: witness the demise of digoxin for heart failure.
Many older drugs have never been compared with newer drugs in well-designed trials using hard clinical outcomes, and likely never will be, owing to cost, marketing, and logistics. But sometimes these trials are done, and the results are surprising. For instance, methotrexate in appropriate doses may actually be comparable to newer and far more expensive tumor necrosis factor inhibitors when used to treat rheumatoid arthritis.
Should we be willing to sometimes accept data on surrogate markers (eg, low-density lipoprotein cholesterol levels, blood pressure, hemoglobin A1c) or even extensive clinical experience in the absence of hard outcome data when using older, tried-and-true drugs? Markers can mislead: consider the higher number of deaths recorded in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial in the group receiving more aggressive control of their glucose levels.
So we should not be totally sanguine when using older drugs instead of newer ones. But some drugs may have slipped out of our mental formularies yet still have real value in niche or even common settings. Methyldopa remains an effective antihypertensive drug and may be especially useful in peripartum patients. Yet relatively few young physicians know the drug.
And so it may be with chlorthalidone. In this issue of the Journal, Cooney et al remind us not only that this drug is still around, but that it has proven efficacy and, compared with its more popular cousin hydrochlorothiazide, favorable pharmacokinetic properties such as a longer duration of action. Not to mention that it was a comparator drug in the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT).
In our current cost-saving environment, we should remember that some old dogs can still do good tricks.
The cohabitation of art and genomic science
The art of medicine includes picking the right drug for the right patient, especially when we can choose between different classes of efficacious therapies. But, in view of our growing understanding of the human genome, can science replace art?
That question is part of the promise of pharmacogenetics, the study of how interindividual genetic differences influence a patient’s response to a specific drug. A patient’s genome dictates the expression of the specific enzymes that metabolize a drug, with varying efficiency: variant alleles may encode slightly different proteins with different enzymatic activity, ie, different substrate affinities for the drug, resulting in more or less efficient metabolism. Genomic differences may also dictate whether a specific biochemical pathway is dominant in generating a specific pathophysiologic response, in which case drugs that affect that pathway may be strikingly effective. This may partly explain the varied responses to different antihypertensive drugs.
Another less well-understood example of pharmacogenetics is the link between specific HLA haplotypes and a dramatic increase in allergic reactions to specific medications, such as the link between HLA-B*57:01 and abacavir hypersensitivity.
In this issue of the Journal, DiPiero et al discuss thiopurine methyltransferase (TPMT), an enzyme responsible for the degradation of azathioprine, and how knowing the genetically determined relative activity of this enzyme should influence our initial dosing of this and related drugs. Patients with certain variant alleles of TPMT degrade azathioprine more slowly, and these patients are at higher risk of myelosuppressive toxicity from the drug when it is given at the full weight-based dose. The TPMT test is expensive but not prohibitively so, and it would seem that genomic testing is a reasonable clinical and cost-effective option.
As in the abacavir scenario noted above, genomic-based dosing of azathioprine makes scientific sense and offers proof of principle for the validity of pharmacogenomics. But is it truly a clinical game-changer?
The answer depends in part on how the prescribing physician doses the drug, which in turn depends on what disease is being treated, how quickly the drug needs to reach full dose, and whether there are equally effective alternatives. Published recommendations state that if TPMT activity is normal, we can start at the usual maintenance dose of 1.5 to 2 mg/kg/day (or occasionally more). But if the patient is heterozygous, carrying one wild-type and one variant allele, and thus a slower metabolizer of the drug, then the initial dose “should” be reduced to 25 to 50 mg/day, with close observation of the white blood cell count as the dose is slowly increased to the target. The very rare patient who is homozygous for a variant allele should not be given the drug.
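This genotype-stratified starting-dose logic amounts to a simple decision rule. The sketch below, in Python, is purely illustrative: the function name and genotype labels are hypothetical, and the dose figures are just those quoted in the recommendations above, not an endorsement of any particular protocol.

```python
# Illustrative sketch of the genotype-guided azathioprine starting-dose
# rules described in the text. Names and labels are hypothetical; the
# dose figures come from the recommendations quoted above.

def initial_azathioprine_dose(tpmt_genotype: str, weight_kg: float) -> str:
    """Suggest a starting dose based on TPMT genotype.

    tpmt_genotype: "wild/wild" (normal activity), "wild/variant"
    (heterozygous, slower metabolizer), or "variant/variant".
    """
    if tpmt_genotype == "wild/wild":
        # Normal TPMT activity: usual weight-based maintenance dose.
        low, high = 1.5 * weight_kg, 2.0 * weight_kg
        return f"start at the usual maintenance dose, about {low:.0f}-{high:.0f} mg/day"
    if tpmt_genotype == "wild/variant":
        # Heterozygous (slower metabolizer): start low and titrate slowly,
        # with close observation of the white blood cell count.
        return "start at 25-50 mg/day and titrate slowly, watching the WBC count"
    if tpmt_genotype == "variant/variant":
        # Homozygous variant (very rare): avoid the drug entirely.
        return "do not give azathioprine"
    raise ValueError(f"unrecognized genotype: {tpmt_genotype!r}")


# Example: a 70-kg heterozygous patient.
print(initial_azathioprine_dose("wild/variant", 70.0))
```

Notably, as discussed next, this rule's cautious branch is close to what many physicians do anyway, regardless of genotype.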
My usual practice has been to start patients on 50 mg or less daily and slowly titrate up, asking them how they are tolerating the drug and watching the white blood cell count—notably, the same approach I would take if I had done genotyping before starting the drug and had found the patient to be heterozygous for a variant TPMT allele.
Interestingly, one pragmatic clinical trial tested whether genotyping patients before starting azathioprine—with subsequent suggested dosing of the drug based on the genotype as above—was safer and cheaper than letting physicians dose as they chose.1 It turned out that physicians participating in this study still dosed their patients conservatively. Even knowing that they might be able to give full doses from the start in patients with normal TPMT activity, many chose not to. I assume that many of those physicians felt as I do that there was no urgency in reaching the presumed-to-be-effective full weight-based therapeutic dose. (We don’t have a good clinical marker of azathioprine’s efficacy.) At 4 months, the maintenance dose was about the same in all groups.
We have robust evidence to support the role of pharmacogenetics in informing the dosing of several medications, more than just the ones I have mentioned here. And in the right settings, we should use pharmacogenetic testing to limit toxicity and perhaps enhance efficacy in our drug selection. As the field moves rapidly forward, we will have many opportunities to improve clinical care by using our patients’ genomic information.
But like it or bemoan it, even when we have science in the house, the art of medicine still plays a role in our clinical decisions.
1. Thompson AJ, Newman WG, Elliott RA, Roberts SA, Tricker K, Payne K. The cost-effectiveness of a pharmacogenetic test: a trial-based evaluation of TPMT genotyping for azathioprine. Value Health 2014; 17:22–33.
The vaccine safety belt
I’m not sure if I recall seeing kids in long lines outside of school waiting to receive the polio vaccine, or if these are just memories of old film clips. I’ve never seen a patient with an active polio infection, and I’ve seen only a few with postpolio syndromes. I’ve never seen a patient with tetanus, smallpox, diphtheria, or typical measles. I’ve seen three cases of pertussis that I know of, and the long delay in diagnosing the first one (my wife) was clearly because, at the time, clinicians caring for adults were not attuned to a disease that had virtually disappeared from the American landscape. Once I was sensitized to its presence, it was far easier to make the diagnosis in the second case I encountered (myself). The list of infectious diseases that have almost vanished in the last 75 years with the development of specific vaccines is not long, but it is striking. We can easily lose sight of that when focusing on the less-than-perfect effectiveness of the pneumococcal and annual influenza vaccines.
My message in recounting these observations is that, growing up in the traditional Western medical establishment, I find it hard from a historical perspective to view vaccines as anything but a positive contribution to our public and personal health. And yet a vocal minority, generally outside the medical establishment, maintains that vaccination is a potentially dangerous practice to be avoided whenever possible. Their biological arguments are tenuous and rarely supported by controlled clinical outcomes or observational data. The elimination of trace amounts of mercury-containing preservatives from some vaccines has done little to dampen their concerns. The arguments against routine vaccination and mandated vaccination of schoolchildren to maintain herd immunity have acquired a libertarian tone. While I may share the philosophy behind their perspective—for example, I wear my seat belt while driving, but I don’t think I should be fined if I don’t—my not wearing a seat belt does not increase the chance that those who encounter me on a plane, in a movie theater, or at an amusement park will die when subsequently driving their car. The choice to forgo vaccination, by contrast, is not so self-contained.
In all likelihood, I will retire from medicine before I ever see a case of typical diphtheria. I don’t think that is an accident of nature or the effect of better hygiene. I’m hoping that the generation of physicians to follow will see far less cervical cancer, and that physicians in Asia will see far less hepatitis B-associated hepatocellular carcinoma as a result of effective vaccination against the viruses associated with these cancers.
As Drs. Faria Farhat and Glenn Wortmann and Dr. Atul Khasnis discuss in their papers in this issue of the Journal, we have more to learn about how to most effectively use vaccines in special populations. It is clearly not a one-strategy-fits-all world. The decision to vaccinate these patients is usually less about public health than about the health of the individual patient.
The real-world effectiveness of many vaccines is less than it appeared to be in controlled clinical trials. Unfortunately, the patients who most need protection against infections, the immunosuppressed, have a blunted response to many vaccines and perhaps should not receive live vaccines. But we have too little evidence on how and when to optimally vaccinate these patients. It still feels a bit like a casino, not a clinic, when I discuss with a modestly immunosuppressed patient whether he or she should be vaccinated with a live vaccine to reduce the risk of shingles and postherpetic neuralgia.
If we have the opportunity, vaccinating before starting immunosuppressive drugs (or before splenectomy) makes sense. But often that is not an option. We are frequently faced with the need to extrapolate efficacy and safety experiences from clinical trials of vaccines that are conducted with healthier patients and with relatively short follow-up. The two vaccination papers in this issue of the Journal provide us with useful information about immunologic and other issues involved when making the decision to vaccinate special patient populations.
Buckle up wisely.
Pericarditis as a window into the mind of the internist
In this issue of the Journal, Alraies et al comment on how extensively we should look for the cause of an initial episode of pericarditis.
The pericardium, like the pleura, peritoneum, and synovium, can be affected in a number of inflammatory and infectious disorders. The mechanisms by which these tissues are affected are not fully understood, nor is the process by which different diseases seem to selectively target the joints or the pericardium. Why are the joints only minimally inflamed in systemic lupus erythematosus (SLE), while lupus pericarditis, in the uncommon occurrence of significant effusion, is often quite inflammatory, with a neutrophil predominance in the fluid? Why is pericardial involvement so often demonstrable by imaging in patients with SLE and rheumatoid arthritis, yet an acute pericarditis presentation with audible pericardial rubs is so seldom recognized?
Although nuances like these are not well understood, in medical school we all learned the association between connective tissue disease and pericarditis. The importance of recalling these associations is repeatedly reinforced during residency and in disease-focused review articles. During my training, woe to the resident who presented at rounds a patient admitted with unexplained pericarditis who had not been evaluated for SLE with at least an antinuclear antibody (ANA) test, even if there were no other features to suggest the disease. Ordering the test reflected that we knew that, occasionally, pericardial disease is the sole presenting manifestation of lupus.
Such is the plight of the internist. Pericarditis can be the initial manifestation of an autoimmune or inflammatory disease, but this is more often relevant on certification examinations and in medical education than in everyday practice. We are now charged with ordering tests in a more cost-effective manner than in the past. This means that we should not order tests simply because of an epidemiologic association, but only when the result is likely to influence decisions about testing or treatment. But that creates the intellectual dissonance of knowing of a potential relationship (which someone, someday, may challenge us about) but not looking for it. There is an inherent conflict between satisfying intellectual curiosity and the need to be thorough on the one hand, and containing costs and avoiding the potential harms of overtesting on the other.
A partial solution is to try to define the immediate risk of not recognizing a life- or organ-threatening disease process that can be suggested by a positive nonspecific test (eg, ANA), and to refine the pretest likelihood of specific diagnoses by obtaining an accurate and complete history and performing a focused physical examination. For example, if we suspect that SLE may be the cause of an initial episode of symptomatic pericarditis, our initial evaluation should focus on the patient’s clinical picture. Is there bitemporal hair-thinning? New-onset Raynaud symptoms? Mild generalized adenopathy or lymphopenia? A borderline-low platelet count, or any proteinuria or microhematuria (which should warrant a prompt examination of a fresh urine sediment sample by a physician at the point of care to look for cellular casts indicative of glomerulonephritis)?
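To make the idea of refining pretest likelihood concrete, consider a back-of-the-envelope application of Bayes’ theorem. This is a sketch using illustrative numbers, not the published operating characteristics of the ANA test: assume the ANA is positive in about 95% of patients with SLE and in about 15% of people without it.

```latex
% Posttest probability by Bayes' theorem, with assumed values:
% sensitivity = 0.95, false-positive rate = 0.15.
% Low pretest probability (2%): pericarditis with no other suggestive features.
P(\mathrm{SLE} \mid \mathrm{ANA}^{+})
  = \frac{0.95 \times 0.02}{0.95 \times 0.02 + 0.15 \times 0.98} \approx 0.11
% Higher pretest probability (30%): pericarditis plus, say, new Raynaud
% symptoms and lymphopenia elicited by history and examination.
P(\mathrm{SLE} \mid \mathrm{ANA}^{+})
  = \frac{0.95 \times 0.30}{0.95 \times 0.30 + 0.15 \times 0.70} \approx 0.73
```

Under these assumptions, a positive ANA ordered reflexively still leaves roughly a 9-in-10 chance that the patient does not have lupus, whereas the same result obtained after a history and examination that raise suspicion is far more informative. The test has not changed; the pretest probability has.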
As internists, we should try to fulfill our need to be thorough and compulsive by using our honed skills as observers and historians—taking a careful history from the patient and family, performing a focused physical examination, and appropriately using disease-defining or staging tests before ordering less specific serologic or other tests. Practicing medicine in a conscientious and compulsive manner does not mean that every diagnostic possibility must be tested for at initial presentation.
Reading how experienced clinicians approach the problem of pericarditis in a specialized clinic provides a useful prompt to self-assess how we approach analogous clinical scenarios.
The art and science of clinical medicine and editorial policy
The article by Dr. Alison Colantino et al in this issue on when to resume anticoagulation after a hemorrhagic event is relevant to the discussion of clinical decision-making that I started here last month. My thoughts then were prompted by a commentary by Dr. Vinay Prasad on incorporating appropriate study outcomes in clinical decision-making (Cleve Clin J Med 2015; 82:146–150).
In the clinic or hospital, we make many decisions without being able to cite specific applicable clinical studies. I base some decisions on my overall impression from the literature (including formal trials), some on general recall of a specific study (which I hopefully either find time to review afterwards, or ask one of our trainees to read and discuss with our team the next day), and others on my knowledge of clinical guidelines or clearly accepted practice. Most clinical decisions are made without any directly applicable data from available clinical studies. This is the “art” of medicine.
Should this art make its way into our clinical journals, and if so, how extensively, and how should it be framed? It is relatively easy when we are talking about the science of clinical practice. Journals receive the (hopefully complete) data, get peer reviews to improve the paper, and publish it with the authors’ opinions presented in the discussion section. Then, dialogue ensues in the published literature, in educational lectures, and in blogs posted on the Internet. But where does the art go? Does it belong in our traditionally conservative textbooks or newer go-to online resources, which emphasize the need for authors to provide updated specific references for their treatment recommendations? We believe that, after our best efforts at peer review, this art is appropriate to publish in the CCJM, because we hope it can provide additional perspective on how we deliver care to our patients.
In the arena of new therapies, regulatory approval requires hard data documenting efficacy and safety. And that often leaves me without approved or sometimes even “proven effective” therapies to use when treating patients with relatively uncommon conditions, such as refractory uveitis with threatened visual loss or idiopathic aortitis. Yet I still need to treat the patient.
Another aspect of the art of medicine relates to how best to use therapies that have been approved. We have had antibiotics for many decades, but data are still being generated on how long to treat specific infections, and relatively few scenarios have been studied. Huge media coverage and (mostly) appropriate hype were generated over the need to treat patients with postmenopausal osteoporosis as diagnosed by dual-energy x-ray absorptiometry. But even after evidence emerged regarding atypical femoral fractures in patients receiving long-term bisphosphonate therapy, the question of how long treatment should continue remains more art than science.
The field of anticoagulation has seen many recent advances. We have new heparins, new target-specific oral anticoagulants, and a lot of new science on the natural history of some thrombotic disorders and the efficacy and safety of these new agents. But how long to treat specific thrombotic conditions, which agent to use, how intense the anticoagulation needs to be, when to use bridging therapy, and, as discussed by Dr. Colantino et al, when to resume anticoagulation after a hemorrhagic event mostly remain part of the art of medicine.
I highlight the Colantino paper in the context of both clinical and editorial decision-making because it is an example of experienced clinical authors discussing their solutions to thorny clinical scenarios we often face with inadequate data. While some journals avoid this approach, we embrace the opportunity to provide thoughtful expert opinions to our readers. We push authors from the start of the editorial process and through aggressive peer review to provide evidence to support their practice recommendations when appropriate. But we also encourage them to make recommendations and describe their own decision-making process in situations that may not be fully described in the literature.
Most of our readers do not have ready access to consultants who have had years of experience within multidisciplinary teams at referral institutions regularly managing patients with permutations of these complex clinical problems. Though generic consultation advice must be evaluated within the context of the specific patient, we hope that by framing the clinical issues with relevant clinical science the opinions of experienced authors will be of use in guiding your (and my) approach to similar clinical scenarios.
If you think we are not striking the right balance between the science and the art of medical practice, please let me know.
Outcome measures need context
Dr. Vinay Prasad, in his commentary in this issue of CCJM, argues that, to best inform clinical decision-making, interventional and observational studies should measure multiple outcomes whenever possible, including all-cause mortality. He cites examples, such as calcium supplementation for bone health and aspirin for primary cardiovascular prevention, where favorable effects on focused clinical outcomes were not paralleled by favorable effects on overall mortality and morbidity. The study was a success, but the patient died.
Reading his commentary got me thinking about the many ways that the results of interventional studies and population data increasingly affect how we practice and teach medicine. Measuring an outcome in the population of interest (study volunteers, patient panels, trainees) is all the rage and is almost always more useful than only tracking interim metrics. True outcome measures are clearly useful when comparing groups and, hopefully, help assess the core reason the study was done.
Yet at the same time that group outcome measures are emphasized for many useful reasons, personalized medicine has a growing appeal: don’t let the individual get lost in the group, and pay attention to the outliers as well as the mean.
Positive results from a well-designed, prospective, controlled trial provide confidence that a drug or procedure has efficacy compared with placebo or a known effective comparator. But before recommending a therapy to a specific patient, we need to carefully evaluate whether the likely benefit in an individual patient is worth the clinical and financial cost. The information to make that evaluation doesn’t come easily from simply looking at a P value in a clinical study. Not only do we need to look at the size of the effect of an efficacious treatment and ask whether our specific patient is comparable to the study participants, but, as Dr. Prasad emphasizes, we must also look closely at the actual outcome measures of the study to see if they match our patient’s short- and long-term goals.
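A simple worked example shows why effect size matters beyond the P value (the trial numbers here are hypothetical, invented purely for illustration): suppose a preventive therapy lowers the 5-year event rate from 4% to 3%, a statistically significant result in a large trial.

```latex
% Relative vs absolute framing of the same (assumed) trial result:
\text{relative risk reduction} = \frac{0.04 - 0.03}{0.04} = 25\%
\text{absolute risk reduction} = 0.04 - 0.03 = 0.01
\text{number needed to treat} = \frac{1}{0.01} = 100
```

A “25% risk reduction” sounds compelling, but under these assumptions 100 patients must be treated for 5 years for 1 to benefit, while the other 99 are exposed to the costs and side effects. Whether that trade is worthwhile for the patient in the examination room is a separate question from whether the trial was positive.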
How significant is a statistically significant finding if the measured outcome is not the one the patient cares the most about? For example, a recent extremely well-done study that led to US Food and Drug Administration (FDA) approval of branded colchicine for acute gout used the efficacy measure of 50% reduction in pain at 24 hours.1 But what our patients really want is attack resolution (which usually requires medication in addition to what was used in the trial, increasing the risk of side effects). Proof of concept (a rational dose of colchicine has benefit) was very well demonstrated; that this dosing regimen should be standard of care, I think, remains unsupported.
We must also try to assess the long-term relevance (clinical outcome) of results based initially on surrogate markers. For example, not all drugs that increase bone density reduce the long-term fracture rate, and not all drugs that lower the blood glucose level reduce cardiovascular complications of diabetes. This has seemingly become a linchpin concept in the FDA’s approach to drug approval, with attendant increases in the cost and time to get drug approval.
We teach that the tools of evidence-based medicine should be routinely and appropriately employed in clinical practice. The premises of evidence-based medicine are deeply rooted in clinical studies. But our patients’ genetic background, individual preferences, and specific concerns regarding management of their disease and the side effects of medications should also be seriously discussed. We can then jointly define individualized outcome goals in the examination room. These may not exactly match the outcomes chosen by clinical investigators in designing their studies, and the plan may not match the policy of an insurance plan or a “pay-for-performance” metric. I hope that the opportunity for reconciliation of these differences will always be available.
The increasing demand for physicians and health systems to meet specific outcome and performance measures brings up the same concerns that arise when applying the results of a clinical study to a specific patient: will striving to match a group-based outcome be beneficial to the patient in front of us? My major goal as a physician is to care for the individual patient. My patient may not exactly match the population studied to prove that an intervention worked (or didn’t), so the data from that study may not fully apply. In the same way, care for all of our patients with the same diagnosis may not fit into the same performance rubric. The same attention that goes into determining appropriately relevant outcome measures for clinical studies needs to go into setting the performance outcome metrics by which physicians and health care systems are measured. They should be patient-centered and, to maintain face validity, somewhat flexible. On any given night, what keeps me awake is not population-based outcomes, but concern over the outcome of the individual patients I saw in clinic that day.
- Terkeltaub RA, Furst DE, Bennett K, Kook KA, Crockett RS, Davis MW. High versus low dosing of oral colchicine for early acute gout flare: twenty-four-hour outcome of the first multicenter, randomized, double-blind, placebo-controlled, parallel-group, dose-comparison colchicine study. Arthritis Rheum 2010; 62:1060–1068.
Diagnostic certainty and the eosinophil
This issue of the Journal contains an article by Dr. David A. Katzka, titled “The ‘skinny’ on eosinophilic esophagitis.” Reading it, I was struck by two messages, one clinical and one biological.
The clinical message relates to the psychology of diagnosis, or as Dr. Jerome Groopman discussed in his book How Doctors Think, misdiagnosis. In many patients, eosinophilic esophagitis, especially early in its course, can mimic gastroesophageal reflux disease (GERD), causing dysphagia and discomfort with eating that may be relieved at least in part with a proton pump inhibitor. When evaluating a patient who relates a history compatible with a common condition, we instinctively tend to embrace the diagnosis of that common syndrome, in this case GERD, rather than initially explore in depth the possibility of less-common mimics. Once the disease has progressed, with the patient experiencing frequent postprandial emesis or needing to dramatically limit the size of meals despite taking a full dose of a proton pump inhibitor, we will hopefully revisit and reassess our initial diagnosis, often with endoscopy and biopsy. But that may not always occur promptly, because we may have committed (per Groopman) an “anchoring error,” seizing on an initial symptom or finding, allowing it to cloud our clinical judgment, reaching “premature closure,” and not keeping our minds open to alternative diagnoses such as eosinophilic esophagitis. I wonder how many of the younger patients I have diagnosed with GERD who had histories of “food intolerances” actually had eosinophilic esophagitis.
The biological message is that the eosinophil is a fascinating and generally misunderstood cell, not just a marker and mediator of allergy. As an apparent defender against the macro-invaders—worms and other parasites—it carries an arsenal of defensive weapons. But eosinophil-dominant inflammatory reactions started by various molecular triggers and perpetuated by interleukin 5 and other promoters of eosinophil proliferation and chemotaxis have a common histopathologic footprint—fibrosis.
Long-standing significant asthma is characterized as much by airway remodeling and fibrosis as it is by bronchospasm. A myocardial hallmark of hypereosinophilic syndrome is fibrosis. Eosinophilic pneumonia can be followed by local scarring. Eosinophils have been implicated in the pathogenesis of primary biliary cirrhosis and the granulomatous cirrhosis of schistosomiasis. And as Dr. Katzka reminds us, the confluence of food hypersensitivity, gastric acid, and the products of eosinophil activation (likely including transforming growth factor beta) in the esophageal wall can result in a marked fibrotic reaction with dysmotility. It is unclear whether this is a dysregulated attempt at healing with resultant maladaptive “scar” formation, or perhaps a misdirected inflammatory response, with the goal of walling off a perceived invader (an allergen is not a worm).
There are probably many other mimic diseases that we are not recognizing often enough. And tissue eosinophils may portend detrimental fibrotic remodeling.
A new year and a new face for www.ccjm.org
Bob Dylan’s song “The Times They Are a-Changin’” was released in January 1964. As with many things Dylan, the song’s true intent is a bit unclear, but it remains one of the most invoked lyrical symbols of change 51 years later. In 2015, the Journal, planning to “heed the call,” is changing its online visage. I hope that our intent will not be viewed as unclear.
Our mission is unchanged: to provide our readers with free access to credible, relevant, readable information, and the opportunity to earn free CME credit. So why change the website? Innovations in digital publishing, the ability to offer a broader landscape of medical information—and the chance to more effectively solicit advertising to pay for it all—prompted us to collaborate with another publisher, Frontline Medical Communications.
Frontline describes itself as health care’s largest medical communications company and as a leader in digital, print, and live events. You likely have encountered their products, which include Internal Medicine News, Cardiology News, and Clinical Endocrinology News, and CME courses such as Perspectives in Rheumatic Diseases, which I codirect. Our collaboration will allow us to offer you links to new and, we hope, interesting material. For example, our online readers will have access to MD-IQ, a popular interactive self-test, as well as brief reports and timely commentaries from specialty scientific meetings.
But even though www.ccjm.org has a new look, everything on it remains open to all and free of charge. You will still have easy access to other educational and clinical information offered by Cleveland Clinic, including information about Clinic authors. At your first visit to our revamped site you will be asked to register, but your subsequent visits will be unencumbered except for a request to sign in using your e-mail address if you log in from a different device. The e-mail address is used for identification purposes only, as site sponsors want to know the (depersonalized) demographics of our readership. You will receive occasional e-mails with links to clinical content that may interest you. If you do not wish to receive these e-mails, just follow the instructions to opt out of them. Our goal is to be unobtrusive.
Our free CME process is the same. Each CME article includes a link to the Cleveland Clinic Center for Continuing Education site with instructions on how to complete the activity. Plus, the CME pull-down menu at the top of our home page will provide easy access to all currently active journal CME offerings. We hope the transition glitches will be few and the benefits many. And the option remains for you to read, download, and print our articles in PDF format, just as you always have.
As we start the new year, we at the Journal wish you, our readers, a happy, healthy, peaceful, and educational 2015.