It’s simple. It’s obvious. None of us would like to be known as someone who orders diagnostic tests in a careless or stupid manner. And none of us order that way—just ask us! Yet, when critically evaluated, someone is ordering slews of unnecessary or inappropriate tests. In my own hospital we saved about $100,000 last year by putting “hard stops” on duplicated blood tests that were ordered too frequently to be of clinical value. This is an obvious and easily enacted intervention, but it is just the tip of the testing iceberg.
As technology advances, our testing practices must change. For example, the ventilation-perfusion nuclear scan is now seldom the test of choice when evaluating a patient with possible pulmonary embolism. However, it still has a role for experienced clinicians evaluating selected patients who have unexplained dyspnea or pulmonary hypertension. There is value in knowing the old as well as the new testing modalities.
We like to think we practice evidence-based diagnostic testing. We talk about the gold-standard value of randomized controlled trials and using published data on pretest and posttest diagnostic likelihoods to assist us in choosing the appropriate test. However, the individual patient in front of us may have comorbidities that would have excluded her from the randomized trials. Who knows if my diagnostic acumen in determining the pretest likelihood of disease is better or worse than that of the clinicians who published the paper on the utility of that test? Sometimes choosing a test is not so simple.
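A hypothetical back-of-the-envelope calculation (the numbers are illustrative only, not drawn from any particular study) shows how much that pretest estimate matters. The odds form of Bayes' theorem says that posttest odds = pretest odds × likelihood ratio. Suppose a test has a positive likelihood ratio of 10. If I judge the pretest probability of disease to be 5%, the pretest odds are 0.05/0.95, or about 0.05; a positive result raises the odds to about 0.5, a posttest probability of roughly 35%. If a colleague pegs the same patient's pretest probability at 30%, the identical positive result yields a posttest probability of about 81%. The test has not changed; only the clinician's starting estimate has.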
Much of my clinical decision-making occurs in a gray zone of uncertainty. Rarely will a single test provide an indisputable diagnosis. So, I may bristle when someone, often for cost reasons, questions the necessity of a diagnostic test that I have ordered to help me understand a clinical problem in a specific patient.
Nevertheless, as Dr. Patrick Alguire points out in an editorial, the frequent use of sophisticated and expensive testing in the United States has not resulted in better clinical outcomes. And as Drs. Alraies and Buitrago et al discuss in letters to the editor, even relatively simple and minimally invasive tests can result in dire, unexpected outcomes. The choice of test matters to individual patients and to the health care system as a whole.
I do not minimize the financial impact of inappropriate testing, but in the clinic I am a doctor, not a businessman. I am far more swayed by clinical arguments than by financial ones when making decisions for the patient on the examining table in front of me. Despite the general examples I provided above as to why regulated, cookbook approaches to test ordering may lead to suboptimal care and to physician and patient dissatisfaction (albeit while decreasing costs), the fact remains that ordering certain tests in certain circumstances simply doesn’t make sense. Yet many questionable pairings of test and scenario are ingrained in common practice. Some we learned during our training but they have since become less useful in light of new knowledge, some we may have adopted because of anecdotal experience, and some are “demanded” by our patients. It is these that we hope to help expunge from routine clinical care.
In this issue of the Journal we are initiating a new series within our 1-Minute Consults, called Smart Testing. We are joining the efforts of the American College of Physicians (ACP) in educating physicians about reasons to avoid ordering frequently misused tests: tests that may add more harm, cost, or both than clinical utility to the care of our patients. The ACP also has an educational initiative called “High Value Care” that can be accessed (at no cost) at http://hvc.acponline.org/index.html. We at the Journal are very pleased to be working with physicians at the ACP to offer you this peer-reviewed series of patient vignettes that will focus, in an evidence-based and common-sense way, on the clinical value of selected tests in specific scenarios. Next month we will also be presenting a commentary on the role that “defensive medicine” plays in test ordering and malpractice case decisions.
The tests and scenarios to be presented are chosen in clinician group discussions. Some of the tests have also been identified by specialty societies as providing limited value to patients. In selecting the topics, we pick common scenarios, realizing that there can often (always?) be some situational nuance that negates the accompanying discussion. We are not expecting to throw light on those nuanced zones of uncertainty, but we do hope to change test-ordering behaviors in situations in which there is a smart—and a not-so-smart—way to pursue a diagnosis.