But how many people died? Health outcomes in perspective

Before we dispense advice about staying healthy, we should know the effect of whatever we are recommending—be it diet, supplements, chemoprevention, or screening—on all meaningful outcomes, including overall mortality, quality of life, harms, inconveniences, and cost. Even though looking at all these outcomes may seem self-evidently wise, many research studies do not do it, and health care providers do not do it enough.

How would looking at all the outcomes change our opinion of health practices?

COMPARING GRAPEFRUIT AND PEACHES

A 2013 study linked eating berries with lower rates of myocardial infarction in women,1 another found that people who ate some fruits (blackberries and grapefruit) but not others (peaches and oranges) had a lower rate of incident diabetes,2 and a third linked a healthy diet to a lower incidence of pancreatic cancer.3 However, none of these studies examined all-cause mortality rates. A fourth study found that drinking green tea was associated with a lower risk of death from pneumonia in Japanese women, but not men.4

For the sake of argument, let us put aside concern about whether observational studies can reliably inform recommendations for clinical practice5 and concede that they can. The point is that studies such as those above look at some but not all meaningful outcomes, undermining the utility of their findings. If healthy people conclude that they should eat grapefruit instead of peaches, they may miss out on benefits of peaches that the study did not examine. Eating a healthy diet remains prudent, but the study linking it to a lower rate of pancreatic cancer is no tipping point, as pancreatic cancer is just one way to die. And advocating green tea to Japanese women but not men, to avoid pneumonia, would be a questionable public health strategy. Pneumonia is the sixth leading cause of death and accounts for 3.9% of disability-adjusted life-years lost,6 but what about all the other causes, including the five that outrank it, which together account for the remaining 96.1%?

These and many other studies of dietary habits of people who are well fail to consider end points that healthy people care about. Suppose that drinking more coffee would prevent all deaths from pancreatic cancer but would modestly increase cardiovascular deaths—say, by 5%. Because cardiovascular disease kills far more people than pancreatic cancer does, even that modest relative increase would swamp the deaths prevented; on a population level, recommending more coffee would be wrong, because it would result in far more deaths. Suppose that drinking tea decreased deaths from pneumonia—we should still not advise patients to drink tea, as we do not know whether tea’s net effect is beneficial.
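
A minimal sketch of that arithmetic follows. The death counts are assumed round numbers, loosely on the order of global annual figures, and are used only to illustrate the competing-risks point; they are not taken from the cited studies.

```python
# Competing-risks arithmetic for the hypothetical coffee recommendation.
# The baseline death counts are assumed round numbers (roughly the order of
# global annual figures), used only for illustration.

pancreatic_cancer_deaths = 300_000     # assumed annual deaths from pancreatic cancer
cardiovascular_deaths = 17_000_000     # assumed annual deaths from cardiovascular disease

deaths_prevented = pancreatic_cancer_deaths      # coffee hypothetically prevents all of them
deaths_added = 0.05 * cardiovascular_deaths      # a 5% relative increase in cardiovascular deaths

net_change = deaths_added - deaths_prevented
print(f"Deaths prevented: {deaths_prevented:,}")
print(f"Deaths added:     {deaths_added:,.0f}")
print(f"Net change:       {net_change:+,.0f} deaths per year")  # positive means more total deaths
```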

Some may argue that these epidemiologic studies are merely hypothesis-generating, but my colleagues and I analyzed all the nonrandomized studies published in several leading medical journals in 1 year and found that 59% made specific practice recommendations.5 Even when authors refrain from making such recommendations, their studies may still be misused in this fashion.

CALCIUM PROTECTS BONES, BUT WHAT ABOUT THE HEART?

Narrow end points are not limited to dietary studies. Calcium supplementation with or without vitamin D has been vigorously promoted for decades7 to treat and prevent osteoporosis in pre- and postmenopausal women, and data confirm that these agents decrease the risk of fracture.8

But bone health is only one end point important to women, and long-term supplementation of a mineral or vitamin with the goal of strengthening bones may have unforeseen adverse effects.

In 2010, calcium supplementation without vitamin D was linked to higher rates of myocardial infarction (with some suggestion of increased rates of all-cause death) in pooled analyses of 15 trials.9 In 2011, a higher risk of cardiovascular events (stroke and myocardial infarction) was found in recipients of calcium with vitamin D in a reanalysis of the Women’s Health Initiative Calcium/Vitamin D Supplementation Study,10 after adjusting for the widespread use of these supplements at baseline, and this was corroborated by a meta-analysis of eight other studies.10 A more recent study found that supplemental calcium was associated with higher cardiovascular mortality in men.11

Although the increase in cardiovascular risk seems to be modest, millions of people take calcium supplements; thus, many people may be harmed. Our exuberance for bone health suggests that, at times, a single outcome can distract.

DOES SCREENING IMPROVE SURVIVAL?

On the whole, the evidence for screening continues to focus only on certain outcomes. With the exception of the National Lung Screening Trial,12 to date, no cancer screening trial has shown an improvement in the overall survival rate.

In fact, a 2013 Cochrane review13 found that mammographic screening failed to lower the rate of death from all cancers, including breast cancer, after 10 years (relative risk [RR] 1.02, 95% confidence interval [CI] 0.95–1.10) and the rate of death from all causes after 13 years (RR 0.99, 95% CI 0.95–1.03). Although screening lowered the breast cancer mortality rate, the authors argued that we should not look at only some outcomes and concluded that “breast cancer mortality was an unreliable outcome” that was biased in favor of screening, mainly because of “differential misclassification of cause of death.”13

Black et al14 found that of 12 major cancer screening trials examining both disease-specific mortality and all-cause mortality, 5 had differences in mortality rates that went in opposite directions (eg, disease-specific mortality improved while all-cause mortality worsened, or vice versa), suggesting paradoxical effects. In another 2 trials, the difference in all-cause mortality exceeded the gain in disease-specific mortality. Thus, in 7 (58%) of the 12 trials, rates of disease-specific mortality and all-cause mortality were inconsistent, casting doubt on the studies’ conclusions.14
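
To see how such inconsistencies can arise, consider a toy calculation. All counts below are invented for illustration and do not come from Black et al or from any screening trial; the point is only that a relative reduction in disease-specific mortality can coexist with unchanged or worse all-cause mortality once deaths attributed to other causes are counted.

```python
# Toy example (all counts invented) showing how disease-specific mortality can
# improve while all-cause mortality does not, once deaths attributed to other
# causes (workup complications, overtreatment, cardiovascular events, suicide)
# are counted.

n_per_arm = 100_000

control  = {"target_cancer": 50, "other_causes": 950}
screened = {"target_cancer": 40, "other_causes": 965}  # 10 fewer cancer deaths, 15 extra other deaths

for arm, deaths in (("control", control), ("screened", screened)):
    disease_specific = deaths["target_cancer"] / n_per_arm
    all_cause = sum(deaths.values()) / n_per_arm
    print(f"{arm:9s} disease-specific {disease_specific:.5f}   all-cause {all_cause:.5f}")

# Disease-specific mortality falls by 20%, yet all-cause mortality is slightly
# higher in the screened arm; the apparent benefit vanishes once every death counts.
```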

For some cancers, data suggest that screening increases deaths from other causes, and these extra deaths are not included in the data on disease-specific mortality. For instance, men who are screened for prostate cancer have higher rates of death from cardiovascular disease and suicide,15 which might negate the tenuous benefits of screening in terms of deaths from prostate cancer.

Studies of screening for diseases other than cancer have also focused on only some outcomes. For example, the United States Preventive Services Task Force supports one-time ultrasonography screening for abdominal aortic aneurysm in men ages 65 to 75 who have ever smoked,16 but the recommendation is based on improvements in the death rate from abdominal aortic aneurysm, not in all-cause mortality.17 This, along with a declining incidence of the disease and changes in how it is treated (endovascular repair is on the rise and open surgical repair is declining), has led some to question whether we should continue to screen for it.18

CHEMOPREVENTION: NO FREE LUNCH

Finasteride

In 2013, an analysis19 that looked at all of the outcomes laid to rest 10 years of debate over the Prostate Cancer Prevention Trial, which had randomized more than 18,000 healthy men age 55 or older with no signs or symptoms of prostate cancer to receive finasteride or placebo, with the end point of prostate cancer incidence. The initial results, published in 2003,20 had found that the drug decreased the rate of incident prostate cancer but paradoxically increased the rate of high-grade (Gleason score ≥ 7) tumors. Whether these results were real or an artifact of ascertainment was debated, as was whether the adverse effects—decreased sexual potency, libido, and ejaculatory function—were worth the 25% reduction in prostate cancer incidence.

Much of the debate ended with the 2013 publication, which showed that regardless of finasteride’s effect on prostate cancer, overall mortality curves at 18-year follow-up were virtually indistinguishable.19 Healthy patients hoping that finasteride will help them live longer or better can be safely told that it does neither.

Statins as primary prevention

As for statin therapy as primary prevention, the best meta-analysis to date (which meticulously excluded secondary-prevention patients after analyzing individual patient-level data) found no improvement in overall mortality despite more than 240,000 patient-years of follow-up.21 Because of this, and because the harms of statin therapy are being increasingly (but still poorly) documented, widespread use of statins has been questioned.22

Proponents point to the ability of statins to reduce end points such as revascularization, stroke, and nonfatal myocardial infarction.23 But the main question facing healthy users is whether improvement in these end points translates to longer life or better quality of life. These questions remain unresolved.

Aspirin as primary prevention

Another example of the importance of considering all the outcomes is the issue of aspirin as primary prevention.

Enthusiasm for aspirin as primary prevention has been recently reinvigorated, with data showing it can prevent colorectal cancers that overexpress cyclooxygenase-2.24 But a meta-analysis of nine randomized trials of aspirin, each with more than 1,000 participants,25 found that although aspirin decreases the rate of nonfatal myocardial infarction (odds ratio [OR] 0.80, 95% CI 0.67–0.96), it does not significantly reduce cancer mortality (OR 0.93, 95% CI 0.84–1.03), and it increases the risk of nontrivial bleeding (OR 1.31, 95% CI 1.14–1.50). Its effect on overall mortality was not statistically significant but was possibly favorable (OR 0.94, 95% CI 0.88–1.00), so this requires further study.
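
One way to weigh such odds ratios against each other is to translate them into absolute events under assumed baseline risks. The sketch below does this for a hypothetical 10,000 people; the baseline risks are placeholders chosen only for illustration, not figures from the meta-analysis, and the trade-off shifts as those baselines shift.

```python
# Translate the quoted odds ratios into approximate absolute event counts per
# 10,000 people. The baseline risks are invented placeholders, not figures from
# the meta-analysis.

def events_per_10k(baseline_risk: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline risk and return expected events per 10,000."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    new_risk = new_odds / (1 + new_odds)
    return 10_000 * new_risk

assumed_baseline_risk = {"nonfatal MI": 0.02, "nontrivial bleeding": 0.01}  # assumed cumulative risks
odds_ratio = {"nonfatal MI": 0.80, "nontrivial bleeding": 1.31}             # from the meta-analysis

for outcome, base in assumed_baseline_risk.items():
    without_aspirin = 10_000 * base
    with_aspirin = events_per_10k(base, odds_ratio[outcome])
    print(f"{outcome}: {without_aspirin:.0f} -> {with_aspirin:.0f} per 10,000 "
          f"({with_aspirin - without_aspirin:+.0f})")
```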

After broad consideration of the risks and benefits of aspirin, the US Food and Drug Administration has issued a statement that aspirin is not recommended as primary prevention.26

WHY STUDIES LOOK ONLY AT SOME OUTCOMES

There are many reasons why researchers favor examining some outcomes over others, but there is no clear justification for ignoring overall mortality. Overall mortality should routinely be examined in large population studies of diet and supplements and in trials of medications27 and cancer screening.

With regard to large observational studies, it is hard to understand why some would not include survival analyses, unless the results would fail to support the study’s hypothesis. In fact, some studies do report overall survival results,28 but others do not. The omission of overall survival in large data-set research should raise concerns of multiple hypothesis testing and selective reporting. Eating peaches as opposed to grapefruit may not be associated with differences in rates of all-cause mortality, myocardial infarction, pneumonia, or lung cancer, but if you look at 20 different variables, chances are that one will have a P value less than .05, and an investigator might be tempted to report it as statistically significant and even meaningful.
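
The arithmetic behind that concern is simple. A minimal sketch, assuming 20 independent tests of true null hypotheses at the .05 level:

```python
alpha = 0.05
n_tests = 20

# Probability that at least one of 20 independent tests of true null hypotheses
# comes out "significant" at the .05 level purely by chance.
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive) = {p_at_least_one:.2f}")  # about 0.64

# Expected number of chance "findings" among the 20 tests.
print(f"Expected false positives = {alpha * n_tests:.1f}")       # 1.0
```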

Empirical studies support this claim. One group found that for 80% of ingredients randomly selected from a cookbook, there existed Medline-indexed articles assessing cancer risk, with 65% of studies finding nominally significant differences in the risk of some type of cancer.29

An excess of significant findings such as this argues that significance-chasing and selective reporting are common in this field and has led to calls for methodologic improvements, including routine falsification testing30 and up-front registration of observational studies.31

WHY ALL OUTCOMES MATTER

Healthy people do not care about some outcomes; they care about all outcomes. Some patients may truly have unique priorities (quality of life vs quantity of life), but others may overestimate their risk of death from some causes and underestimate their risk from others, and practitioners have the obligation to counsel them appropriately.

For instance, a patient who watches a brother pass away from pancreatic adenocarcinoma may wish to do everything possible to avoid that illness. But often, as in this case, fear may surpass risk. The patient’s risk of pancreatic cancer is not demonstrably different from that of the general population: the best data show32 an odds ratio of 1.8, with a confidence interval spanning 1. As such, pancreatic cancer is still not among his five most likely causes of death.

Some patients may care about their bone mineral density or cholesterol level. But again, physicians have an obligation to direct patients’ attention to all of the outcomes that should be of interest to them.

OBJECTIONS TO INCLUDING ALL OUTCOMES

There are important objections to the argument I am presenting here.

First, including all outcomes is expensive. For studies involving retrospective analysis of existing data, looking at overall mortality would not incur additional costs, only an additional analysis. But for prospective trials to have statistical power to detect a difference in overall mortality, larger sample sizes or longer follow-up might be needed—either of which would add to the cost.
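
A rough sample-size sketch, using the standard normal-approximation formula for comparing two proportions, illustrates the scale involved. The mortality rates and effect sizes below are assumptions chosen only to show why overall-mortality end points demand much larger trials than disease-specific end points do.

```python
from scipy.stats import norm

def n_per_arm(p_control: float, p_treated: float,
              alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate per-arm sample size for comparing two proportions (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_control + p_treated) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_treated * (1 - p_treated)) ** 0.5) ** 2
    return int(numerator / (p_control - p_treated) ** 2) + 1

# Assumption: a 20% relative reduction in disease-specific mortality,
# from 1.0% to 0.8% over the trial period.
print(n_per_arm(0.010, 0.008))   # on the order of 47,000 per arm

# The same 0.2-percentage-point absolute benefit judged against an assumed 10%
# all-cause mortality background requires roughly ten times as many participants.
print(n_per_arm(0.100, 0.098))   # on the order of 470,000 per arm
```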

In chemoprevention trials, the rate of incident cancer has been called the gold standard end point.33 To design a thrifty chemoprevention study, investigators can either target a broad population and aim for incident malignancy, or target a restricted, high-risk population and aim for overall mortality. The latter is preferable: although a trial in a high-risk population can inform the decisions of only some people, a trial aimed only at incident malignancy cannot reliably inform anyone’s, as the difficulties in interpreting the Prostate Cancer Prevention Trial and the trials showing reduced breast cancer incidence with tamoxifen, raloxifene, and exemestane demonstrate.

In large cancer screening trials, the cost of powering the trial for overall mortality would be greater, and though a carefully selected, high-risk population can be enrolled, historically this has not been popular. In cancer screening, it is a mistake to contrast the costs of trials powered for overall mortality with those of lesser studies examining disease-specific death. Instead, we must consider the larger societal costs incurred by cancer screening that does not truly improve quantity or quality of life.34

The recent reversal of recommendations for prostate-specific antigen testing by the United States Preventive Services Task Force35 suggests that erroneous recommendations, practiced for decades, can cost society hundreds of billions of dollars but fail to improve meaningful outcomes.

The history of medicine is replete with examples of widely recommended practices and interventions that not only failed to improve the outcomes they claimed to improve, but at times increased the rate of all-cause mortality or carried harms that far outweighed benefits.36,37 The costs of conducting research to fully understand all outcomes are only a fraction of the costs of a practice that is widely disseminated.38

A second objection to my analysis is that there is more to life than survival, and outcomes besides overall mortality are important. This is a self-evident truth. That an intervention reduces overall mortality is neither necessary nor sufficient for its recommendation. Practices may improve survival but worsen quality of life to such a degree that they should not be recommended. Conversely, practices that improve quality of life should be endorsed even if they fail to prolong life.

Thus, overall mortality and quality of life must be considered together, but the end points that are favored currently (disease-specific death, incident cancer, diabetes mellitus, myocardial infarction) do not do a good job of capturing either. Disease-specific death is not meaningful to any patient if deaths from other causes are increased so that overall mortality is unchanged. Furthermore, preventing a diagnosis of cancer or diabetes may offer some psychological comfort, but well-crafted quality-of-life instruments are best suited to capture just how great that benefit is and whether it justifies the cost of such interventions, particularly if the rate of survival is unchanged.

Preventing stroke or myocardial infarction is important, but we should be cautious of interpreting data when decreasing the rate of these morbid events does not lead to commensurate improvements in survival. Alternatively, if morbid events are truly avoided but survival analyses are underpowered, quality-of-life measurements should demonstrate the benefit. But the end points currently used capture neither survival nor quality of life in a meaningful way.

WHEN ADVISING HEALTHY PEOPLE

Looking at all outcomes is important when caring for patients who are sick, but even more so for patients who are well. We need to know an intervention has a net benefit before we recommend it to a healthy person. Overall mortality should be reported routinely in this population, particularly in settings where the cost to do so is trivial (eg, in observational studies). Designers of thrifty trials should try to include people at high risk and power the trial for definitive end points, rather than being broadly inclusive and reaching disputed conclusions. Research and decision-making should look at all outcomes. Healthy people deserve no less.

References
  1. Cassidy A, Mukamal KJ, Liu L, Franz M, Eliassen AH, Rimm EB. High anthocyanin intake is associated with a reduced risk of myocardial infarction in young and middle-aged women. Circulation 2013; 127:188–196.
  2. Muraki I, Imamura F, Manson JE, et al. Fruit consumption and risk of type 2 diabetes: results from three prospective longitudinal cohort studies. BMJ 2013; 347:f5001.
  3. Arem H, Reedy J, Sampson J, et al. The Healthy Eating Index 2005 and risk for pancreatic cancer in the NIH-AARP study. J Natl Cancer Inst 2013; 105:1298–1305.
  4. Watanabe I, Kuriyama S, Kakizaki M, et al. Green tea and death from pneumonia in Japan: the Ohsaki cohort study. Am J Clin Nutr 2009; 90:672–679.
  5. Prasad V, Jorgenson J, Ioannidis JP, Cifu A. Observational studies often make clinical practice recommendations: an empirical evaluation of authors’ attitudes. J Clin Epidemiol 2013; 66:361–366.e4.
  6. Murray CJ, Vos T, Lozano R, et al. Disability-adjusted life years (DALYs) for 291 diseases and injuries in 21 regions, 1990-2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012; 380:2197–2223.
  7. Eastell R. Treatment of postmenopausal osteoporosis. N Engl J Med 1998; 338:736–746.
  8. Looker AC. Interaction of science, consumer practices and policy: calcium and bone health as a case study. J Nutr 2003; 133:1987S–1991S.
  9. Bolland MJ, Avenell A, Baron JA, et al. Effect of calcium supplements on risk of myocardial infarction and cardiovascular events: meta-analysis. BMJ 2010; 341:c3691.
  10. Bolland MJ, Grey A, Avenell A, Gamble GD, Reid IR. Calcium supplements with or without vitamin D and risk of cardiovascular events: reanalysis of the Women’s Health Initiative limited access dataset and meta-analysis. BMJ 2011; 342:d2040.
  11. Xiao Q, Murphy RA, Houston DK, Harris TB, Chow WH, Park Y. Dietary and supplemental calcium intake and cardiovascular disease mortality: the National Institutes of Health-AARP diet and health study. JAMA Intern Med 2013; 173:639–646.
  12. The National Lung Screening Trial Research Team. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med 2011; 365:395–409.
  13. Gøtzsche PC, Jørgensen KJ. Screening for breast cancer with mammography. Cochrane Database Syst Rev 2013 Jun 4;6:CD001877.
  14. Black WC, Haggstrom DA, Welch HG. All-cause mortality in randomized trials of cancer screening. J Natl Cancer Inst 2002; 94:167–173.
  15. Fall K, Fang F, Mucci LA, et al. Immediate risk for cardiovascular events and suicide following a prostate cancer diagnosis: prospective cohort study. PLoS Med 2009; 6:e1000197.
  16. Prasad V. An unmeasured harm of screening. Arch Intern Med 2012; 172:1442–1443.
  17. Guirguis-Blake JM, Beil TL, Senger CA, Whitlock EP. Ultrasonography screening for abdominal aortic aneurysms: a systematic evidence review for the U.S. Preventive Services Task Force. Ann Intern Med 2014; 160:321–329.
  18. Harris R, Sheridan S, Kinsinger L. Time to rethink screening for abdominal aortic aneurysm? Arch Intern Med 2012; 172:1462–1463.
  19. Thompson IM Jr, Goodman PJ, Tangen CM, et al. Long-term survival of participants in the prostate cancer prevention trial. N Engl J Med 2013; 369:603–610.
  20. Thompson IM, Goodman PJ, Tangen CM, et al. The influence of finasteride on the development of prostate cancer. N Engl J Med 2003; 349:216–224.
  21. Ray KK, Seshasai SR, Erqou S, et al. Statins and all-cause mortality in high-risk primary prevention: a meta-analysis of 11 randomized controlled trials involving 65,229 participants. Arch Intern Med 2010; 170:1024–1031.
  22. Redberg RF, Katz MH. Healthy men should not take statins. JAMA 2012; 307:1491–1492.
  23. McEvoy JW, Blumenthal RS, Blaha MJ. Statin therapy for hyperlipidemia. JAMA 2013; 310:1184–1185.
  24. Chan AT, Ogino S, Fuchs CS. Aspirin and the risk of colorectal cancer in relation to the expression of COX-2. N Engl J Med 2007; 356:2131–2142.
  25. Seshasai SR, Wijesuriya S, Sivakumaran R, et al. Effect of aspirin on vascular and nonvascular outcomes: meta-analysis of randomized controlled trials. Arch Intern Med 2012; 172:209–216.
  26. US Food and Drug Administration. Use of aspirin for primary prevention of heart attack and stroke. www.fda.gov/Drugs/ResourcesForYou/Consumers/ucm390574.htm. Accessed February 5, 2015.
  27. Ioannidis JP. Mega-trials for blockbusters. JAMA 2013; 309:239–240.
  28. Dunkler D, Dehghan M, Teo KK, et al; ONTARGET Investigators. Diet and kidney disease in high-risk individuals with type 2 diabetes mellitus. JAMA Intern Med 2013; 173:1682–1692.
  29. Schoenfeld JD, Ioannidis JP. Is everything we eat associated with cancer? A systematic cookbook review. Am J Clin Nutr 2013; 97:127–134.
  30. Prasad V, Jena AB. Prespecified falsification end points: can they validate true observational associations? JAMA 2013; 309:241–242.
  31. Ioannidis JPA. The importance of potential studies that have not existed and registration of observational data sets. JAMA 2012; 308:575–576.
  32. Klein AP, Brune KA, Petersen GM, et al. Prospective risk of pancreatic cancer in familial pancreatic cancer kindreds. Cancer Res 2004; 64:2634–2638.
  33. William WN Jr, Papadimitrakopoulou VA. Optimizing biomarkers and endpoints in oral cancer chemoprevention trials. Cancer Prev Res (Phila) 2013; 6:375–378.
  34. Prasad V. Powering cancer screening for overall mortality. Ecancermedicalscience 2013 Oct 9; 7:ed27.
  35. US Preventive Services Task Force. Final recommendation statement. Prostate cancer: screening. http://www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/prostate-cancer-screening. Accessed February 5, 2015.
  36. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307:37–38.
  37. Prasad V, Vandross A, Toomey C, et al. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc 2013; 88:790–798.
  38. Elshaug AG, Garber AM. How CER could pay for itself—insights from vertebral fracture treatments. N Engl J Med 2011; 364:1390–1393.

Author and Disclosure Information

Vinay Prasad, MD
Medical Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD

Address: Vinay Prasad, MD, National Cancer Institute, National Institutes of Health, 10 Center Drive, 10/12N226, Bethesda, MD 20892

Issue
Cleveland Clinic Journal of Medicine - 82(3)
Publications
Topics
Page Number
146-150
Legacy Keywords
clinical trials, grapefruit, peaches, calcium, fractures, mammography, breast cancer, prostate cancer, statins, aspirin, significance-chasing, Vinay Prasad
Sections
Author and Disclosure Information

Vinay Prasad, MD
Medical Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD

Address: Vinay Prasad, MD, National Cancer Institute, National Institutes of Health, 10 Center Drive, 10/12N226, Bethesda, MD 20892; e-mail: [email protected]

Author and Disclosure Information

Vinay Prasad, MD
Medical Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD

Address: Vinay Prasad, MD, National Cancer Institute, National Institutes of Health, 10 Center Drive, 10/12N226, Bethesda, MD 20892; e-mail: [email protected]

Article PDF
Article PDF
Related Articles

Before we dispense advice about staying healthy, we should know the effect of whatever we are recommending—be it diet, supplements, chemoprevention, or screening—on all meaningful outcomes, including overall mortality, quality of life, harms, inconveniences, and cost. Even though looking at all these outcomes may seem self-evidently wise, many research studies do not do it, and health care providers do not do it enough.

How would looking at all the outcomes change our opinion of health practices?

COMPARING GRAPEFRUIT AND PEACHES

A 2013 study linked eating berries with lower rates of myocardial infarction in women,1 another found that people who ate some fruits (blackberries and grapefruit) but not others (peaches and oranges) had a lower rate of incident diabetes,2 and a third linked a healthy diet to a lower incidence of pancreatic cancer.3 However, none of these studies examined all-cause mortality rates. A fourth study found that drinking green tea was associated with a lower risk of death from pneumonia in Japanese women, but not men.4

For the sake of argument, let us put aside concern about whether observational studies can reliably inform recommendations for clinical practice5 and concede that they can. The point is that studies such as those above look at some but not all meaningful outcomes, undermining the utility of their findings. If healthy people conclude that they should eat grapefruit instead of peaches, they may miss out on benefits of peaches that the study did not examine. Eating a healthy diet remains prudent, but the study linking it to a lower rate of pancreatic cancer is no tipping point, as pancreatic cancer is just one way to die. And advocating green tea to Japanese women but not men, to avoid pneumonia, would be a questionable public health strategy. Pneumonia is the sixth leading cause of death and accounts for 3.9% of disability-adjusted life-years lost,6 but what about the first five causes, which account for 96.1%?

We should know the effect of what we recommend on all meaningful outcomes

These and many other studies of dietary habits of people who are well fail to consider end points that healthy people care about. Suppose that drinking more coffee would prevent all deaths from pancreatic cancer but would modestly increase cardiovascular deaths—say, by 5%. On a population level, recommending more coffee would be wrong, because it would result in far more deaths. Suppose that drinking tea decreased deaths from pneumonia—we should still not advise patients to drink tea, as we do not know whether tea’s net effect is beneficial.

Some may argue that these epidemiologic studies are merely hypothesis-generating, but my colleagues and I analyzed all the nonrandomized studies published in several leading medical journals in 1 year and found that 59% made specific practice recommendations.5 Other studies may be misused in this fashion, even though the authors refrained from doing so.

CALCIUM PROTECTS BONES, BUT WHAT ABOUT THE HEART?

Narrow end points are not limited to dietary studies. Calcium supplementation with or without vitamin D has been vigorously promoted for decades7 to treat and prevent osteoporosis in pre- and postmenopausal women, and data confirm that these agents decrease the risk of fracture.8

But bone health is only one end point important to women, and long-term supplementation of a mineral or vitamin with the goal of strengthening bones may have unforeseen adverse effects.

In 2010, calcium supplementation without vitamin D was linked to higher rates of myocardial infarction (with some suggestion of increased rates of all-cause death) in pooled analyses of 15 trials.9 In 2011, a higher risk of cardiovascular events (stroke and myocardial infarction) was found in recipients of calcium with vitamin D in a reanalysis of the Women’s Health Initiative Calcium/Vitamin D Supplementation Study,10 adjusting for the widespread use of these supplements at baseline, and this was corroborated by a meta-analysis of eight other studies.10 A more recent study confirmed that supplemental calcium increases cardiovascular risk in men.11

Although the increase in cardiovascular risk seems to be modest, millions of people take calcium supplements; thus, many people may be harmed. Our exuberance for bone health suggests that, at times, a single outcome can distract.

DOES SCREENING IMPROVE SURVIVAL?

On the whole, the evidence for screening continues to focus only on certain outcomes. With the exception of the National Lung Cancer Screening Trial,12 to date, no cancer screening trial has shown an improvement in the overall survival rate.

In fact, a 2013 Cochrane review13 found that mammographic screening failed to lower the rate of death from all cancers, including breast cancer, after 10 years (relative risk [RR] 1.02, 95% confidence interval [CI] 0.95–1.10) and the rate of death from all causes after 13 years (RR 0.99, 95% CI 0.95–1.03). Although screening lowered the breast cancer mortality rate, the authors argued that we should not look at only some outcomes and concluded that “breast cancer mortality was an unreliable outcome” that was biased in favor of screening, mainly because of “differential misclassification of cause of death.”13

Significance-chasing and selective reporting are common in observational studies

Black et al14 found that of 12 major cancer screening trials examining both disease-specific mortality and all-cause mortality, 5 had differences in mortality rates that went in opposite directions (eg, the rate of disease-specific mortality improved while overall survival was harmed, or vice-versa), suggesting paradoxical effects. In another 2 studies, differences in all-cause mortality exceeded gains in disease-specific mortality. Thus, in 7 (58%) of the 12 trials, inconsistencies existed between rates of disease-specific mortality and all-cause mortality, prompting doubt about the conclusions of the studies.14

For some cancers, data suggest that screening increases deaths from other causes, and these extra deaths are not included in the data on disease-specific mortality. For instance, men who are screened for prostate cancer have higher rates of death from cardiovascular disease and suicide,15 which might negate the tenuous benefits of screening in terms of deaths from prostate cancer.

Studies of screening for diseases other than cancer have also focused on only some outcomes. For example, the United States Preventive Services Task Force supports screening for abdominal aortic aneurysm once with ultrasonography in men ages 65 to 75 who have ever smoked,16 but the recommendation is based on improvements in the death rate from abdominal aortic aneurysm, not in all-cause mortality.17 This, along with a declining incidence of this disease and changes in how it is treated (with endovascular repair on the rise and open surgical repair declining), has led some to question if we should continue to screen for it.18

CHEMOPREVENTION: NO FREE LUNCH

Finasteride

In 2013, an analysis19 that looked at all of the outcomes laid to rest 10 years of debate over the Prostate Cancer Prevention Trial, which had randomized more than 18,000 healthy men over age 55 with no signs or symptoms of prostate cancer to receive finasteride or placebo, with the end point of prostate cancer incidence. The initial results, published in 2003,20 had found that the drug decreased the rate of incident prostate cancer but paradoxically increased the rate of high-grade (Gleason score ≥ 7) tumors. Whether these results were real or an artifact of ascertainment was debated, as was whether the adverse effects—decreases in sexual potency, libido, and ejaculation—were worth the 25% reduction in prostate cancer incidence.

Much of the debate ended with the 2013 publication, which showed that regardless of finasteride’s effect on prostate cancer, overall mortality curves at 18-year follow-up were absolutely indistinguishable.19 Healthy patients hoping that finasteride will help them live longer or better can be safely told that it does neither.

Statins as primary prevention

As for statin therapy as primary prevention, the best meta-analysis to date (which meticulously excluded secondary-prevention patients after analyzing individual patient-level data) found no improvement in overall mortality despite more than 240,000 patient-years of follow-up.21 Because of this, and because the harms of statin therapy are being increasingly (but still poorly) documented, widespread use of statins has been questioned.22

Proponents point to the ability of statins to reduce end points such as revascularization, stroke, and nonfatal myocardial infarction.23 But the main question facing healthy users is whether improvement in these end points translates to longer life or better quality of life. These questions remain unresolved.

Aspirin as primary prevention

Another example of the importance of considering all the outcomes is the issue of aspirin as primary prevention.

Enthusiasm for aspirin as primary prevention has been recently reinvigorated, with data showing it can prevent colorectal cancers that overexpress cyclooxygenase-2.24 But a meta-analysis of nine randomized trials of aspirin25 with more than 1,000 participants found that, although aspirin decreases the rate of nonfatal myocardial infarction (odds ratio [OR] 0.80, 95% CI 0.67–0.96), it does not significantly reduce cancer mortality (OR 0.93, 95% CI 0.84–1.03), and it increases the risk of nontrivial bleeding (OR 1.31; 95% CI 1.14–1.50). Its effects on overall mortality were not statistically significant but were possibly favorable (OR 0.94, 95% CI 0.88–1.00), so this requires further study.

After broad consideration of the risks and benefits of aspirin, the US Food and Drug Administration has issued a statement that aspirin is not recommended as primary prevention.26

 

 

WHY STUDIES LOOK ONLY AT SOME OUTCOMES

There are many reasons why researchers favor examining some outcomes over others, but there is no clear justification for ignoring overall mortality. Overall mortality should routinely be examined in large population studies of diet and supplements and in trials of medications27 and cancer screening.

Healthy people do not care about some outcomes; they care about all outcomes

With regard to large observational studies, it is hard to understand why some would not include survival analyses, unless the results would fail to support the study’s hypothesis. In fact, some studies do report overall survival results,28 but others do not. The omission of overall survival in large data-set research should raise concerns of multiple hypothesis testing and selective reporting. Eating peaches as opposed to grapefruit may not be associated with differences in rates of all-cause mortality, myocardial infarction, pneumonia, or lung cancer, but if you look at 20 different variables, chances are that one will have a P value less than .05, and an investigator might be tempted to report it as statistically significant and even meaningful.

Empirical studies support this claim. One group found that for 80% of ingredients randomly selected from a cookbook, there existed Medline-indexed articles assessing cancer risk, with 65% of studies finding nominally significant differences in the risk of some type of cancer.29

An excess of significant findings such as this argues that significance-chasing and selective reporting are common in this field and has led to calls for methodologic improvements, including routine falsification testing30 and up-front registration of observational studies.31

WHY ALL OUTCOMES MATTER

Healthy people do not care about some outcomes; they care about all outcomes. Some patients may truly have unique priorities (quality of life vs quantity of life), but others may overestimate their risk of death from some causes and underestimate their risk from others, and practitioners have the obligation to counsel them appropriately.

For instance, a patient who watches a brother pass away from pancreatic adenocarcinoma may wish to do everything possible to avoid that illness. But often, as in this case, fear may surpass risk. The patient’s risk of pancreatic cancer is no different than that in the general population: the best data show32 an odds ratio of 1.8, with a confidence interval spanning 1. As such, pancreatic cancer is still not among his five most likely causes of death.

Some patients may care about their bone mineral density or cholesterol level. But again, physicians have an obligation to direct patients’ attention to all of the outcomes that should be of interest to them.

OBJECTIONS TO INCLUDING ALL OUTCOMES

There are important objections to the argument I am presenting here.

First, including all outcomes is expensive. For studies involving retrospective analysis of existing data, looking at overall mortality would not incur additional costs, only an additional analysis. But for prospective trials to have statistical power to detect a difference in overall mortality, larger sample sizes or longer follow-up might be needed—either of which would add to the cost.

In chemoprevention trials, the rate of incident cancer has been called the gold standard end point.33 To design a thrifty chemoprevention study, investigators can either target a broad population and aim for incident malignancy, or target a restricted, high-risk population and aim for overall mortality. The latter is preferable because although it can inform the decisions of only some people, the former cannot inform any people, as was seen with difficulties in interpreting the Prostate Cancer Prevention Trial and trials showing reduced breast cancer incidence from tamoxifen, raloxifene, and exemestane.

In large cancer screening trials, the cost of powering the trial for overall mortality would be greater, and though a carefully selected, high-risk population can be enrolled, historically this has not been popular. In cancer screening, it is a mistake to contrast the costs of trials powered for overall mortality with those of lesser studies examining disease-specific death. Instead, we must consider the larger societal costs incurred by cancer screening that does not truly improve quantity or quality of life.34

The recent reversal of recommendations for prostate-specific antigen testing by the United States Preventive Services Task Force35 suggests that erroneous recommendations, practiced for decades, can cost society hundreds of billions of dollars but fail to improve meaningful outcomes.

The history of medicine is replete with examples of widely recommended practices and interventions that not only failed to improve the outcomes they claimed to improve, but at times increased the rate of all-cause mortality or carried harms that far outweighed benefits.36,37 The costs of conducting research to fully understand all outcomes are only a fraction of the costs of a practice that is widely disseminated.38

The history of medicine is replete with practices that harmed more than helped

A second objection to my analysis is that there is more to life than survival, and outcomes besides overall mortality are important. This is a self-evident truth. That an intervention improves the rate of overall mortality is neither necessary nor sufficient for its recommendation. Practices may improve survival but worsen quality of life to such a degree that they should not be recommended. Conversely, practices that improve quality of life should be endorsed even if they fail to prolong life.

Thus, overall mortality and quality of life must be considered together, but the end points that are favored currently (disease-specific death, incident cancer, diabetes mellitus, myocardial infarction) do not do a good job of capturing either. Disease-specific death is not meaningful to any patient if deaths from other causes are increased so that overall mortality is unchanged. Furthermore, preventing a diagnosis of cancer or diabetes may offer some psychological comfort, but well-crafted quality-of-life instruments are best suited to capture just how great that benefit is and whether it justifies the cost of such interventions, particularly if the rate of survival is unchanged.

Preventing stroke or myocardial infarction is important, but we should be cautious of interpreting data when decreasing the rate of these morbid events does not lead to commensurate improvements in survival. Alternatively, if morbid events are truly avoided but survival analyses are underpowered, quality-of-life measurements should demonstrate the benefit. But the end points currently used capture neither survival nor quality of life in a meaningful way.

WHEN ADVISING HEALTHY PEOPLE

Looking at all outcomes is important when caring for patients who are sick, but even more so for patients who are well. We need to know an intervention has a net benefit before we recommend it to a healthy person. Overall mortality should be reported routinely in this population, particularly in settings where the cost to do so is trivial (ie, in observational studies). Designers of thrifty trials should try to include people at high risk and power the trial for definite end points, rather than being broadly inclusive and reaching disputed conclusions. Research and decision-making should look at all outcomes. Healthy people deserve no less.

Before we dispense advice about staying healthy, we should know the effect of whatever we are recommending—be it diet, supplements, chemoprevention, or screening—on all meaningful outcomes, including overall mortality, quality of life, harms, inconveniences, and cost. Even though looking at all these outcomes may seem self-evidently wise, many research studies do not do it, and health care providers do not do it enough.

How would looking at all the outcomes change our opinion of health practices?

COMPARING GRAPEFRUIT AND PEACHES

A 2013 study linked eating berries with lower rates of myocardial infarction in women,1 another found that people who ate some fruits (blackberries and grapefruit) but not others (peaches and oranges) had a lower rate of incident diabetes,2 and a third linked a healthy diet to a lower incidence of pancreatic cancer.3 However, none of these studies examined all-cause mortality rates. A fourth study found that drinking green tea was associated with a lower risk of death from pneumonia in Japanese women, but not men.4

For the sake of argument, let us put aside concern about whether observational studies can reliably inform recommendations for clinical practice5 and concede that they can. The point is that studies such as those above look at some but not all meaningful outcomes, undermining the utility of their findings. If healthy people conclude that they should eat grapefruit instead of peaches, they may miss out on benefits of peaches that the study did not examine. Eating a healthy diet remains prudent, but the study linking it to a lower rate of pancreatic cancer is no tipping point, as pancreatic cancer is just one way to die. And advocating green tea to Japanese women but not men, to avoid pneumonia, would be a questionable public health strategy. Pneumonia is the sixth leading cause of death and accounts for 3.9% of disability-adjusted life-years lost,6 but what about the first five causes, which account for 96.1%?

We should know the effect of what we recommend on all meaningful outcomes

These and many other studies of dietary habits of people who are well fail to consider end points that healthy people care about. Suppose that drinking more coffee would prevent all deaths from pancreatic cancer but would modestly increase cardiovascular deaths—say, by 5%. On a population level, recommending more coffee would be wrong, because it would result in far more deaths. Suppose that drinking tea decreased deaths from pneumonia—we should still not advise patients to drink tea, as we do not know whether tea’s net effect is beneficial.

Some may argue that these epidemiologic studies are merely hypothesis-generating, but my colleagues and I analyzed all the nonrandomized studies published in several leading medical journals in 1 year and found that 59% made specific practice recommendations.5 Other studies may be misused in this fashion, even though the authors refrained from doing so.

CALCIUM PROTECTS BONES, BUT WHAT ABOUT THE HEART?

Narrow end points are not limited to dietary studies. Calcium supplementation with or without vitamin D has been vigorously promoted for decades7 to treat and prevent osteoporosis in pre- and postmenopausal women, and data confirm that these agents decrease the risk of fracture.8

But bone health is only one end point important to women, and long-term supplementation of a mineral or vitamin with the goal of strengthening bones may have unforeseen adverse effects.

In 2010, calcium supplementation without vitamin D was linked to higher rates of myocardial infarction (with some suggestion of increased rates of all-cause death) in pooled analyses of 15 trials.9 In 2011, a higher risk of cardiovascular events (stroke and myocardial infarction) was found in recipients of calcium with vitamin D in a reanalysis of the Women’s Health Initiative Calcium/Vitamin D Supplementation Study,10 adjusting for the widespread use of these supplements at baseline, and this was corroborated by a meta-analysis of eight other studies.10 A more recent study confirmed that supplemental calcium increases cardiovascular risk in men.11

Although the increase in cardiovascular risk seems to be modest, millions of people take calcium supplements; thus, many people may be harmed. Our exuberance for bone health suggests that, at times, a single outcome can distract.

DOES SCREENING IMPROVE SURVIVAL?

On the whole, the evidence for screening continues to focus only on certain outcomes. With the exception of the National Lung Cancer Screening Trial,12 to date, no cancer screening trial has shown an improvement in the overall survival rate.

In fact, a 2013 Cochrane review13 found that mammographic screening failed to lower the rate of death from all cancers, including breast cancer, after 10 years (relative risk [RR] 1.02, 95% confidence interval [CI] 0.95–1.10) and the rate of death from all causes after 13 years (RR 0.99, 95% CI 0.95–1.03). Although screening lowered the breast cancer mortality rate, the authors argued that we should not look at only some outcomes and concluded that “breast cancer mortality was an unreliable outcome” that was biased in favor of screening, mainly because of “differential misclassification of cause of death.”13

Significance-chasing and selective reporting are common in observational studies

Black et al14 found that of 12 major cancer screening trials examining both disease-specific mortality and all-cause mortality, 5 had differences in mortality rates that went in opposite directions (eg, the rate of disease-specific mortality improved while overall survival was harmed, or vice-versa), suggesting paradoxical effects. In another 2 studies, differences in all-cause mortality exceeded gains in disease-specific mortality. Thus, in 7 (58%) of the 12 trials, inconsistencies existed between rates of disease-specific mortality and all-cause mortality, prompting doubt about the conclusions of the studies.14

For some cancers, data suggest that screening increases deaths from other causes, and these extra deaths are not included in the data on disease-specific mortality. For instance, men who are screened for prostate cancer have higher rates of death from cardiovascular disease and suicide,15 which might negate the tenuous benefits of screening in terms of deaths from prostate cancer.

Studies of screening for diseases other than cancer have also focused on only some outcomes. For example, the United States Preventive Services Task Force supports screening for abdominal aortic aneurysm once with ultrasonography in men ages 65 to 75 who have ever smoked,16 but the recommendation is based on improvements in the death rate from abdominal aortic aneurysm, not in all-cause mortality.17 This, along with a declining incidence of this disease and changes in how it is treated (with endovascular repair on the rise and open surgical repair declining), has led some to question if we should continue to screen for it.18

CHEMOPREVENTION: NO FREE LUNCH

Finasteride

In 2013, an analysis19 that looked at all of the outcomes laid to rest 10 years of debate over the Prostate Cancer Prevention Trial, which had randomized more than 18,000 healthy men over age 55 with no signs or symptoms of prostate cancer to receive finasteride or placebo, with the end point of prostate cancer incidence. The initial results, published in 2003,20 had found that the drug decreased the rate of incident prostate cancer but paradoxically increased the rate of high-grade (Gleason score ≥ 7) tumors. Whether these results were real or an artifact of ascertainment was debated, as was whether the adverse effects—decreases in sexual potency, libido, and ejaculation—were worth the 25% reduction in prostate cancer incidence.

Much of the debate ended with the 2013 publication, which showed that regardless of finasteride’s effect on prostate cancer, overall mortality curves at 18-year follow-up were absolutely indistinguishable.19 Healthy patients hoping that finasteride will help them live longer or better can be safely told that it does neither.

Statins as primary prevention

As for statin therapy as primary prevention, the best meta-analysis to date (which meticulously excluded secondary-prevention patients after analyzing individual patient-level data) found no improvement in overall mortality despite more than 240,000 patient-years of follow-up.21 Because of this, and because the harms of statin therapy are being increasingly (but still poorly) documented, widespread use of statins has been questioned.22

Proponents point to the ability of statins to reduce end points such as revascularization, stroke, and nonfatal myocardial infarction.23 But the main question facing healthy users is whether improvement in these end points translates to longer life or better quality of life. These questions remain unresolved.

Aspirin as primary prevention

Another example of the importance of considering all the outcomes is the issue of aspirin as primary prevention.

Enthusiasm for aspirin as primary prevention has been recently reinvigorated, with data showing it can prevent colorectal cancers that overexpress cyclooxygenase-2.24 But a meta-analysis of nine randomized trials of aspirin25 with more than 1,000 participants found that, although aspirin decreases the rate of nonfatal myocardial infarction (odds ratio [OR] 0.80, 95% CI 0.67–0.96), it does not significantly reduce cancer mortality (OR 0.93, 95% CI 0.84–1.03), and it increases the risk of nontrivial bleeding (OR 1.31; 95% CI 1.14–1.50). Its effects on overall mortality were not statistically significant but were possibly favorable (OR 0.94, 95% CI 0.88–1.00), so this requires further study.

After broad consideration of the risks and benefits of aspirin, the US Food and Drug Administration has issued a statement that aspirin is not recommended as primary prevention.26

 

 

WHY STUDIES LOOK ONLY AT SOME OUTCOMES

There are many reasons why researchers favor examining some outcomes over others, but there is no clear justification for ignoring overall mortality. Overall mortality should routinely be examined in large population studies of diet and supplements and in trials of medications27 and cancer screening.

Healthy people do not care about some outcomes; they care about all outcomes

With regard to large observational studies, it is hard to understand why some would not include survival analyses, unless the results would fail to support the study’s hypothesis. In fact, some studies do report overall survival results,28 but others do not. The omission of overall survival in large data-set research should raise concerns of multiple hypothesis testing and selective reporting. Eating peaches as opposed to grapefruit may not be associated with differences in rates of all-cause mortality, myocardial infarction, pneumonia, or lung cancer, but if you look at 20 different variables, chances are that one will have a P value less than .05, and an investigator might be tempted to report it as statistically significant and even meaningful.

Empirical studies support this claim. One group found that for 80% of ingredients randomly selected from a cookbook, there existed Medline-indexed articles assessing cancer risk, with 65% of studies finding nominally significant differences in the risk of some type of cancer.29

An excess of significant findings such as this argues that significance-chasing and selective reporting are common in this field and has led to calls for methodologic improvements, including routine falsification testing30 and up-front registration of observational studies.31

WHY ALL OUTCOMES MATTER

Healthy people do not care about some outcomes; they care about all outcomes. Some patients may truly have unique priorities (quality of life vs quantity of life), but others may overestimate their risk of death from some causes and underestimate their risk from others, and practitioners have the obligation to counsel them appropriately.

For instance, a patient who watches a brother pass away from pancreatic adenocarcinoma may wish to do everything possible to avoid that illness. But often, as in this case, fear may surpass risk. The patient’s risk of pancreatic cancer is no different than that in the general population: the best data show32 an odds ratio of 1.8, with a confidence interval spanning 1. As such, pancreatic cancer is still not among his five most likely causes of death.

Some patients may care about their bone mineral density or cholesterol level. But again, physicians have an obligation to direct patients’ attention to all of the outcomes that should be of interest to them.

OBJECTIONS TO INCLUDING ALL OUTCOMES

There are important objections to the argument I am presenting here.

First, including all outcomes is expensive. For studies involving retrospective analysis of existing data, looking at overall mortality would not incur additional costs, only an additional analysis. But for prospective trials to have statistical power to detect a difference in overall mortality, larger sample sizes or longer follow-up might be needed—either of which would add to the cost.

In chemoprevention trials, the rate of incident cancer has been called the gold standard end point.33 To design a thrifty chemoprevention study, investigators can either target a broad population and aim for incident malignancy, or target a restricted, high-risk population and aim for overall mortality. The latter is preferable because although it can inform the decisions of only some people, the former cannot inform any people, as was seen with difficulties in interpreting the Prostate Cancer Prevention Trial and trials showing reduced breast cancer incidence from tamoxifen, raloxifene, and exemestane.

In large cancer screening trials, the cost of powering the trial for overall mortality would be greater, and though a carefully selected, high-risk population can be enrolled, historically this has not been popular. In cancer screening, it is a mistake to contrast the costs of trials powered for overall mortality with those of lesser studies examining disease-specific death. Instead, we must consider the larger societal costs incurred by cancer screening that does not truly improve quantity or quality of life.34

The recent reversal of recommendations for prostate-specific antigen testing by the United States Preventive Services Task Force35 suggests that erroneous recommendations, practiced for decades, can cost society hundreds of billions of dollars but fail to improve meaningful outcomes.

The history of medicine is replete with examples of widely recommended practices and interventions that not only failed to improve the outcomes they claimed to improve, but at times increased the rate of all-cause mortality or carried harms that far outweighed benefits.36,37 The costs of conducting research to fully understand all outcomes are only a fraction of the costs of a practice that is widely disseminated.38

The history of medicine is replete with practices that harmed more than helped

A second objection to my analysis is that there is more to life than survival, and outcomes besides overall mortality are important. This is a self-evident truth. That an intervention improves the rate of overall mortality is neither necessary nor sufficient for its recommendation. Practices may improve survival but worsen quality of life to such a degree that they should not be recommended. Conversely, practices that improve quality of life should be endorsed even if they fail to prolong life.

Thus, overall mortality and quality of life must be considered together, but the end points that are favored currently (disease-specific death, incident cancer, diabetes mellitus, myocardial infarction) do not do a good job of capturing either. Disease-specific death is not meaningful to any patient if deaths from other causes are increased so that overall mortality is unchanged. Furthermore, preventing a diagnosis of cancer or diabetes may offer some psychological comfort, but well-crafted quality-of-life instruments are best suited to capture just how great that benefit is and whether it justifies the cost of such interventions, particularly if the rate of survival is unchanged.
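
A toy calculation makes the point about disease-specific death concrete (the numbers below are invented for illustration and are not drawn from any cited trial):

```python
# Toy illustration of why a falling disease-specific death rate need not mean
# anyone lives longer. ASSUMED deaths per 10,000 people; illustrative only.
control = {"target cancer": 30, "other causes": 970}
screened = {"target cancer": 20, "other causes": 980}  # e.g., harms of workup

for arm, deaths in (("control", control), ("screened", screened)):
    print(arm, "- disease-specific:", deaths["target cancer"],
          "overall:", sum(deaths.values()))
# Disease-specific mortality falls by a third, yet overall mortality is
# identical: the surrogate end point flatters the intervention.
```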

Preventing stroke or myocardial infarction is important, but we should be cautious in interpreting data when decreasing the rate of these morbid events does not lead to commensurate improvements in survival. Alternatively, if morbid events are truly avoided but survival analyses are underpowered, quality-of-life measurements should demonstrate the benefit. But the end points currently used capture neither survival nor quality of life in a meaningful way.

WHEN ADVISING HEALTHY PEOPLE

Looking at all outcomes is important when caring for patients who are sick, but even more so for patients who are well. We need to know that an intervention has a net benefit before we recommend it to a healthy person. Overall mortality should be reported routinely in this population, particularly in settings where the cost to do so is trivial (ie, in observational studies). Designers of thrifty trials should try to include people at high risk and power the trial for definitive end points, rather than being broadly inclusive and reaching disputed conclusions. Research and decision-making should look at all outcomes. Healthy people deserve no less.

References
  1. Cassidy A, Mukamal KJ, Liu L, Franz M, Eliassen AH, Rimm EB. High anthocyanin intake is associated with a reduced risk of myocardial infarction in young and middle-aged women. Circulation 2013; 127:188–196.
  2. Muraki I, Imamura F, Manson JE, et al. Fruit consumption and risk of type 2 diabetes: results from three prospective longitudinal cohort studies. BMJ 2013; 347:f5001.
  3. Arem H, Reedy J, Sampson J, et al. The Healthy Eating Index 2005 and risk for pancreatic cancer in the NIH-AARP study. J Natl Cancer Inst 2013; 105:1298–1305.
  4. Watanabe I, Kuriyama S, Kakizaki M, et al. Green tea and death from pneumonia in Japan: the Ohsaki cohort study. Am J Clin Nutr 2009; 90:672–679.
  5. Prasad V, Jorgenson J, Ioannidis JP, Cifu A. Observational studies often make clinical practice recommendations: an empirical evaluation of authors’ attitudes. J Clin Epidemiol 2013; 66:361–366.e4.
  6. Murray CJ, Vos T, Lozano R, et al. Disability-adjusted life years (DALYs) for 291 diseases and injuries in 21 regions, 1990-2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012; 380:2197–2223.
  7. Eastell R. Treatment of postmenopausal osteoporosis. N Engl J Med 1998; 338:736–746.
  8. Looker AC. Interaction of science, consumer practices and policy: calcium and bone health as a case study. J Nutr 2003; 133:1987S–1991S.
  9. Bolland MJ, Avenell A, Baron JA, et al. Effect of calcium supplements on risk of myocardial infarction and cardiovascular events: meta-analysis. BMJ 2010; 341:c3691.
  10. Bolland MJ, Grey A, Avenell A, Gamble GD, Reid IR. Calcium supplements with or without vitamin D and risk of cardiovascular events: reanalysis of the Women’s Health Initiative limited access dataset and meta-analysis. BMJ 2011; 342:d2040.
  11. Xiao Q, Murphy RA, Houston DK, Harris TB, Chow WH, Park Y. Dietary and supplemental calcium intake and cardiovascular disease mortality: the National Institutes of Health-AARP diet and health study. JAMA Intern Med 2013; 173:639–646.
  12. The National Lung Screening Trial Research Team. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med 2011; 365:395–409.
  13. Gøtzsche PC, Jørgensen KJ. Screening for breast cancer with mammography. Cochrane Database Syst Rev 2013 Jun 4;6:CD001877.
  14. Black WC, Haggstrom DA, Welch HG. All-cause mortality in randomized trials of cancer screening. J Natl Cancer Inst 2002; 94:167–173.
  15. Fall K, Fang F, Mucci LA, et al. Immediate risk for cardiovascular events and suicide following a prostate cancer diagnosis: prospective cohort study. PLoS Med 2009; 6:e1000197.
  16. Prasad V. An unmeasured harm of screening. Arch Intern Med 2012; 172:1442–1443.
  17. Guirguis-Blake JM, Beil TL, Senger CA, Whitlock EP. Ultrasonography screening for abdominal aortic aneurysms: a systematic evidence review for the U.S. Preventive Services Task Force. Ann Intern Med 2014; 160:321–329.
  18. Harris R, Sheridan S, Kinsinger L. Time to rethink screening for abdominal aortic aneurysm? Arch Intern Med 2012; 172:1462–1463.
  19. Thompson IM Jr, Goodman PJ, Tangen CM, et al. Long-term survival of participants in the prostate cancer prevention trial. N Engl J Med 2013; 369:603–610.
  20. Thompson IM, Goodman PJ, Tangen CM, et al. The influence of finasteride on the development of prostate cancer. N Engl J Med 2003; 349:216–224.
  21. Ray KK, Seshasai SR, Erqou S, et al. Statins and all-cause mortality in high-risk primary prevention: a meta-analysis of 11 randomized controlled trials involving 65,229 participants. Arch Intern Med 2010; 170:1024–1031.
  22. Redberg RF, Katz MH. Healthy men should not take statins. JAMA 2012; 307:1491–1492.
  23. McEvoy JW, Blumenthal RS, Blaha MJ. Statin therapy for hyperlipidemia. JAMA 2013; 310:1184–1185.
  24. Chan AT, Ogino S, Fuchs CS. Aspirin and the risk of colorectal cancer in relation to the expression of COX-2. N Engl J Med 2007; 356:2131–2142.
  25. Seshasai SR, Wijesuriya S, Sivakumaran R, et al. Effect of aspirin on vascular and nonvascular outcomes: meta-analysis of randomized controlled trials. Arch Intern Med 2012; 172:209–216.
  26. US Food and Drug Administration. Use of aspirin for primary prevention of heart attack and stroke. www.fda.gov/Drugs/ResourcesForYou/Consumers/ucm390574.htm. Accessed February 5, 2015.
  27. Ioannidis JP. Mega-trials for blockbusters. JAMA 2013; 309:239–240.
  28. Dunkler D, Dehghan M, Teo KK, et al; ONTARGET Investigators. Diet and kidney disease in high-risk individuals with type 2 diabetes mellitus. JAMA Intern Med 2013; 173:1682–1692.
  29. Schoenfeld JD, Ioannidis JP. Is everything we eat associated with cancer? A systematic cookbook review. Am J Clin Nutr 2013; 97:127–134.
  30. Prasad V, Jena AB. Prespecified falsification end points: can they validate true observational associations? JAMA 2013; 309:241–242.
  31. Ioannidis JPA. The importance of potential studies that have not existed and registration of observational data sets. JAMA 2012; 308:575–576.
  32. Klein AP, Brune KA, Petersen GM, et al. Prospective risk of pancreatic cancer in familial pancreatic cancer kindreds. Cancer Res 2004; 64:2634–2638.
  33. William WN Jr, Papadimitrakopoulou VA. Optimizing biomarkers and endpoints in oral cancer chemoprevention trials. Cancer Prev Res (Phila) 2013; 6:375–378.
  34. Prasad V. Powering cancer screening for overall mortality. Ecancermedicalscience 2013 Oct 9; 7:ed27.
  35. US Preventive Services Task Force. Final recommendation statement. Prostate cancer: screening. http://www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/prostate-cancer-screening. Accessed February 5, 2015.
  36. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307:37–38.
  37. Prasad V, Vandross A, Toomey C, et al. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc 2013; 88:790–798.
  38. Elshaug AG, Garber AM. How CER could pay for itself—insights from vertebral fracture treatments. N Engl J Med 2011; 364:1390–1393.

Dense breasts and legislating medicine

Article Type
Changed
Mon, 09/25/2017 - 15:08
Display Headline
Dense breasts and legislating medicine

Recently, Nevada,1 North Carolina, and Oregon joined nine other US states (as of this writing) in enacting laws that require informing women if they have dense breast tissue detected on mammography.2 Laws are pending in other states, and federal legislation has been introduced in the US House of Representatives.

See related commentary

THE POWER OF ADVOCACY TO CHANGE MEDICAL PRACTICE

One such bill3 was introduced as a result of the advocacy of a single patient, Nancy Cappello, a Connecticut woman who was not informed that she had dense breasts and was later found to have node-positive breast cancer.4

While new medical practices are rarely credited to the efforts of a single physician or researcher, these “dense-breast laws” show the influence a single patient can have in health care. The evidence behind these laws and their implications bring to the forefront the role of advocacy and legislation in the practice of medicine.

Dense-breast laws are the latest chapter in how legislative action can change the practice of medicine. Proof that advocacy could use law to change medical practice emerged in the early 1990s in the wake of AIDS activism. Patient-advocacy activists lobbied for early access to investigational agents, arguing that traditional pathways of clinical testing would deprive terminally ill patients of potentially lifesaving treatments. These efforts led the US Food and Drug Administration (FDA) to create the Accelerated Approval Program, which allows new drugs to garner approval based on surrogate end-point data for terminal or neglected diseases. Accelerated approval was codified into law in 1997 in the FDA’s Modernization Act.5 In 2012, legislative action further broadened the ability of the FDA to approve new products based on surrogate data,6 with the FDA’s Safety and Innovation Act, which provides for first-time approval of a drug based on “pharmacologic” end points that are even more limited.6

Although proponents have declared success when legislative action lowers the bar for drug and device approval, independent analyses have been more critical. In 2009, accelerated approval underwent significant scrutiny when the Government Accountability Office issued a report summarizing 16 years of the program.7 Over the program’s life span, the FDA called for 144 postmarketing studies, but more than one-third of these remained incomplete. Moreover, in 13 years, the FDA never exercised its power to expedite the withdrawal of a drug from the market.

Many accelerated approvals have created considerable controversy. Bevacizumab for metastatic breast cancer was ultimately found to confer no survival benefit, and its approval was revoked.8 Gemtuzumab ozogamicin for acute myeloid leukemia may be effective, but not at the dose that was approved.9 And midodrine hydrochloride and many other drugs remain untested.10

DOES THIS INFORMATION HELP PATIENTS? WHAT WOULD THEY DO WITH IT?

The question with dense-breast laws is similar to that facing other legal efforts to change medicine: Does it actually help patients? Will the information doctors disclose lead to appropriate interventions that improve health outcomes, or instead to cascades of testing and biopsies that worsen overdiagnosis?

Like accelerated approval, mandating disclosure of breast density is an intervention with uncertain efficacy. While increased breast density has been shown to increase a woman’s risk of developing breast cancer, it appears to be neutral with respect to her chances of dying of breast cancer.11 In other words, it does not identify patients who will experience aggressive disease.

Next comes the larger question of what women would do with this information. Will they simply be more compliant with existing screening recommendations, or will they seek additional testing? This is where the greatest uncertainty lies. The utility of additional testing with ultrasonography or magnetic resonance imaging (MRI) remains uncertain in this population. We will certainly find more cancers if we use MRI to screen women, but it remains unclear if this translates to improved outcomes.

A recent study shows just this.12 In Connecticut, breast density notification is mandatory, as is insurance coverage for screening (whole-breast) ultrasonography. Since the passage of these laws, Yale Medical Center has screened 935 women with dense breasts using ultrasonography. Over the same period, the center performed roughly 16,000 mammograms; thus, the breast density law applied to roughly 1 of every 16 (6.25%) studies. Of the 935 women, 54 (5.8%) underwent biopsy, mostly needle biopsy (46); three patients underwent surgical excision, and five cysts were aspirated. From these efforts, two subcentimeter cancers and one case of ductal carcinoma in situ were found. Thus, only 3.7% of women undergoing biopsy and fewer than 1% of women undergoing ultrasonography were found to have cancer.
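
The yields reported above can be reconstructed directly from the study’s counts (a quick sketch; the numbers are those quoted in the preceding paragraph, with the mammogram total rounded):

```python
# Reproducing the yield figures from the Connecticut experience described above.
mammograms = 16_000                # approximate total over the same period
dense_breast_ultrasounds = 935
biopsies = 54
cancers_on_biopsy = 2              # subcentimeter cancers (DCIS counted separately)
dcis = 1

print(f"Ultrasound per mammogram:      {dense_breast_ultrasounds / mammograms:.1%}")
print(f"Biopsy per ultrasound:         {biopsies / dense_breast_ultrasounds:.1%}")
print(f"Cancer per biopsy:             {cancers_on_biopsy / biopsies:.1%}")
print(f"Any malignancy per ultrasound: {(cancers_on_biopsy + dcis) / dense_breast_ultrasounds:.2%}")
# Roughly 6% of mammograms triggered ultrasonography, about 6% of those led
# to biopsy, and fewer than 1% of screened women had any malignancy found.
```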

Of course, given the nature of this study, we cannot know what would have happened without referral and testing. However, empirical research suggests that detecting a breast cancer with screening does not mean a life was saved.13 In fact, only a minority of such women (13%) can credit screening with a survival gain.13

In a study14 that compared women with dense breasts who underwent annual vs biennial screening, no difference in the rate of advanced or metastatic disease was seen with more frequent screening, but the rates of false-positive results and biopsies were higher.14

Notably, dense-breast legislation comes at a time when fundamental questions have been raised about the impact of screening on breast cancer. A prominent study of trends in US breast cancer incidence and death rates over the last 30 years shows that even under the most favorable assumptions, mammography has produced a substantial excess of breast cancer diagnoses but little change in the breast cancer mortality rate.15 It is entirely possible that more-aggressive screening in women with dense breasts will only exacerbate this problem. Advocacy may harm rather than help these patients.

We are often told that laws such as the dense-breast bills are motivated by public demand and patient advocacy. However, we do not know whether the vocal proponents of dense-breast laws represent the average woman’s desires. These efforts may simply be another case of a vocal and passionate minority overcoming a large and indifferent majority.16

LEGISLATING MEDICAL PRACTICE IS A BOLD STEP

Dense-breast laws present an additional challenge: they cannot be changed as quickly as scientific understanding. In other words, if the medical field comes to believe that notification is generally harmful because it leads to more biopsies but not better health, can the law be changed rapidly enough to reflect this? There is ample precedent for the reversal of medical practices,17,18 particularly those based on scant evidence, including recommended screening tests (most notably, recent changes to prostate-specific antigen guidelines). But in all these other cases, law did not mandate the practice or recommendation. Laws are often slow to adapt to changes in understanding.

Legislating medical practice is a bold step, and even those who feel it is occasionally warranted must hold themselves to a rational guiding principle. We have incontrovertible evidence that flexible sigmoidoscopy can reduce the number of deaths from colorectal cancer, but no state mandates that doctors inform their patients of this fact. A patient’s ejection fraction serves as a marker of benefit for several lifesaving drugs and devices, yet no state mandates that physicians disclose this information to patients after echocardiography.

All of us in health care—physicians, researchers, nurses, practitioners, and patients—are patient advocates, and we all want policies that promote human health. However, doing so means adhering to practices grounded in evidence. Dense-breast laws serve as a reminder that good intentions and good people may be necessary—but are not sufficient—for sound policy.

References
  1. Nevada Legislature. Requires the notification of patients regarding breast density. (BDR 40-172). http://www.leg.state.nv.us/Session/77th2013/Reports/history.cfm?ID=371. Accessed November 7, 2013.
  2. ImagingBIZ Newswire. Nevada Governor Signs Breast Density Law. June 10, 2013. http://www.imagingbiz.com/articles/news/nevada-governor-signs-breast-density-law. Accessed August 1, 2013.
  3. Are You Dense Advocacy. H.R. 3102, 112th Congress. Breast Density and Mammography Reporting Act of 2011. http://www.congressweb.com/areyoudenseadvocacy/Bills/Detail/id/12734. Accessed November 7, 2013.
  4. The New York Times. New Laws Add a Divisive Component to Breast Screening. http://www.nytimes.com/2012/10/25/health/laws-tell-mammogram-clinics-to-address-breast-density.html?pagewanted=all. Accessed November 7, 2013.
  5. Reichert JM. Trends in development and approval times for new therapeutics in the United States. Nat Rev Drug Discov 2003; 2:695–702.
  6. Kramer DB, Kesselheim AS. User fees and beyond—the FDA Safety and Innovation Act of 2012. N Engl J Med 2012; 367:1277–1279.
  7. US Government Accountability Office (GAO). New Drug Approval: FDA Needs to Enhance Its Oversight of Drugs Approved on the Basis of Surrogate Endpoints. GAO-09-866. http://www.gao.gov/products/GAO-09-866. Accessed November 7, 2013.
  8. Ocaña A, Amir E, Vera F, Eisenhauer EA, Tannock IF. Addition of bevacizumab to chemotherapy for treatment of solid tumors: similar results but different conclusions. J Clin Oncol 2011; 29:254–256.
  9. Rowe JM, Löwenberg B. Gemtuzumab ozogamicin in acute myeloid leukemia: a remarkable saga about an active drug. Blood 2013; 121:4838–4841.
  10. Dhruva SS, Redberg RF. Accelerated approval and possible withdrawal of midodrine. JAMA 2010; 304:2172–2173.
  11. Gierach GL, Ichikawa L, Kerlikowske K, et al. Relationship between mammographic density and breast cancer death in the Breast Cancer Surveillance Consortium. J Natl Cancer Inst 2012; 104:1218–1227.
  12. Hooley RJ, Greenberg KL, Stackhouse RM, Geisel JL, Butler RS, Philpotts LE. Screening US in patients with mammographically dense breasts: initial experience with Connecticut Public Act 09-41. Radiology 2012; 265:59–69.
  13. Welch HG, Frankel BA. Likelihood that a woman with screen-detected breast cancer has had her “life saved” by that screening. Arch Intern Med 2011; 171:2043–2046.
  14. Kerlikowske K, Zhu W, Hubbard RA, et al; Breast Cancer Surveillance Consortium. Outcomes of screening mammography by frequency, breast density, and postmenopausal hormone therapy. JAMA Intern Med 2013; 173:807–816.
  15. Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med 2012; 367:1998–2005.
  16. New York Review of Books. Facing the Real Gun Problem. http://www.nybooks.com/articles/archives/2013/jun/20/facing-real-gunproblem. Accessed November 7, 2013.
  17. Prasad V, Gall V, Cifu A. The frequency of medical reversal. Arch Intern Med 2011; 171:1675–1676.
  18. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307:37–38.
Author and Disclosure Information

Nancy Ho, MD
Division of Gastroenterology, Department of Medicine, University of Maryland, Baltimore

Julie Kim, MD
Department of Medicine, Northwestern University, Chicago, IL

Vinay Prasad, MD
Medical Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD

Address: Vinay Prasad, MD, Medical Oncology Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr. 10/12N226, Bethesda, MD 20892; e-mail: [email protected]

The views and opinions of Dr. Prasad do not necessarily reflect those of the National Cancer Institute or the National Institutes of Health.


The overdiagnosis of pneumonia

Article Type
Changed
Mon, 09/25/2017 - 13:55
Display Headline
The overdiagnosis of pneumonia

Pneumonia was once considered the “old man’s friend,” but in the modern world, has it become the physician’s?

See related editorial

The definition of pneumonia has increasingly been stretched, and physicians occasionally make the diagnosis without canonical signs or symptoms, or even with negative chest radiography. The hallmark of overdiagnosis is identifying illness for which treatment is not needed or is not helpful, and some cases of pneumonia likely fit this description. Empirical evidence over the last 3 decades shows a sustained increase in the diagnosis of pneumonia, but little evidence of a decrease in the rates of pneumonia morbidity and mortality. The central problem with pneumonia is one common to many diagnoses, such as pulmonary embolism, coronary artery disease, and infectious conditions—diagnostic criteria remain divorced from outcomes data. Linking the two has the potential to improve the evidence base of medicine.

Like many long-recognized diagnoses, pneumonia lacks a standardized definition. Most physicians believe that although fever, cough, sputum production, dyspnea, and pleurisy are hallmark symptoms, confirmatory chest radiography is needed to cement the diagnosis.1 But what if a patient has only a fever, cough, and infiltrate? What if the infiltrate is not visible on radiography, but only on computed tomography (CT)? And what if the patient has a cough but is afebrile and has nonspecific findings on CT?

THE RATE OF HOSPITAL ADMISSIONS FOR PNEUMONIA IS RISING

In current clinical practice, any or all of the above cases are called pneumonia. The pneumonia label, once applied, justifies the use of antibiotics, which patients or physicians may overtly desire. One prospective observational study of six hospitals found that 21% of patients admitted with pneumonia and 43% of those treated as outpatients had negative chest radiographs.2 Empirical evidence suggests that these “soft” diagnoses may be growing in number.

In the United States, hospitalizations with discharge codes listing pneumonia increased 20% from the late 1980s to the early 2000s.3 The rates of hospitalization for the 10 other most frequent causes of admission did not change significantly over this same period, suggesting a selective increase in hospital admissions for pneumonia.

This focus on pneumonia would be justified if it led to a proportionate benefit for pneumonia outcomes. However, in the same data set, the risk of death from pneumonia did not improve more than that from the other 10 common conditions—all improved similarly—and the rate of discharge from the hospital to a long-term care facility was unchanged. We are hospitalizing more patients with pneumonia, but this has not improved outcomes beyond global trends in mortality.

Data from England suggest that overdiagnosis may be a worldwide phenomenon. Between 1997 and 2005, hospitalization rates in England for pneumonia, adjusted for age, increased 34% from 1.48 to 1.98 per 1,000 persons.4 The 30-day in-hospital death rate for pneumonia remained about the same over this period. In the absence of a paradigm-shifting technology, one that would alter hospitalization practices, or an environmental cause of increased incidence—and with pneumonia there has been neither—the most likely explanation for these documented trends is that hospitals are admitting patients with pneumonia that is less severe.

Finally, data from the 2000s that at first seemed to reverse the trend of increasing hospitalizations for pneumonia have been reanalyzed to account for alternative coding.5 For instance, a pneumonia admission may be coded with respiratory failure as the primary diagnosis and pneumonia as the secondary diagnosis. Examining data from large populations from 2002 to 2009 and correcting for such coding shows that the incidence of pneumonia has reached a plateau or has declined only slightly from the elevated rates of the early 2000s. The death rate remains unchanged.

PNEUMONIA: A DIAGNOSIS IN THE EYE OF THE BEHOLDER

Apparently, when it comes to pneumonia, the diagnosis is in the eye of the beholder. Different physicians have different thresholds for applying the label. In the wake of quality efforts to ensure that emergency physicians deliver antibiotics within 4 hours, those physicians have been shown to have worse accuracy in diagnosing pneumonia.1 But worse accuracy compared with what standard?

In an investigation by Welker et al,1 the standard definition of pneumonia was based on the one favored by the US Food and Drug Administration for clinical trials. Patients had to have all of the following:

  • A new or increasing infiltrate on radiography or CT
  • A fever, an elevated white blood cell count, or a shift to immature polymorphonuclear leukocytes
  • At least two signs or symptoms of the condition (eg, cough, dyspnea, egophany).

Although this definition is reasonable and ensures homogeneity in clinical trials, it is not steadfastly adhered to in clinical practice and has never been shown to cleanly delineate a population that benefits from antibiotics.

Another challenge to devising a perfect definition of pneumonia is the lack of a pathologic gold standard. Based on a review of 17,340 Medicare patients hospitalized for community-acquired pneumonia, microbial confirmation is often of little assistance, and a probable pathogen is identified in only 7.6% of cases.6

RATES OF OUTPATIENT DIAGNOSIS ARE LIKELY SIMILAR

Thus far, we have examined trends in inpatient diagnosis but not those of outpatient diagnosis. There is no well-done observational study that documents outpatient trends, but there is little reason to suppose the trends are different. Risk-scoring systems in pneumonia, such as the PORT7 and the CURB-65,8 have been designed to decrease unnecessary inpatient admissions, but they do not lend clarity to the diagnosis itself.

The central problem with pneumonia, as with many long-recognized clinical conditions, is that the diagnosis is separated from the treatment. In other words, although physicians are confident that antibiotics benefit patients who have what Sir William Osler would have called pneumonia (elevated white blood cell count, fever, cough, dyspnea, pleurisy, egophany, lobular infiltrate), we don’t know whether the treatment benefits patients whose pneumonia would have been unrecognizable decades ago (with cough, low-grade fever, and infiltrate on CT alone). Improvements in imaging may exacerbate the problem. In this sense, pneumonia exists on a spectrum, as do many medical diagnoses. Not all cases are equally severe, and some may not deserve to be labeled as pneumonia.

No randomized trial has compared antibiotics against supportive care in pneumonia, and, likely, no such trial is needed for clear cases. However, with the growing number of soft diagnoses, randomized trials are desperately needed to delineate where harms outweigh benefits, and where the fuzzy edge of the pneumonia diagnosis must end. And as is always the case with studies that challenge a standard of care, null results should prompt further trials.

WELL-DESIGNED TRIALS COULD END THE UNCERTAINTY

In the next few years, clinical trials, rationally planned, may end most of the uncertainty regarding pneumonia.

Existing observational data may be used to identify groups of patients who, in today’s world, are diagnosed with pneumonia but who do exceptionally well (eg, younger patients with fewer comorbidities, who present with low-grade fever but no signs of consolidation on physical examination, and with dubious results on chest radiography). These are patients for whom equipoise exists, and randomized trials should compare a strategy of antibiotics with a strategy of best supportive care. Trials should be powered for patient-centered outcomes, such as the duration and the complications of illness. The death rate should be scrupulously recorded.

Patients whose pneumonia would have been unrecognizable decades ago should be another target population for the trials I propose.

In a short time, pneumonia may become synonymous with a set of factors for lung infection that predict who will benefit from antibiotics, and who can be safely followed. Already, we are moving toward this standard in other diseases.9 For pulmonary embolism, ongoing trials are testing if anticoagulation can be safely omitted in patients with subsegmental clots (clinicaltrials.gov identifier NCT01455818). Such trials are, at last, translating old diagnoses into the language of evidence-based medicine.

For patients with pneumonia who are not hospitalized, the current outpatient therapy is based on data from studies that show a low rate of failure with empiric treatment based on consideration of the common pathogens for this condition, with few patients subsequently requiring hospitalization. Today, this reasoning is inadequate. The basis for any therapy must be proven benefit for patients with a defined condition compared with a lesser strategy. Data already demonstrate that a short course of antibiotics is no worse than a long course for many hospitalized and outpatients with pneumonia,10,11 but many other patients may require no treatment at all. The time has come to find out.

References
  1. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med 2008; 168:351356.
  2. Marrie TJ, Huang JQ. Low-risk patients admitted with community-acquired pneumonia. Am J Med 2005; 118:13571363.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988–2002. JAMA 2005; 294:27122719.
  4. Trotter CL, Stuart JM, George R, Miller E. Increasing hospital admissions for pneumonia, England. Emerg Infect Dis 2008; 14:727733.
  5. Lindenauer PK, Lagu T, Shieh MS, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003–2009. JAMA 2012; 307:14051413.
  6. Bartlett JG. Diagnostic tests for agents of community-acquired pneumonia. Clin Infect Dis 2011; 52(suppl 4):S296S304.
  7. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med 1997; 336:242250.
  8. Lim W, van der Eerden MM, Laing R, et al. Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study. Thorax 2003; 58:377382.
  9. Prasad V, Rho J, Cifu A. The diagnosis and treatment of pulmonary embolism: a metaphor for medicine in the evidence-based medicine era. Arch Intern Med 2012; 172:955958.
  10. Singh N, Rogers P, Atwood CW, Wagener MM, Yu VL. Short-course empiric antibiotic therapy for patients with pulmonary infiltrates in the intensive care unit. A proposed solution for indiscriminate antibiotic prescription. Am J Respir Crit Care Med 2000; 162:505511.
  11. Li JZ, Winston LG, Moore DH, Bent S. Efficacy of short-course antibiotic regimens for community-acquired pneumonia: a meta-analysis. Am J Med 2007; 120:783790.
Pneumonia was once considered the “old man’s friend,” but in the modern world, has it become the physician’s?

The definition of pneumonia has increasingly been stretched, and physicians occasionally make the diagnosis without canonical signs or symptoms, or even with negative chest radiography. The hallmark of overdiagnosis is identifying illness for which treatment is not needed or is not helpful, and some cases of pneumonia likely fit this description. Empirical evidence over the last 3 decades shows a sustained increase in the diagnosis of pneumonia, but little evidence of a decrease in the rates of pneumonia morbidity and mortality. The central problem with pneumonia is one common to many diagnoses, such as pulmonary embolism, coronary artery disease, and infectious conditions—diagnostic criteria remain divorced from outcomes data. Linking the two has the potential to improve the evidence base of medicine.

Like many long-recognized diagnoses, pneumonia lacks a standardized definition. Most physicians believe that although fever, cough, sputum production, dyspnea, and pleurisy are hallmark symptoms, confirmatory chest radiography is needed to cement the diagnosis.1 But what if a patient has only a fever, cough, and infiltrate? What if the infiltrate is not visible on radiography, but only on computed tomography (CT)? And what if the patient has a cough but is afebrile and has nonspecific findings on CT?

THE RATE OF HOSPITAL ADMISSIONS FOR PNEUMONIA IS RISING

In current clinical practice, any or all of the above cases are called pneumonia. The pneumonia label, once applied, justifies the use of antibiotics, which patients or physicians may overtly desire. One prospective observational study of six hospitals found that 21% of patients admitted with pneumonia and 43% of those treated as outpatients had negative chest radiographs.2 Empirical evidence suggests that the number of these “soft” diagnoses is growing.

In the United States, hospitalizations with discharge codes listing pneumonia increased 20% from the late 1980s to the early 2000s.3 The rates of hospitalization for the 10 other most frequent causes of admission did not change significantly over this same period, suggesting a selective increase in hospital admissions for pneumonia.

This focus on pneumonia would be justified if it led to a proportionate benefit for pneumonia outcomes. However, in the same data set, the risk of death from pneumonia did not improve more than that from the other 10 common conditions—all improved similarly—and the rate of discharge from the hospital to a long-term care facility was unchanged. We are hospitalizing more patients with pneumonia, but this has not improved outcomes beyond global trends in mortality.

Data from England suggest that overdiagnosis may be a worldwide phenomenon. Between 1997 and 2005, hospitalization rates in England for pneumonia, adjusted for age, increased 34% from 1.48 to 1.98 per 1,000 persons.4 The 30-day in-hospital death rate for pneumonia remained about the same over this period. In the absence of a paradigm-shifting technology, one that would alter hospitalization practices, or an environmental cause of increased incidence—and with pneumonia there has been neither—the most likely explanation for these documented trends is that hospitals are admitting patients with pneumonia that is less severe.
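
As a rough check of the arithmetic, the reported relative increase can be reproduced directly from the quoted rates (a throwaway Python snippet; only the two per-1,000 figures come from Trotter et al4):

```python
# Age-adjusted pneumonia hospitalization rates in England (per 1,000 persons).
rate_1997 = 1.48
rate_2005 = 1.98

relative_increase = (rate_2005 - rate_1997) / rate_1997
print(f"Relative increase: {relative_increase:.0%}")  # prints 34%, as quoted above
```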

Finally, data from the 2000s that at first seemed to reverse the trend of increasing hospitalizations for pneumonia have been reanalyzed to account for alternative coding.5 For instance, a pneumonia admission may be coded with respiratory failure as the primary diagnosis and pneumonia as the secondary diagnosis. Examining data from large populations from 2002 to 2009, and correcting for this coding shift, shows that the incidence of pneumonia has reached a plateau or has declined only slightly from the elevated rates of the early 2000s. The death rate remains unchanged.

PNEUMONIA: A DIAGNOSIS IN THE EYE OF THE BEHOLDER

Apparently, when it comes to pneumonia, the diagnosis is in the eye of the beholder. Different physicians have different thresholds for applying the label. In the wake of quality efforts to ensure that emergency physicians deliver antibiotics within 4 hours, their accuracy in diagnosing pneumonia has been shown to worsen.1 But worse accuracy compared with what standard?

In an investigation by Welker et al,1 the standard definition of pneumonia was based on the one favored by the US Food and Drug Administration for clinical trials. Patients had to have all of the following:

  • A new or increasing infiltrate on radiography or CT
  • A fever, an elevated white blood cell count, or a shift to immature polymorphonuclear leukocytes
  • At least two signs or symptoms of the condition (eg, cough, dyspnea, egophony).

Although this definition is reasonable and ensures homogeneity in clinical trials, it is not steadfastly adhered to in clinical practice and has never been shown to cleanly delineate a population that benefits from antibiotics.
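
To make the shape of such a composite definition concrete, here is a minimal sketch in Python. It is purely illustrative: the field names and the way signs and symptoms are counted are my assumptions, not the exact operational criteria of Welker et al1 or the FDA.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PneumoniaWorkup:
    new_or_increasing_infiltrate: bool                 # on radiography or CT
    fever: bool
    elevated_wbc: bool
    left_shift: bool                                   # immature polymorphonuclear leukocytes
    signs_and_symptoms: List[str] = field(default_factory=list)  # eg, "cough", "dyspnea", "egophony"

def meets_trial_definition(w: PneumoniaWorkup) -> bool:
    """Illustrative version of the three-part, trial-style definition above:
    all three criteria must be satisfied."""
    imaging = w.new_or_increasing_infiltrate
    laboratory = w.fever or w.elevated_wbc or w.left_shift
    clinical = len(w.signs_and_symptoms) >= 2
    return imaging and laboratory and clinical

# A "soft" case: cough and an infiltrate seen only on CT, afebrile, normal
# white count. It fails the trial definition yet may be labeled pneumonia in practice.
soft_case = PneumoniaWorkup(True, False, False, False, ["cough"])
print(meets_trial_definition(soft_case))  # False
```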

Another challenge to devising a perfect definition of pneumonia is the lack of a pathologic gold standard. In a review of 17,340 Medicare patients hospitalized for community-acquired pneumonia, microbial confirmation was often of little assistance: a probable pathogen was identified in only 7.6% of cases.6

RATES OF OUTPATIENT DIAGNOSIS ARE LIKELY SIMILAR

Thus far, we have examined trends in inpatient diagnosis but not those of outpatient diagnosis. There is no well-done observational study that documents outpatient trends, but there is little reason to suppose they differ. Risk-scoring systems for pneumonia, such as the PORT score7 and CURB-65,8 were designed to decrease unnecessary inpatient admissions, but they do not lend clarity to the diagnosis itself.
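
For readers unfamiliar with these tools, a minimal sketch of the CURB-65 calculation follows (Python; the criteria are the commonly cited ones from Lim et al,8 but the variable names and the triage bands in the comments are my simplification). Note that the score grades severity and guides triage; it says nothing about whether pneumonia is actually present.

```python
def curb65(confusion: bool, urea_mmol_per_l: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """CURB-65 severity score (0 to 5): one point per criterion met."""
    score = 0
    score += confusion                        # new-onset confusion
    score += urea_mmol_per_l > 7              # urea > 7 mmol/L
    score += resp_rate >= 30                  # respiratory rate >= 30/min
    score += systolic_bp < 90 or diastolic_bp <= 60
    score += age >= 65
    return score

# Commonly quoted triage bands (a simplification):
#   0-1: usually suitable for outpatient care
#   2:   consider a short inpatient stay or supervised outpatient care
#   3-5: manage as severe pneumonia in the hospital
print(curb65(confusion=False, urea_mmol_per_l=5.0, resp_rate=22,
             systolic_bp=128, diastolic_bp=76, age=54))  # 0
```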

The central problem with pneumonia, as with many long-recognized clinical conditions, is that the diagnosis is separated from the treatment. In other words, although physicians are confident that antibiotics benefit patients who have what Sir William Osler would have called pneumonia (elevated white blood cell count, fever, cough, dyspnea, pleurisy, egophony, lobar infiltrate), we don’t know whether the treatment benefits patients whose pneumonia would have been unrecognizable decades ago (with cough, low-grade fever, and infiltrate on CT alone). Improvements in imaging may exacerbate the problem. In this sense, pneumonia exists on a spectrum, as do many medical diagnoses. Not all cases are equally severe, and some may not deserve to be labeled as pneumonia.

No randomized trial has compared antibiotics against supportive care in pneumonia, and, likely, no such trial is needed for clear cases. However, with the growing number of soft diagnoses, randomized trials are desperately needed to delineate where harms outweigh benefits, and where the fuzzy edge of the pneumonia diagnosis must end. And as is always the case with studies that challenge a standard of care, null results should prompt further trials.

WELL-DESIGNED TRIALS COULD END THE UNCERTAINTY

In the next few years, clinical trials, rationally planned, may end most of the uncertainty regarding pneumonia.

Existing observational data may be used to identify groups of patients who, in today’s world, are diagnosed with pneumonia but who do exceptionally well (eg, younger patients with fewer comorbidities, who present with low-grade fever but no signs of consolidation on physical examination, and with dubious results on chest radiography). These are patients for whom equipoise exists, and randomized trials should compare a strategy of antibiotics with a strategy of best supportive care. Trials should be powered for patient-centered outcomes, such as the duration and the complications of illness. The death rate should be scrupulously recorded.

Patients whose pneumonia would have been unrecognizable decades ago should be another target population for the trials I propose.

In a short time, pneumonia may become synonymous with a set of factors for lung infection that predict who will benefit from antibiotics, and who can be safely followed. Already, we are moving toward this standard in other diseases.9 For pulmonary embolism, ongoing trials are testing whether anticoagulation can be safely omitted in patients with subsegmental clots (clinicaltrials.gov identifier NCT01455818). Such trials are, at last, translating old diagnoses into the language of evidence-based medicine.

For patients with pneumonia who are not hospitalized, current outpatient therapy rests on studies showing a low rate of treatment failure with empiric therapy directed at the common pathogens for this condition, with few patients subsequently requiring hospitalization. Today, this reasoning is inadequate. The basis for any therapy must be proven benefit for patients with a defined condition compared with a lesser strategy. Data already demonstrate that a short course of antibiotics is no worse than a long course for many hospitalized patients and outpatients with pneumonia,10,11 but many other patients may require no treatment at all. The time has come to find out.

References
  1. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med 2008; 168:351–356.
  2. Marrie TJ, Huang JQ. Low-risk patients admitted with community-acquired pneumonia. Am J Med 2005; 118:1357–1363.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988–2002. JAMA 2005; 294:2712–2719.
  4. Trotter CL, Stuart JM, George R, Miller E. Increasing hospital admissions for pneumonia, England. Emerg Infect Dis 2008; 14:727–733.
  5. Lindenauer PK, Lagu T, Shieh MS, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003–2009. JAMA 2012; 307:1405–1413.
  6. Bartlett JG. Diagnostic tests for agents of community-acquired pneumonia. Clin Infect Dis 2011; 52(suppl 4):S296–S304.
  7. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med 1997; 336:242–250.
  8. Lim W, van der Eerden MM, Laing R, et al. Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study. Thorax 2003; 58:377–382.
  9. Prasad V, Rho J, Cifu A. The diagnosis and treatment of pulmonary embolism: a metaphor for medicine in the evidence-based medicine era. Arch Intern Med 2012; 172:955–958.
  10. Singh N, Rogers P, Atwood CW, Wagener MM, Yu VL. Short-course empiric antibiotic therapy for patients with pulmonary infiltrates in the intensive care unit. A proposed solution for indiscriminate antibiotic prescription. Am J Respir Crit Care Med 2000; 162:505–511.
  11. Li JZ, Winston LG, Moore DH, Bent S. Efficacy of short-course antibiotic regimens for community-acquired pneumonia: a meta-analysis. Am J Med 2007; 120:783–790.

The conundrum of cost-effectiveness

Article Type
Changed
Thu, 03/28/2019 - 16:14
Display Headline
The conundrum of cost-effectiveness

Drs. Udeh and Udeh attempt to highlight the “straw man” nature of my argument and the inaccuracies of my piece, but they ultimately disprove none of my claims.

Regarding vertebroplasty—a procedure that never worked better than a sham one—the authors do not fault the cost-effectiveness analysis for getting it wrong, but rather early clinical studies that provided false confidence. Yet, as a matter of fact, both were wrong. Cost-effectiveness analyses cannot be excused because they are based on faulty assumptions or poor data. This is precisely the reason they should be faulted. If incorrect cost-effectiveness analyses cannot be blamed because clinical data are flawed, can incorrect clinical research blame its shortcomings on promising preclinical data?

Cost-effectiveness analyses continue to be published regarding interventions that lack even a single randomized controlled trial showing efficacy, despite the authors’ assertion that no one would do that. Favorable cost profiles have been found for diverse, unproven interventions such as transarterial chemoembolization,1 surgical laminectomy,2 and rosiglitazone (Avandia).3 Udeh and Udeh hold an untenable position, arguing that such analyses are ridiculous and would not be performed (such as a study of antibiotics to treat the common cold), while dismissing counterexamples (vertebroplasty), contending they are moot. The fact is that flawed cost-effectiveness studies are performed. They are often in error, and they distort our discussions of funding and approval.

Regarding exemestane (Aromasin), the authors miss the distinction between disease-specific death and overall mortality. Often, therapies lower the death rate from a particular disease but do not increase the overall survival rate. Typically, in these situations, we attribute the discrepancy to a lack of power, but an alternative hypothesis is that some death rates (eg, from cancer) decrease, while others (eg, from cardiovascular disease) increase, resulting in no net benefit. My comment regarding primary prevention studies is that unless the overall mortality rate is improved, one may continue to wonder whether this phenomenon—trading death—is occurring. As a result, cost-effectiveness analyses performed on these data may reach false conclusions. The authors’ fatalistic interpretation of my comments is not what I intended and is much more like a straw man.

Lastly, some of the difficulties in reconciling costs from randomized trials and actual clinical practice would be improved if clinical trials included participants who were more like the patients who would ultimately use the therapy. Such pragmatic trials would be a boon to the validity of research science4 and the accuracy of cost-effectiveness studies. I doubt that decision analytic modeling alone can overcome the problems I highlight. Two decades ago, we learned—from cost-effectiveness studies of autologous bone marrow transplantation in breast cancer—that decision analysis could not overcome major deficits in evidence.5 Autologous bone marrow transplantation is cost-effective—well, assuming it works.

We need cost-effectiveness studies to help us prioritize among countless emerging medical practices. However, we also need those analyses to be accurate. The examples I highlighted show common ways we err. The two rules I propose in my original commentary6 are not obvious to all, and they continue to be ignored. As such, cost-effectiveness still resembles apples and oranges.

References
  1. Whitney R, Vàlek V, Fages JF, et al. Transarterial chemoembolization and selective internal radiation for the treatment of patients with metastatic neuroendocrine tumors: a comparison of efficacy and cost. Oncologist 2011; 16:594–601.
  2. Burnett MG, Stein SC, Bartels RH. Cost-effectiveness of current treatment strategies for lumbar spinal stenosis: nonsurgical care, laminectomy, and X-STOP. J Neurosurg Spine 2010; 13:39–46.
  3. Beale S, Bagust A, Shearer AT, Martin A, Hulme L. Cost-effectiveness of rosiglitazone combination therapy for the treatment of type 2 diabetes mellitus in the UK. Pharmacoeconomics 2006; 24(suppl 1):21–34.
  4. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307:37–38.
  5. Hillner BE, Smith TJ, Desch CE. Efficacy and cost-effectiveness of autologous bone marrow transplantation in metastatic breast cancer. Estimates using decision analysis while awaiting clinical trial results. JAMA 1992; 267:2055–2061.
  6. Prasad V. The apples and oranges of cost-effectiveness. Cleve Clin J Med 2012; 79:377–379.

The apples and oranges of cost-effectiveness

Article Type
Changed
Thu, 03/28/2019 - 16:18
Display Headline
The apples and oranges of cost-effectiveness

Measures of cost-effectiveness are used to compare the merits of diverse medical interventions. A novel drug for metastatic melanoma, for instance, can be compared with statin therapy for primary prevention of cardiovascular events, which in turn can be compared against a surgical procedure for pain, as all are described by a single number: dollars per life-year (or quality-adjusted life-year) gained. Presumably, this number tells practitioners and payers which interventions provide the most benefit for every dollar spent.
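
The arithmetic behind that single number is simple; the sketch below (Python) shows it with invented costs and benefits, not figures from any study. The same calculation also shows why the metric breaks down when the benefit is zero, a point returned to later.

```python
def cost_per_life_year(extra_cost: float, life_years_gained: float) -> float:
    """Incremental cost-effectiveness: extra dollars spent per life-year gained
    versus the comparator. Diverges as the benefit approaches zero."""
    if life_years_gained <= 0:
        return float("inf")
    return extra_cost / life_years_gained

# Invented numbers, for illustration only.
print(cost_per_life_year(extra_cost=60_000, life_years_gained=2.0))  # 30000.0
print(cost_per_life_year(extra_cost=2_000, life_years_gained=0.1))   # 20000.0
print(cost_per_life_year(extra_cost=5_000, life_years_gained=0.0))   # inf (no proven benefit)
```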

However, too often, studies of cost-effectiveness differ from one another. They can be based on data from different types of studies, such as randomized controlled trials, surveys of large payer databases, or single-center chart reviews. The comparison treatments may differ. And the treatments may be of unproven efficacy. In these cases, although the results are all expressed in dollars per life-year, we are comparing apples and oranges.

In the following discussion, I use three key contemporary examples to demonstrate problems central to cost-effectiveness analysis. Together, these examples show that cost-effectiveness, arguably our best tool for comparing apples and oranges, is a lot like apples and oranges itself. I conclude by proposing some solutions.

PROBLEMS WITH COST-EFFECTIVENESS: THREE EXAMPLES

Studies of three therapies highlight the dilemma of cost-effectiveness.

Example 1: Vertebroplasty

Studies of vertebroplasty, a treatment for osteoporotic vertebral fractures that involves injecting polymethylmethacrylate cement into the fractured bone, show the perils of calculating the cost-effectiveness of unproven therapies.

Vertebroplasty gained prominence during the first decade of the 2000s, but in 2009 it was found to be no better than a sham procedure.1,2

In 2008, one study reported that vertebroplasty was cheaper than medical management at 12 months and, thus, cost-effective.3 While this finding was certainly true for the regimen of medical management the authors examined, and while it may very well be true for other protocols for medical management, the finding obscures the fact that a sham procedure would be more cost-effective than either vertebroplasty or medical therapy—an unsettling conclusion.

Example 2: Exemestane

Another dilemma occurs when we can calculate cost-effectiveness for a particular outcome only.

Studies of exemestane (Aromasin), an aromatase inhibitor given to prevent breast cancer, show the difficulty. Recently, exemestane was shown to decrease the rate of breast cancer when used as primary prevention in postmenopausal women.4 What is the cost-effectiveness of this therapy?

While we can calculate the dollars per invasive breast cancer averted, we cannot accurately calculate the dollars per life-year gained, as the trial’s end point was not the mortality rate. We can assume that the breast cancer deaths avoided are not negated by deaths incurred through other causes, but this may or may not prove true. Fibrates, for instance, may reduce the rate of cardiovascular death but increase deaths from noncardiac causes, providing no net benefit.5 Such long-term effects remain unknown in the breast cancer study.
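
A toy calculation shows how a disease-specific end point can mislead; every number below is invented solely to illustrate the possibility and is not an estimate for exemestane.

```python
# Hypothetical cohort of 10,000 women followed for 5 years (all numbers invented).
baseline_breast_cancer_deaths = 40
baseline_other_deaths = 360

# Suppose a preventive drug halves breast cancer deaths...
treated_breast_cancer_deaths = 20
# ...but causes a small excess of deaths from other causes.
treated_other_deaths = 380

print("Breast cancer deaths:", baseline_breast_cancer_deaths, "->", treated_breast_cancer_deaths)
print("All-cause deaths:    ",
      baseline_breast_cancer_deaths + baseline_other_deaths, "->",
      treated_breast_cancer_deaths + treated_other_deaths)
# Disease-specific mortality improves (40 -> 20) while overall mortality is
# unchanged (400 -> 400): dollars per breast cancer death averted looks
# favorable, while dollars per life saved is undefined.
```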

Example 3: COX-2 inhibitors

Estimates of cost-effectiveness derived from randomized trials can differ from those derived from real-world studies. Studies of cyclooxygenase 2 (COX-2) inhibitors, which were touted as causing less gastrointestinal bleeding than other nonsteroidal anti-inflammatory drugs, show that cost-effectiveness analyses performed from randomized trials may not mirror dollars spent in real-world practice.

Estimates from randomized controlled trials indicate that a COX-2 inhibitor such as celecoxib (Celebrex) costs $20,000 to prevent one gastrointestinal hemorrhage. However, when calculated using real-world data, that number rises to over $100,000.6
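
The gap between the two estimates is easy to reproduce with a back-of-the-envelope model. The sketch below uses invented drug costs, bleeding risks, and adherence effects (not the figures of van Staa et al6) purely to show how a lower-risk, less-adherent real-world population inflates the cost per event prevented.

```python
def cost_per_bleed_prevented(annual_drug_cost: float,
                             baseline_bleed_risk: float,
                             relative_risk_reduction: float) -> float:
    """Dollars spent on the drug for one year per gastrointestinal bleed
    prevented: annual cost divided by the absolute risk reduction."""
    absolute_risk_reduction = baseline_bleed_risk * relative_risk_reduction
    return annual_drug_cost / absolute_risk_reduction

# Trial-like conditions: higher-risk patients, full adherence (invented numbers).
print(round(cost_per_bleed_prevented(400, baseline_bleed_risk=0.04,
                                     relative_risk_reduction=0.50)))  # 20000

# Real-world conditions: lower-risk users and imperfect adherence dilute the effect.
print(round(cost_per_bleed_prevented(400, baseline_bleed_risk=0.01,
                                     relative_risk_reduction=0.35)))  # 114286
```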

TWO PROPOSED RULES FOR COST-EFFECTIVENESS ANALYSES

How do we reconcile these and related puzzles of cost-effectiveness? First, we should agree on what type of “cost-effectiveness” we are interested in. Most often, we want to know whether the real-world use of a therapy is financially rational. Thus, we are concerned with the effectiveness of therapies and not merely their efficacy in idealized clinical trials.

Furthermore, while real-world cost-effectiveness may change over time, particularly as pricing and delivery vary, we want some assurance that the therapy is truly better than placebo. Therefore, we should only calculate the cost-effectiveness of therapies that have previously demonstrated efficacy in properly controlled, randomized studies.7

To correct the deficiencies noted here, I propose two rules:

  • Cost-effectiveness should be calculated only for therapies that have been proven to work, and
  • These calculations should be done from the best available real-world data.

When both these conditions are met—ie, a therapy has proven efficacy, and we have data from its real-world use—cost-effectiveness analysis provides useful information for payers and practitioners. Then, indeed, a novel anticancer agent costing $30,000 per life-year gained can be compared against primary prevention with statin therapy in patients at elevated cardiovascular risk costing $20,000 per life-year gained.

CAN PREVENTION BE COMPARED WITH TREATMENT?

This leaves us with the final and most difficult question. Is it right to compare such things?

Having terminal cancer is a different experience from having high cholesterol, and this is the last apple and orange of cost-effectiveness. While a strict utilitarian view of medicine might find these cases indistinguishable, most practitioners and payers are not strict utilitarians. As a society, we tend to favor paying more to treat someone who is ill over paying an equivalent amount to prevent illness. Often, such a stance is criticized as a failure to invest in prevention and primary care, but another explanation is that the bias is a fundamental one of human risk-taking.

Cost-effectiveness is, to a certain degree, a slippery concept, and it is more likely to be “off” when a therapy is given broadly (to hundreds of thousands of people as opposed to hundreds) and taken in a decentralized fashion by individual patients (as opposed to directly observed therapy in an infusion suite). Accordingly, we may favor more expensive therapies, the cost-effectiveness of which can be estimated more precisely.

A recent meta-analysis of statins for primary prevention in high-risk patients found that they were not associated with improvement in the overall rate of death.8 Such a finding dramatically alters our impression of their cost-effectiveness and may explain the bias against investing in such therapies in the first place.

IMPROVING COST-EFFECTIVENESS RESEARCH

Studies of cost-effectiveness are not equivalent. Currently, such studies are apples and oranges, making difficult the very comparison that cost-effectiveness should facilitate. Knowing that a therapy is efficacious should be prerequisite to cost-effectiveness calculations, as should performing calculations under real-world conditions.

Regarding efficacy, it is inappropriate to calculate cost-effectiveness from trials that use only surrogate end points, or those that are improperly controlled.

For example, adding extended-release niacin to statin therapy may raise high-density lipoprotein cholesterol levels by 25%. Such an increase is, in turn, expected to confer a certain reduction in cardiovascular events and death. Thus, the cost-effectiveness of niacin might be calculated as $20,000 per life-year saved. However, adding extended-release niacin to statin therapy does not improve hard outcomes when directly measured,9 and the therapy is not efficacious at all. Its true “dollars per life-year saved” approaches infinity.

Studies that use historical controls, are observational, and are performed at single centers may also mislead us regarding a therapy’s efficacy. Tight glycemic control in intensive care patients initially seemed promising10,11 and cost-effective.12 However, several years later it was found to increase the mortality rate.13

“Real world” means that the best measures of cost-effectiveness will calculate the cost per life saved that the therapy achieves in clinical practice. Adherence to COX-2 inhibitors may not be as strict in the real world as it is in the carefully selected participants in randomized controlled trials, and, thus, the true costs may be higher. A drug that prevents breast cancer may have countervailing effects that are as yet unknown, or compliance with it may wane over years. Thus, the most accurate measures of cost-effectiveness will examine therapies as they actually perform in typical practice and will likely be derived from the data sets of large payers or providers.

Finally, it remains an open and contentious issue whether the cost-effectiveness of primary prevention and the cost-effectiveness of treatment are comparable at all. We must continue to ponder and debate this philosophical question.

Certainly, these are the challenges of cost-effectiveness. Equally certain is that—with renewed consideration of the goals of such research, with stricter standards for future studies, and in an economic and political climate unable to sustain the status quo—the challenges must be surmounted.

References
  1. Kallmes DF, Comstock BA, Heagerty PJ, et al. A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361:569–579.
  2. Buchbinder R, Osborne RH, Ebeling PR, et al. A randomized trial of vertebroplasty for painful osteoporotic vertebral fractures. N Engl J Med 2009; 361:557–568.
  3. Masala S, Ciarrapico AM, Konda D, Vinicola V, Mammucari M, Simonetti G. Cost-effectiveness of percutaneous vertebroplasty in osteoporotic vertebral fractures. Eur Spine J 2008; 17:1242–1250.
  4. Goss PE, Ingle JN, Alés-Martínez JE, et al; NCIC CTG MAP3 Study Investigators. Exemestane for breast-cancer prevention in postmenopausal women. N Engl J Med 2011; 364:2381–2391.
  5. Studer M, Briel M, Leimenstoll B, Glass TR, Bucher HC. Effect of different antilipidemic agents and diets on mortality: a systematic review. Arch Intern Med 2005; 165:725–730.
  6. van Staa TP, Leufkens HG, Zhang B, Smeeth L. A comparison of cost effectiveness using data from randomized trials or actual clinical practice: selective cox-2 inhibitors as an example. PLoS Med 2009; 6:e1000194.
  7. Prasad V, Cifu A. A medical burden of proof: towards a new ethic. BioSocieties 2012; 7:72–87.
  8. Ray KK, Seshasai SR, Erqou S, et al. Statins and all-cause mortality in high-risk primary prevention: a meta-analysis of 11 randomized controlled trials involving 65,229 participants. Arch Intern Med 2010; 170:1024–1031.
  9. National Heart, Lung, and Blood Institute, National Institutes of Health. AIM-HIGH: Blinded treatment phase of study stopped. http://www.aimhigh-heart.com/. Accessed January 31, 2012.
  10. van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in critically ill patients. N Engl J Med 2001; 345:1359–1367.
  11. Van den Berghe G, Wilmer A, Hermans G, et al. Intensive insulin therapy in the medical ICU. N Engl J Med 2006; 354:449–461.
  12. Mesotten D, Van den Berghe G. Clinical potential of insulin therapy in critically ill patients. Drugs 2003; 63:625–636.
  13. NICE-SUGAR Study Investigators; Finfer S, Chittock DR, Su SY, et al. Intensive versus conventional glucose control in critically ill patients. N Engl J Med 2009; 360:1283–1297.
Author and Disclosure Information

Vinay Prasad, MD
Department of Medicine, Northwestern University, Chicago, IL

Address: Vinay Prasad, MD, Department of Medicine, Northwestern University, 333 E. Ontario, Suite 901B, Chicago, IL 60611; e-mail [email protected]
