The apples and oranges of cost-effectiveness

Measures of cost-effectiveness are used to compare the merits of diverse medical interventions. A novel drug for metastatic melanoma, for instance, can be compared with statin therapy for primary prevention of cardiovascular events, which in turn can be compared against a surgical procedure for pain, as all are described by a single number: dollars per life-year (or quality-adjusted life-year) gained. Presumably, this number tells practitioners and payers which interventions provide the most benefit for every dollar spent.

However, too often, studies of cost-effectiveness differ from one another. They can be based on data from different types of studies, such as randomized controlled trials, surveys of large payer databases, or single-center chart reviews. The comparison treatments may differ. And the treatments may be of unproven efficacy. In these cases, although the results are all expressed in dollars per life-year, we are comparing apples and oranges.

In the following discussion, I use three key contemporary examples to demonstrate problems central to cost-effectiveness analysis. Together, these examples show that cost-effectiveness, arguably our best tool for comparing apples and oranges, is a lot like apples and oranges itself. I conclude by proposing some solutions.

PROBLEMS WITH COST-EFFECTIVENESS: THREE EXAMPLES

Studies of three therapies highlight the dilemma of cost-effectiveness.

Example 1: Vertebroplasty

Studies of vertebroplasty, a treatment for osteoporotic vertebral fractures that involves injecting polymethylmethacrylate cement into the fractured bone, show the perils of calculating the cost-effectiveness of unproven therapies.

Vertebroplasty gained prominence during the first decade of the 2000s, but in 2009 it was found to be no better than a sham procedure.1,2

In 2008, one study reported that vertebroplasty was cheaper than medical management at 12 months and, thus, cost-effective.3 While this finding was certainly true for the regimen of medical management the authors examined, and while it may very well be true for other protocols for medical management, the finding obscures the fact that a sham procedure would be more cost-effective than either vertebroplasty or medical therapy—an unsettling conclusion.

Example 2: Exemestane

Another dilemma occurs when we can calculate cost-effectiveness for a particular outcome only.

Studies of exemestane (Aromasin), an aromatase inhibitor given to prevent breast cancer, show the difficulty. Recently, exemestane was shown to decrease the rate of breast cancer when used as primary prevention in postmenopausal women.4 What is the cost-effectiveness of this therapy?

While we can calculate the dollars per invasive breast cancer averted, we cannot accurately calculate the dollars per life-year gained, as the trial’s end point was not the mortality rate. We can assume that the breast cancer deaths avoided are not negated by deaths incurred through other causes, but this may or may not prove true. Fibrates, for instance, may reduce the rate of cardiovascular death but increase deaths from noncardiac causes, providing no net benefit.5 Such long-term effects remain unknown in the breast cancer study.

Example 3: COX-2 inhibitors

Estimates of cost-effectiveness derived from randomized trials can differ from those derived from real-world studies. Studies of cyclooxygenase 2 (COX-2) inhibitors, which were touted as causing less gastrointestinal bleeding than other nonsteroidal anti-inflammatory drugs, show that cost-effectiveness analyses performed from randomized trials may not mirror dollars spent in real-world practice.

Estimates from randomized controlled trials indicate that a COX-2 inhibitor such as celecoxib (Celebrex) costs $20,000 to prevent one gastrointestinal hemorrhage. However, when calculated using real-world data, that number rises to over $100,000.6
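The gap between trial-based and real-world figures comes down to simple arithmetic: the cost per event averted is the incremental spending divided by the absolute risk reduction actually achieved. The sketch below uses invented numbers (a hypothetical $400 incremental cost and illustrative risk reductions), not figures from the cited study, to show how lower real-world adherence and baseline risk inflate the ratio fivefold.

```python
def cost_per_event_averted(incremental_cost_per_patient, absolute_risk_reduction):
    """Dollars spent per adverse event prevented.

    incremental_cost_per_patient: extra dollars per patient for the new
        therapy versus the comparator.
    absolute_risk_reduction: reduction in event probability per patient.
    """
    if absolute_risk_reduction <= 0:
        return float("inf")  # no benefit: cost per event is unbounded
    return incremental_cost_per_patient / absolute_risk_reduction

# Hypothetical numbers: in a trial, $400 extra per patient averts bleeds
# at a 2% absolute risk reduction; in practice, poorer adherence and a
# lower-risk population shrink the reduction to 0.4%.
trial = cost_per_event_averted(400, 0.02)
real_world = cost_per_event_averted(400, 0.004)
print(f"trial: ${trial:,.0f} per bleed; real world: ${real_world:,.0f} per bleed")
```

The same formula with the same numerator gives a very different answer once the denominator reflects clinical practice rather than trial conditions.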

TWO PROPOSED RULES FOR COST-EFFECTIVENESS ANALYSES

How do we reconcile these and related puzzles of cost-effectiveness? First, we should agree on what type of “cost-effectiveness” we are interested in. Most often, we want to know whether the real-world use of a therapy is financially rational. Thus, we are concerned with the effectiveness of therapies and not merely their efficacy in idealized clinical trials.

Furthermore, while real-world cost-effectiveness may change over time, particularly as pricing and delivery vary, we want some assurance that the therapy is truly better than placebo. Therefore, we should calculate the cost-effectiveness only of therapies that have previously demonstrated efficacy in properly controlled, randomized studies.7

To correct the deficiencies noted here, I propose two rules:

  • Cost-effectiveness should be calculated only for therapies that have been proven to work, and
  • These calculations should be done from the best available real-world data.

When both these conditions are met—ie, a therapy has proven efficacy, and we have data from its real-world use—cost-effectiveness analysis provides useful information for payers and practitioners. Then, indeed, a novel anticancer agent costing $30,000 per life-year gained can be compared against primary prevention with statin therapy in patients at elevated cardiovascular risk costing $20,000 per life-year gained.


CAN PREVENTION BE COMPARED WITH TREATMENT?

This leaves us with the final and most difficult question. Is it right to compare such things?

Having terminal cancer is a different experience from having high cholesterol, and this is the last apple and orange of cost-effectiveness. While a strict utilitarian view of medicine might find these cases indistinguishable, most practitioners and payers are not strict utilitarians. As a society, we tend to favor paying more to treat someone who is ill than paying an equivalent amount to prevent illness. Often, such a stance is criticized as a failure to invest in prevention and primary care, but another explanation is that the bias reflects a fundamental feature of human risk-taking.

Cost-effectiveness is, to a certain degree, a slippery concept, and it is more likely to be “off” when a therapy is given broadly (to hundreds of thousands of people as opposed to hundreds) and taken in a decentralized fashion by individual patients (as opposed to directly observed therapy in an infusion suite). Accordingly, we may favor more expensive therapies, the cost-effectiveness of which can be estimated more precisely.

A recent meta-analysis of statins for primary prevention in high-risk patients found that they were not associated with improvement in the overall rate of death.8 Such a finding dramatically alters our impression of their cost-effectiveness and may explain the bias against investing in such therapies in the first place.

IMPROVING COST-EFFECTIVENESS RESEARCH

Studies of cost-effectiveness are not equivalent. Currently, such studies are apples and oranges, making difficult the very comparison that cost-effectiveness should facilitate. Knowing that a therapy is efficacious should be a prerequisite to cost-effectiveness calculations, as should performing calculations under real-world conditions.

Regarding efficacy, it is inappropriate to calculate cost-effectiveness from trials that use only surrogate end points, or those that are improperly controlled.

For example, adding extended-release niacin to statin therapy may raise high-density lipoprotein cholesterol levels by 25%. Such an increase is, in turn, expected to confer a certain reduction in cardiovascular events and death. Thus, the cost-effectiveness of niacin might be calculated as $20,000 per life-year saved. However, adding extended-release niacin to statin therapy does not improve hard outcomes when directly measured,9 and the therapy is not efficacious at all. Its true “dollars per life-year saved” approaches infinity.
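The failure of a surrogate-based estimate can be sketched directly. The numbers below are invented for illustration: a cost-per-life-year figure computed from a projected survival benefit looks reasonable, but recomputing with the directly measured benefit of zero sends the ratio to infinity.

```python
def dollars_per_life_year(annual_cost, years_of_therapy, life_years_gained):
    """Cost-effectiveness ratio in dollars per life-year saved.

    Returns infinity when the therapy confers no measured survival benefit.
    """
    total_cost = annual_cost * years_of_therapy
    if life_years_gained <= 0:
        return float("inf")
    return total_cost / life_years_gained

# Projected from the surrogate (hypothetical): 10 years of a $1,000/yr
# drug, assumed to yield 0.5 life-years per patient.
projected = dollars_per_life_year(1000, 10, 0.5)   # 20000.0
# Measured directly: no improvement in hard outcomes, so 0 life-years.
measured = dollars_per_life_year(1000, 10, 0.0)    # inf
print(projected, measured)
```

The denominator, not the price, is what collapses: the same total spending yields a finite ratio only if the projected benefit is real.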

Studies that use historical controls, are observational, or are performed at single centers may also mislead us regarding a therapy’s efficacy. Tight glycemic control in intensive care patients initially seemed promising10,11 and cost-effective.12 However, several years later it was found to increase the mortality rate.13

“Real world” means that the best measures of cost-effectiveness will calculate the cost per life saved that the therapy achieves in clinical practice. Adherence to COX-2 inhibitors may not be as strict in the real world as it is among the carefully selected participants in randomized controlled trials, and, thus, the true costs may be higher. A drug that prevents breast cancer may have countervailing effects that are as yet unknown, or compliance with it may wane over the years. Thus, the most accurate measures of cost-effectiveness will examine therapies as they actually function in typical practice and will likely be derived from the data sets of large payers or providers.

Finally, it remains an open and contentious issue whether the cost-effectiveness of primary prevention and the cost-effectiveness of treatment are comparable at all. We must continue to ponder and debate this philosophical question.

Certainly, these are the challenges of cost-effectiveness. Equally certain is that—with renewed consideration of the goals of such research, with stricter standards for future studies, and in an economic and political climate unable to sustain the status quo—the challenges must be surmounted.

References
  1. Kallmes DF, Comstock BA, Heagerty PJ, et al. A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361:569-579.
  2. Buchbinder R, Osborne RH, Ebeling PR, et al. A randomized trial of vertebroplasty for painful osteoporotic vertebral fractures. N Engl J Med 2009; 361:557-568.
  3. Masala S, Ciarrapico AM, Konda D, Vinicola V, Mammucari M, Simonetti G. Cost-effectiveness of percutaneous vertebroplasty in osteoporotic vertebral fractures. Eur Spine J 2008; 17:1242-1250.
  4. Goss PE, Ingle JN, Alés-Martínez JE, et al; NCIC CTG MAP3 Study Investigators. Exemestane for breast-cancer prevention in postmenopausal women. N Engl J Med 2011; 364:2381-2391.
  5. Studer M, Briel M, Leimenstoll B, Glass TR, Bucher HC. Effect of different antilipidemic agents and diets on mortality: a systematic review. Arch Intern Med 2005; 165:725-730.
  6. van Staa TP, Leufkens HG, Zhang B, Smeeth L. A comparison of cost effectiveness using data from randomized trials or actual clinical practice: selective cox-2 inhibitors as an example. PLoS Med 2009; 6:e1000194.
  7. Prasad V, Cifu A. A medical burden of proof: towards a new ethic. BioSocieties 2012; 7:72-87.
  8. Ray KK, Seshasai SR, Erqou S, et al. Statins and all-cause mortality in high-risk primary prevention: a meta-analysis of 11 randomized controlled trials involving 65,229 participants. Arch Intern Med 2010; 170:1024-1031.
  9. National Heart, Lung, and Blood Institute National Institutes of Health. AIM-HIGH: Blinded treatment phase of study stopped. http://www.aimhigh-heart.com/. Accessed January 31, 2012.
  10. van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in critically ill patients. N Engl J Med 2001; 345:1359-1367.
  11. Van den Berghe G, Wilmer A, Hermans G, et al. Intensive insulin therapy in the medical ICU. N Engl J Med 2006; 354:449-461.
  12. Mesotten D, Van den Berghe G. Clinical potential of insulin therapy in critically ill patients. Drugs 2003; 63:625-636.
  13. NICE-SUGAR Study Investigators; Finfer S, Chittock DR, Su SY, et al. Intensive versus conventional glucose control in critically ill patients. N Engl J Med 2009; 360:1283-1297.
Author and Disclosure Information

Vinay Prasad, MD
Department of Medicine, Northwestern University, Chicago, IL

Address: Vinay Prasad, MD, Department of Medicine, Northwestern University, 333 E. Ontario, Suite 901B, Chicago, IL 60611; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 79(6)
Page Number
377-379
Sections
Author and Disclosure Information

Vinay Prasad, MD
Department of Medicine, Northwestern University, Chicago, IL

Address: Vinay Prasad, MD, Deparment of Medicine, Northwestern University, 333 E. Ontario, Suite 901B, Chicago, IL 60611; e-mail [email protected]

Author and Disclosure Information

Vinay Prasad, MD
Department of Medicine, Northwestern University, Chicago, IL

Address: Vinay Prasad, MD, Deparment of Medicine, Northwestern University, 333 E. Ontario, Suite 901B, Chicago, IL 60611; e-mail [email protected]

Article PDF
Article PDF

Measures of cost-effectiveness are used to compare the merits of diverse medical interventions. A novel drug for metastatic melanoma, for instance, can be compared with statin therapy for primary prevention of cardiovascular events, which in turn can be compared against a surgical procedure for pain, as all are described by a single number: dollars per life-year (or quality-adjusted life-year) gained. Presumably, this number tells practitioners and payers which interventions provide the most benefit for every dollar spent.

However, too often, studies of cost-effectiveness differ from one another. They can be based on data from different types of studies, such as randomized controlled trials, surveys of large payer databases, or single-center chart reviews. The comparison treatments may differ. And the treatments may be of unproven efficacy. In these cases, although the results are all expressed in dollars per life-year, we are comparing apples and oranges.

In the following discussion, I use three key contemporary examples to demonstrate problems central to cost-effectiveness analysis. Together, these examples show that cost-effectiveness, arguably our best tool for comparing apples and oranges, is a lot like apples and oranges itself. I conclude by proposing some solutions.

PROBLEMS WITH COST-EFFECTIVENESS: THREE EXAMPLES

Studies of three therapies highlight the dilemma of cost-effectiveness.

Example 1: Vertebroplasty

Studies of vertebroplasty, a treatment for osteoporotic vertebral fractures that involves injecting polymethylmethacrylate cement into the fractured bone, show the perils of calculating the cost-effectiveness of unproven therapies.

Vertebroplasty gained prominence during the first decade of the 2000s, but in 2009 it was found to be no better than a sham procedure.1,2

In 2008, one study reported that vertebroplasty was cheaper than medical management at 12 months and, thus, cost-effective.3 While this finding was certainly true for the regimen of medical management the authors examined, and while it may very well be true for other protocols for medical management, the finding obscures the fact that a sham procedure would be more cost-effective than either vertebroplasty or medical therapy—an unsettling conclusion.

Example 2: Exemestane

Another dilemma occurs when we can calculate cost-effectiveness for a particular outcome only.

Studies of exemestane (Aromasin), an aromatase inhibitor given to prevent breast cancer, show the difficulty. Recently, exemestane was shown to decrease the rate of breast cancer when used as primary prevention in postmenopausal women.4 What is the costeffectiveness of this therapy?

While we can calculate the dollars per invasive breast cancer averted, we cannot accurately calculate the dollars per life-year gained, as the trial’s end point was not the mortality rate. We can assume that the breast cancer deaths avoided are not negated by deaths incurred through other causes, but this may or may not prove true. Fibrates, for instance, may reduce the rate of cardiovascular death but increase deaths from noncardiac causes, providing no net benefit.5 Such long-term effects remain unknown in the breast cancer study.

Example 3: COX-2 inhibitors

Estimates of cost-effectiveness derived from randomized trials can differ from those derived from real-world studies. Studies of cyclooxygenase 2 (COX-2) inhibitors, which were touted as causing less gastrointestinal bleeding than other nonsteroidal anti-inflammatory drugs, show that cost-effectiveness analyses performed from randomized trials may not mirror dollars spent in real-world practice.

Estimates from randomized controlled trials indicate that a COX-2 inhibitor such as celecoxib (Celebrex) costs $20,000 to prevent one gastrointestinal hemorrhage. However, when calculated using real-world data, that number rises to over $100,000.6

TWO PROPOSED RULES FOR COST-EFFECTIVENESS ANALYSES

How do we reconcile these and related puzzles of cost-effectiveness? First, we should agree on what type of “cost-effectiveness” we are interested in. Most often, we want to know whether the real-world use of a therapy is financially rational. Thus, we are concerned with the effectiveness of therapies and not merely their efficacy in idealized clinical trials.

Furthermore, while real-world cost-effectiveness may change over time, particularly as pricing and delivery vary, we want some assurance that the therapy is truly better than placebo. Therefore, we should only calculate the cost-effectiveness of therapies that have previously demonstrated efficacy in properly controlled, randomized studies.7

To correct the deficiencies noted here, I propose two rules:

  • Cost-effectiveness should be calculated only for therapies that have been proven to work, and
  • These calculations should be done from the best available real-world data.

When both these conditions are met—ie, a therapy has proven efficacy, and we have data from its real-world use—cost-effectiveness analysis provides useful information for payers and practitioners. Then, indeed, a novel anticancer agent costing $30,000 per life-year gained can be compared against primary prevention with statin therapy in patients at elevated cardiovascular risk costing $20,000 per life-year gained.

 

 

CAN PREVENTION BE COMPARED WITH TREATMENT?

This leaves us with the final and most difficult question. Is it right to compare such things?

Having terminal cancer is a different experience than having high cholesterol, and this is the last apple and orange of cost-effectiveness. While a strict utilitarian view of medicine might find these cases indistinguishable, most practitioners and payers are not strict utilitarians. As a society, we tend to favor paying more to treat someone who is ill than paying an equivalent amount to prevent illness. Often, such a stance is criticized as a failure to invest in prevention and primary care, but another explanation is that the bias is a fundamental one of human risk-taking.

Cost-effectiveness is, to a certain degree, a slippery concept, and it is more likely to be “off” when a therapy is given broadly (to hundreds of thousands of people as opposed to hundreds) and taken in a decentralized fashion by individual patients (as opposed to directly observed therapy in an infusion suite). Accordingly, we may favor more expensive therapies, the cost-effectiveness of which can be estimated more precisely.

A recent meta-analysis of statins for primary prevention in high-risk patients found that they were not associated with improvement in the overall rate of death.8 Such a finding dramatically alters our impression of their cost-effectiveness and may explain the bias against investing in such therapies in the first place.

IMPROVING COST-EFFECTIVENESS RESEARCH

Studies of cost-effectiveness are not equivalent. Currently, such studies are apples and oranges, making difficult the very comparison that cost-effectiveness should facilitate. Knowing that a therapy is efficacious should be prerequisite to cost-effectiveness calculations, as should performing calculations under real-world conditions.

Regarding efficacy, it is inappropriate to calculate cost-effectiveness from trials that use only surrogate end points, or those that are improperly controlled.

For example, adding extended-release niacin to statin therapy may raise high-density lipoprotein cholesterol levels by 25%. Such an increase is, in turn, expected to confer a certain reduction in cardiovascular events and death. Thus, the cost-effectiveness of niacin might be calculated as $20,000 per life-year saved. However, adding extended-release niacin to statin therapy does not improve hard outcomes when directly measured,9 and the therapy is not efficacious at all. Its true “dollars per life-year saved” approaches infinity.

Studies that use historical controls, are observational, and are performed at single centers may also mislead us regarding a therapy’s efficacy. Tight glycemic control in intensive care patients initially seemed promising10,11 and cost-effective.12 However, several years later it was found to increase the mortality rate.13

“Real world” means that the best measures of cost-effectiveness will calculate the cost per life saved that the therapy achieves in clinical practice. Adherence to COX-2 inhibitors may not be as strict in the real world as it is in the carefully selected participants in randomized controlled trials, and, thus, the true costs may be higher. A drug that prevents breast cancer may have countervailing effects that may as yet be unknown, or compliance with it may wane over years. Thus, the most accurate measures of cost-effectiveness will examine therapies as best as they can function in typical practice and likely be derived from data sets of large payers or providers.

Finally, it remains an open and contentious issue whether the cost-effectiveness of primary prevention and the cost-effectiveness of treatment are comparable at all. We must continue to ponder and debate this philosophical question.

Certainly, these are the challenges of cost-effectiveness. Equally certain is that—with renewed consideration of the goals of such research, with stricter standards for future studies, and in an economic and political climate unable to sustain the status quo—the challenges must be surmounted.

Measures of cost-effectiveness are used to compare the merits of diverse medical interventions. A novel drug for metastatic melanoma, for instance, can be compared with statin therapy for primary prevention of cardiovascular events, which in turn can be compared against a surgical procedure for pain, as all are described by a single number: dollars per life-year (or quality-adjusted life-year) gained. Presumably, this number tells practitioners and payers which interventions provide the most benefit for every dollar spent.

However, too often, studies of cost-effectiveness differ from one another. They can be based on data from different types of studies, such as randomized controlled trials, surveys of large payer databases, or single-center chart reviews. The comparison treatments may differ. And the treatments may be of unproven efficacy. In these cases, although the results are all expressed in dollars per life-year, we are comparing apples and oranges.

In the following discussion, I use three key contemporary examples to demonstrate problems central to cost-effectiveness analysis. Together, these examples show that cost-effectiveness, arguably our best tool for comparing apples and oranges, is a lot like apples and oranges itself. I conclude by proposing some solutions.

PROBLEMS WITH COST-EFFECTIVENESS: THREE EXAMPLES

Studies of three therapies highlight the dilemma of cost-effectiveness.

Example 1: Vertebroplasty

Studies of vertebroplasty, a treatment for osteoporotic vertebral fractures that involves injecting polymethylmethacrylate cement into the fractured bone, show the perils of calculating the cost-effectiveness of unproven therapies.

Vertebroplasty gained prominence during the first decade of the 2000s, but in 2009 it was found to be no better than a sham procedure.1,2

In 2008, one study reported that vertebroplasty was cheaper than medical management at 12 months and, thus, cost-effective.3 While this finding was certainly true for the regimen of medical management the authors examined, and while it may very well be true for other protocols for medical management, the finding obscures the fact that a sham procedure would be more cost-effective than either vertebroplasty or medical therapy—an unsettling conclusion.

Example 2: Exemestane

Another dilemma occurs when we can calculate cost-effectiveness for a particular outcome only.

Studies of exemestane (Aromasin), an aromatase inhibitor given to prevent breast cancer, show the difficulty. Recently, exemestane was shown to decrease the rate of breast cancer when used as primary prevention in postmenopausal women.4 What is the costeffectiveness of this therapy?

While we can calculate the dollars per invasive breast cancer averted, we cannot accurately calculate the dollars per life-year gained, as the trial’s end point was not the mortality rate. We can assume that the breast cancer deaths avoided are not negated by deaths incurred through other causes, but this may or may not prove true. Fibrates, for instance, may reduce the rate of cardiovascular death but increase deaths from noncardiac causes, providing no net benefit.5 Such long-term effects remain unknown in the breast cancer study.

Example 3: COX-2 inhibitors

Estimates of cost-effectiveness derived from randomized trials can differ from those derived from real-world studies. Studies of cyclooxygenase 2 (COX-2) inhibitors, which were touted as causing less gastrointestinal bleeding than other nonsteroidal anti-inflammatory drugs, show that cost-effectiveness analyses performed from randomized trials may not mirror dollars spent in real-world practice.

Estimates from randomized controlled trials indicate that a COX-2 inhibitor such as celecoxib (Celebrex) costs $20,000 to prevent one gastrointestinal hemorrhage. However, when calculated using real-world data, that number rises to over $100,000.6

TWO PROPOSED RULES FOR COST-EFFECTIVENESS ANALYSES

How do we reconcile these and related puzzles of cost-effectiveness? First, we should agree on what type of “cost-effectiveness” we are interested in. Most often, we want to know whether the real-world use of a therapy is financially rational. Thus, we are concerned with the effectiveness of therapies and not merely their efficacy in idealized clinical trials.

Furthermore, while real-world cost-effectiveness may change over time, particularly as pricing and delivery vary, we want some assurance that the therapy is truly better than placebo. Therefore, we should only calculate the cost-effectiveness of therapies that have previously demonstrated efficacy in properly controlled, randomized studies.7

To correct the deficiencies noted here, I propose two rules:

  • Cost-effectiveness should be calculated only for therapies that have been proven to work, and
  • These calculations should be done from the best available real-world data.

When both these conditions are met—ie, a therapy has proven efficacy, and we have data from its real-world use—cost-effectiveness analysis provides useful information for payers and practitioners. Then, indeed, a novel anticancer agent costing $30,000 per life-year gained can be compared against primary prevention with statin therapy in patients at elevated cardiovascular risk costing $20,000 per life-year gained.

 

 

CAN PREVENTION BE COMPARED WITH TREATMENT?

This leaves us with the final and most difficult question. Is it right to compare such things?

Having terminal cancer is a different experience than having high cholesterol, and this is the last apple and orange of cost-effectiveness. While a strict utilitarian view of medicine might find these cases indistinguishable, most practitioners and payers are not strict utilitarians. As a society, we tend to favor paying more to treat someone who is ill than paying an equivalent amount to prevent illness. Often, such a stance is criticized as a failure to invest in prevention and primary care, but another explanation is that the bias is a fundamental one of human risk-taking.

Cost-effectiveness is, to a certain degree, a slippery concept, and it is more likely to be “off” when a therapy is given broadly (to hundreds of thousands of people as opposed to hundreds) and taken in a decentralized fashion by individual patients (as opposed to directly observed therapy in an infusion suite). Accordingly, we may favor more expensive therapies, the cost-effectiveness of which can be estimated more precisely.

A recent meta-analysis of statins for primary prevention in high-risk patients found that they were not associated with improvement in the overall rate of death.8 Such a finding dramatically alters our impression of their cost-effectiveness and may explain the bias against investing in such therapies in the first place.

IMPROVING COST-EFFECTIVENESS RESEARCH

Studies of cost-effectiveness are not equivalent. Currently, such studies are apples and oranges, making difficult the very comparison that cost-effectiveness should facilitate. Knowing that a therapy is efficacious should be prerequisite to cost-effectiveness calculations, as should performing calculations under real-world conditions.

Regarding efficacy, it is inappropriate to calculate cost-effectiveness from trials that use only surrogate end points, or those that are improperly controlled.

For example, adding extended-release niacin to statin therapy may raise high-density lipoprotein cholesterol levels by 25%. Such an increase is, in turn, expected to confer a certain reduction in cardiovascular events and death. Thus, the cost-effectiveness of niacin might be calculated as $20,000 per life-year saved. However, adding extended-release niacin to statin therapy does not improve hard outcomes when directly measured,9 and the therapy is not efficacious at all. Its true “dollars per life-year saved” approaches infinity.

Studies that use historical controls, are observational, and are performed at single centers may also mislead us regarding a therapy’s efficacy. Tight glycemic control in intensive care patients initially seemed promising10,11 and cost-effective.12 However, several years later it was found to increase the mortality rate.13

“Real world” means that the best measures of cost-effectiveness will calculate the cost per life saved that the therapy achieves in clinical practice. Adherence to COX-2 inhibitors may not be as strict in the real world as it is in the carefully selected participants in randomized controlled trials, and, thus, the true costs may be higher. A drug that prevents breast cancer may have countervailing effects that may as yet be unknown, or compliance with it may wane over years. Thus, the most accurate measures of cost-effectiveness will examine therapies as best as they can function in typical practice and likely be derived from data sets of large payers or providers.

Finally, it remains an open and contentious issue whether the cost-effectiveness of primary prevention and the cost-effectiveness of treatment are comparable at all. We must continue to ponder and debate this philosophical question.

Certainly, these are the challenges of cost-effectiveness. Equally certain is that—with renewed consideration of the goals of such research, with stricter standards for future studies, and in an economic and political climate unable to sustain the status quo—the challenges must be surmounted.

References
  1. Kallmes DF, Comstock BA, Heagerty PJ, et al. A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361:569-579.
  2. Buchbinder R, Osborne RH, Ebeling PR, et al. A randomized trial of vertebroplasty for painful osteoporotic vertebral fractures. N Engl J Med 2009; 361:557-568.
  3. Masala S, Ciarrapico AM, Konda D, Vinicola V, Mammucari M, Simonetti G. Cost-effectiveness of percutaneous vertebroplasty in osteoporotic vertebral fractures. Eur Spine J 2008; 17:1242-1250.
  4. Goss PE, Ingle JN, Alés-Martínez JE, et al; NCIC CTG MAP3 Study Investigators. Exemestane for breast-cancer prevention in postmenopausal women. N Engl J Med 2011; 364:2381-2391.
  5. Studer M, Briel M, Leimenstoll B, Glass TR, Bucher HC. Effect of different antilipidemic agents and diets on mortality: a systematic review. Arch Intern Med 2005; 165:725-730.
  6. van Staa TP, Leufkens HG, Zhang B, Smeeth L. A comparison of cost effectiveness using data from randomized trials or actual clinical practice: selective COX-2 inhibitors as an example. PLoS Med 2009; 6:e1000194.
  7. Prasad V, Cifu A. A medical burden of proof: towards a new ethic. BioSocieties 2012; 7:72-87.
  8. Ray KK, Seshasai SR, Erqou S, et al. Statins and all-cause mortality in high-risk primary prevention: a meta-analysis of 11 randomized controlled trials involving 65,229 participants. Arch Intern Med 2010; 170:1024-1031.
  9. National Heart, Lung, and Blood Institute, National Institutes of Health. AIM-HIGH: Blinded treatment phase of study stopped. http://www.aimhigh-heart.com/. Accessed January 31, 2012.
  10. van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in critically ill patients. N Engl J Med 2001; 345:1359-1367.
  11. Van den Berghe G, Wilmer A, Hermans G, et al. Intensive insulin therapy in the medical ICU. N Engl J Med 2006; 354:449-461.
  12. Mesotten D, Van den Berghe G. Clinical potential of insulin therapy in critically ill patients. Drugs 2003; 63:625-636.
  13. NICE-SUGAR Study Investigators; Finfer S, Chittock DR, Su SY, et al. Intensive versus conventional glucose control in critically ill patients. N Engl J Med 2009; 360:1283-1297.
Issue: Cleveland Clinic Journal of Medicine - 79(6)
Page Number: 377-379