PCI and CABG: Use and Abuse
The difficulty in incorporating guidelines into clinical practice is nowhere more evident than in the decisions made based on coronary angiographic images. The controversy has raged from the minute Dr. F. Mason Sones Jr. first directly imaged the left coronary artery more than 40 years ago, and it has been compounded by the evolution of technological advances in both the angiographic laboratory and the operating room.
The coronary angiographers are the major players in determining which revascularization path to take—percutaneous coronary intervention (PCI) or coronary artery bypass graft surgery (CABG)—based on their diagnostic findings. They are forced to make the appropriate decision, based not only on the coronary anatomy, but also on the expertise of their surgical colleagues, the patient's choice and clinical status, and, in large part, the perceptions of their own clinical skills. More recently, their decisions are made under pressure from state and federal supervision, insurers, and their own hospital administrators who often have divergent attitudes toward clinical volumes and costs. Not an easy place to sit when all you wanted to do was to treat heart patients.
The recent publication of information from New York State's cardiac diagnostic catheterization database (Circulation 2010;121:267-75) provides some interesting insight into that decision-making process. The authors reported on 16,142 patients catheterized in 19 hospitals during 2005-2007. Catheterization laboratory cardiologists provided interventional recommendations for 10,333 (64%) of these patients. Study subjects ran the spectrum from asymptomatic angina to non–ST-elevation myocardial infarction. Their recommendations, based solely on the angiographic findings, were compared with those of the ACC/AHA guidelines. Among the 1,337 patients who had indications for CABG, 712 (53%) were recommended for CABG and 455 (34%) were recommended for PCI by the angiographer. Among the 6,051 patients with indications for PCI, 5,660 (94%) were recommended for PCI. Among the 1,223 patients in whom no intervention was indicated, 261 (21%) received PCI and 70 (6%) underwent CABG. To no one's surprise, there was a strong bias in the direction of PCI.
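For readers who want to trace how the percentages quoted above follow from the raw counts, the short Python sketch below recomputes them. The counts come directly from the report; the grouping labels and the rounding to whole percentages are assumptions made only for illustration.

```python
# Minimal sketch: recompute the percentages quoted above from the raw counts
# in the New York State catheterization analysis (Circulation 2010;121:267-75).
# Rounding to whole percentages is assumed to match the published figures.

cohorts = {
    "Indication for CABG": {"total": 1337,
                            "recommended for CABG": 712,
                            "recommended for PCI": 455},
    "Indication for PCI": {"total": 6051,
                           "recommended for PCI": 5660},
    "No intervention indicated": {"total": 1223,
                                  "received PCI": 261,
                                  "underwent CABG": 70},
}

for cohort, counts in cohorts.items():
    total = counts["total"]
    for label, n in counts.items():
        if label == "total":
            continue
        # e.g. 712/1337 -> 53%, 5660/6051 -> 94%, 261/1223 -> 21%
        print(f"{cohort}: {label} = {n}/{total} ({n / total:.0%})")
```

Running the sketch reproduces the figures in the paragraph above (53%, 34%, 94%, 21%, and 6%), which is simply a check that the reported percentages are consistent with the reported counts.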
In an excellent editorial accompanying the report, Dr. Raymond J. Gibbons of the Mayo Clinic in Rochester, Minn., thoughtfully placed these data in the milieu of the contemporary issues surrounding the use and abuse of coronary angiography and interventions (Circulation 2010;121:194-6). He noted the observed bias toward PCI in the analysis, which is to be expected since there is a “tendency for us to believe in what we do.” Considering the data in general, in the closely monitored environment of New York State, the evidence of abuse or overuse was limited to the 27% of patients who went on to PCI or CABG outside of the guidelines. In view of the fact that the analysis did not consider the medical history and concurrent therapy of patients, overuse of interventions appeared to be limited.
Of more concern to Gibbons and this editor is the question of the regional variation in the use of both diagnostic angiography and vascular interventions. In New York State, the performance rate of PCI in different regional health care markets varied between 6.2 and 13.0 interventions per 1,000 Medicare beneficiaries. The New York State rate was similar to that in Rochester, Minn., and Cleveland. However, the highest regional PCI rate in New York State was lower than that in 69 of the 305 health care markets in the United States. Similar variation was observed in the use of CABG, where the highest rate in New York was less than half the rate observed in McAllen, Tex. These wide variations bespeak the potential for decision making that is well outside guideline recommendations. We have noted in this column that these are “only” guidelines. However, it behooves those of us who stray that far outside the guideline recommendations to be certain of the appropriateness of our decisions.
Most of us are not as much under the microscope as our colleagues in New York State. But as the Centers for Medicare and Medicaid Services intrudes more into our practice, the microscope likely will be trained on all of us. Finding the best answers to clinical care is not easy. We are all driven by our own personal experiences, but it is helpful to temper our experiences with those of our colleagues.
Getting CME Back on Track
There was a time in the distant past—well, slightly less than half a century ago—when academic physicians and medical schools took responsibility for the postgraduate education of their alumni and their community doctors. Faculty members were actually sent out to give talks and clinics—without pay. One of the benefits of this process was the communication between the medical center and its community of physicians. Although information was shared, the most important aspect of this interaction was providing a name and a face and a telephone number, so physicians could find help to solve the problems of their patients.
Along the way, something knocked this continuing medical education train off the rails: the pharmaceutical industry. Medical schools and teaching hospitals were quick to pass the responsibilities on to pharma in an atmosphere where the profit motives of both were intermingled. Since then, medical educators have been trying to get that train back on track after realizing the dubious nature of the relationship between industry and CME.
The pharmaceutical industry, under intense pressure from Congress, is pulling back its support for CME. Medical educators are trying to develop a new framework for the support of practicing physicians, in an increasingly complex environment where instant education is critically needed. In some instances, industry is establishing open-ended grants to medical schools, such as the recent offer by Pfizer to Stanford University (New York Times, Jan. 11, 2010). Critics have rightfully voiced suspicion about this relationship.
Other institutions such as Harvard Medical School have come to realize that their cozy relationships with industry over the last half century may have compromised the medical message. Harvard no longer allows its faculty to give industry-supported lectures, and has limited the fees received by faculty leaders for a variety of services including board membership (New York Times, Jan. 3, 2010). And not surprisingly, the Institute of Medicine is proposing the creation of a Continuing Professional Development Institute to ensure that the workforce is prepared to provide high-quality and safe care (search cpdi at www.nap.edu).
Whatever happened to the idea that the teaching hospitals have a responsibility to provide CME support to their medical communities? This should be particularly important to state medical schools, which have a moral and administrative responsibility to provide an educational framework for physicians to meet their licensure requirements without depending on the pharmaceutical industry or the federal government. Medical systems that provide large parts of community care also have a responsibility to provide an educational structure that supports quality care. Instead of advertising on television, they should spend their money on supporting the needs of the community, and provide the much-needed link between the family doctor and the consultant, without using the emergency department as the conduit.
In the meantime, large gaps are occurring in the CME structure as the pharmaceutical industry withdraws from the arena. Many physicians are turning to the Internet for information. The explosion of new technology and therapy occurring in medicine calls for major changes in how we provide CME. Missing in many of the proposed CME changes are methodologies to strengthen communication between the consultant and the primary care doctor. We must meet the challenge if we are to translate medical research to the bedside and improve the quality of care.
End-of-Life Care: Its Cost in Heart Failure
End-of-life care is considered a factor in the explosion of American health care costs in the past decade, and decreasing its cost is one of the targets included in current health care legislation.
Expenses incurred for end-of-life care are part of the estimated $700 billion wasted in health care annually in the United States. Mitigating these costs can lead to a significant decrease in the cost of health care and insurance premiums.
Cost comparisons of large referral centers such as the Mayo Clinic with hospitals that provide front-line care in urban centers have provided examples of this excess. Health planners have reported that the costs of end-of-life care in referral centers are half as much as those at other hospitals, but they have given little weight to the variation in socioeconomic environments in which health care is provided.
The examination of comparative data has emphasized the high costs of technology and an array of expensive consultants who are brought to the bedsides of terminally ill patients. Those studies have suggested that little patient benefit results from these futile and expensive efforts.
All of these end-of-life analyses have consistently used retrospective analysis of patients who have died, examining the cost of their care from hospital admission to death.
A recent analysis of six major teaching hospitals in California considered the issue from a different perspective by “looking forward” or prospectively from the time of admission at the costs and benefits of intensive medical care for patients identified as high risk (Circ. Cardiovasc. Qual. Outcomes 2009;2:548–57).
Researchers examined the relationship of in-hospital resource use to mortality over a 180-day period in 3,999 patients hospitalized for heart failure, analyzed “looking forward” or prospectively, compared with 1,639 patients who died during the same period, analyzed “looking backward” or retrospectively.
Patients in the two groups were risk adjusted to provide comparability of baseline characteristics.
The investigators found that in a prospective analysis of these teaching hospitals, the increased resource utilization was associated with improved mortality outcomes and lower costs.
The number of days hospitalized also was significantly lower in the prospective analysis than in the retrospective analysis of the patients who had died.
There was considerable variation in resource use between hospitals, but even among the hospitals studied, the institution with the highest cost had the best outcome. In-hospital mortality for the “looking forward” group ranged between 2.2% and 4.7%, and the 180-day mortality ranged from 17% to 26%. These rates are very similar to previously reported registry data for heart failure admissions.
One might question whether heart failure patients should be used to examine end-of-life issues.
It is not easy for physicians to identify patients who are at high risk upon admission. Many patients who are admitted with severe heart failure improve dramatically with aggressive therapy, and most of them leave the hospital.
Nevertheless, within the population of severely ill heart failure patients there are individuals whose 180-day mortality is comparable with that of patients who have cancer. In fact, it is clear that the heart failure population includes subgroups with very high mortality on which current therapy has had little impact and that are difficult to identify upon admission.
The pressure to establish methodologies to limit health care costs within the framework of new health care legislation requires a more sophisticated approach to the modulation of cost.
The analysis cited above emphasizes the complexity of the cost issues that go into choosing care pathways at the bedside. Emphasizing the cost differential between referral centers such as the Mayo Clinic and teaching hospitals that provide acute urban care, based on fatal outcomes alone, does not help resolve the therapeutic decisions facing high-risk patients.
This new analysis raises important questions and provides a methodology that can expand our understanding of the complexities of end-of-life care and its costs. It can identify where efficiencies can be introduced to bring comfort to both our patients and our pocketbooks.
Comparative Effectiveness: Are We Ready?
The cardiology community, under the leadership of the American College of Cardiology and American Heart Association, has struggled for the last 2 decades with the task of creating appropriateness guidelines for the care of cardiac patients.
That rigorous, open process has struggled to provide a scientific foundation for its guideline recommendations.
It has taken on a new dimension with the forthcoming health care reform under consideration by Congress, which has given comparative effectiveness research (CER) a major role in establishing the payment parameters for appropriate use of drugs and devices within Medicare. The final construct of the CER process will have an immense impact on how we practice cardiology and will extend well beyond the use of guidelines in our clinical decision making.
It has been estimated that 30% of all medical spending has no discernible benefit. The bill for this useless care totals approximately $700 billion.
To deal with this presumed “waste,” the federal government plans to use CER to gain more data to establish guidelines and recommendations about the efficacy of current therapy primarily in the Medicare population. Because of the size of Medicare, these changes will likely impact the entire insurance industry.
Some political conservatives would suggest that this will result in rationing of care. In fact, this is precisely what is intended. But of course we have had economically imposed rationing for some time.
Ensuring the most judicious use of resources measured by effectiveness and cost is certainly a worthwhile goal. Whether the medical community is now or will ever be ready to fill this role is open to considerable question.
Cardiology guidelines have had limited success. At best, they have provided marginal improvement in clinical care (Am. Heart J. 2009;158:546–53). Only 19% of current guideline recommendations are supported by the highest level of evidence.
Even when a clinical trial shows a positive benefit, its effect on clinical care is slow. Rarely is a single trial's demonstration of a drug's efficacy sufficient to change medical care substantially. The development of convincing data that will gain the approval of the Food and Drug Administration costs money and time.
Changes are even more difficult when carrying out comparative trials as CER advocates propose. By their nature, comparative trials in which one therapy is compared with another take large patient numbers.
One of the few comparative trials sponsored by the National Heart, Lung, and Blood Institute—the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)—compared antihypertensive drugs and was designed to provide the final answer to the question of which therapy is most effective. ALLHAT's outcome had little effect on clinical care, and the trial cost more than $100 million. Few comparative trials have been carried out since then because of the expense. Randomized trials sponsored by pharmaceutical companies are designed to test new drugs against older, accepted medications or placebo.
The current Congressional plan includes more than $1 billion for studies comparing drugs and devices to “save money and lives.”
It is proposed that a federal health board or a comparative effectiveness agency will be created to institute a process of compliance and implementation of its decisions. CER decisions could become de facto administrative decisions and could determine what care can and should be provided within Medicare. It is anticipated that in some instances, the agency will seek input from specialty societies rather than leaving the decision process to a group of experts in Washington.
How comparative effectiveness research will be carried out is still under review, and its ultimate impact on care remains uncertain.
It is certainly unrealistic to presume that we are close to providing the scientific answer to the appropriateness conundrum. Effectiveness is not easily defined, but we can usually define ineffectiveness when we see it. Bridging the gap between these two extremes is easier said than done.
How Many Cardiologists Do We Need?
The nature and distribution of the cardiology workforce has been at issue for the American College of Cardiology for more than 20 years. It impacts the college's ability to meet community requirements for quality care, and it affects the income of cardiologists.
On the basis of current projections, there will be a major shortfall of cardiologists in 2025, according to a recent Lewin Group report. To some extent, the shortage will affect interventional cardiologists, but the major impact will be felt in the ranks of general cardiologists like me, where the current shortage of 1,600 will swell to 16,000 by 2025.
Projecting the future is, at best, uncertain. These predictions are based on two measurements: first, that there are 1.8 applicants for every cardiovascular training slot, and second, that based on ACC academic, pediatric, and private practice surveys, there is “substantial excess demand for new cardiologists which cannot be met with the current number of fellows completing training annually.” The foundation of these estimates is open to some question.
Not too long ago, the size of the cardiovascular workforce was deemed adequate, with the expectation that interposition of managed care in the medical marketplace would limit patients' ability to get onto our appointment schedules. But then, managed care went up in smoke, and interventional cardiologists found vascular targets not only in the coronary bed but in the head and the legs, not to mention the aortic and mitral valves.
What the future holds for our specialty is uncertain, but if health care reform actually happens and the millions of the uninsured begin to seek medical care, one could anticipate an increase in demand for cardiology services. In reality, however, most of the people in need of our cardiac services are already covered by Medicare.
Also, it is worth considering that much of the increase in cardiology service demand is a “bubble.” The development of degenerative disease in the aging population speaks to increased volumes, but interventions that have been applied to younger patients may not be applicable or beneficial to the aged.
The Lewin Group report considered a number of solutions to expand the cardiology workforce and meet community needs, including more efficient use of support staff and delaying the retirement of older cardiologists.
The report does not touch on the unlikely possibility of expanding training programs. Currently, the number of hospital training slots is limited by the Balanced Budget Act of 1997, passed at a time when there was no perceived increased need for specialists. The act fixed the number of training slots by limiting Medicare payments; hospitals that expand their house staff beyond those limits do so at their own expense.
The alternative would be to take on more trainees and have the hospital pay for them. At a time when cardiology is a marketing target, this would not be at all unreasonable.
Even if more slots were provided, it is not certain how many institutions that are not already training cardiology fellows could meet the increased and more stringent quality, volume, and faculty requirements established by the Accreditation Council for Graduate Medical Education.
One proposal that could increase the supply of clinical cardiologists was to shorten the training of general cardiologists by telescoping it into the last year of general medicine training. That proposal died a slow death at the hands of the ACGME.
Thus, it is clear that in the current political environment, an increase in the cardiology workforce is unlikely.
However, much of what we see as our prime therapeutic domain—hypertension, angina, and heart failure—is treated predominantly by noncardiologists. To provide the care our communities need, we may have to develop a more integrated and collegial relationship with general internists and family physicians. The solution for cardiologists to meet those clinical needs lies in more efficient integration of cardiology into the general medical community.
Mission: Lifeline
The American Heart Association is leading a major national effort to improve and expedite treatment for ST-segment elevation myocardial infarction. Under the Mission: Lifeline logo, reminiscent of TV's “Mission: Impossible,” the AHA, in collaboration with other organizations, is developing criteria and certification for members of the STEMI “treatment train,” from emergency medical services through referring hospitals to the hospitals that can perform emergency percutaneous coronary intervention 24/7.
To reach the goals of delivering fibrinolytic therapy within 30 minutes and primary PCI within 90 minutes for STEMI, the nation's EMS and hospital referral systems must improve. There are excellent community systems that can serve as models, but creating a uniform national system is a challenge, given the wide variety of players.
Unlike the European systems, which are uniform in configuration, for the most part federally funded, and very successful at expediting care for STEMI, the U.S. system is a helter-skelter of private and voluntary players, each bent on preserving its own priorities. Only 6% of American EMS systems are hospital based; the rest are run by fire departments, volunteers, and private operators. State governments control EMS operations, which are certified for different levels of care and variably equipped to deal with cardiac emergencies. Americans call on EMS in fewer than 25% of STEMI emergencies; despite extensive public education, they still do not understand the need for a rapid response to chest pain symptoms.
“Although the performance of primary PCI has increased from 18% to 53%,” nearly 30% of patients with STEMI still receive neither fibrinolytic therapy nor PCI, said Dr. Alice Jacobs, former president of the AHA, who is leading the Mission: Lifeline effort (Circulation 2007;116:689–92). Most STEMI patients seek medical help at hospitals that are not equipped to perform primary PCI. To expand the number of PCI-capable hospitals, the requirement that such hospitals have on-site cardiac surgery will have to be dropped. Rapid access to PCI can be achieved if systems are in place to expedite patient transfer or to initiate fibrinolytic therapy when primary PCI is neither feasible nor appropriate. Non-PCI hospitals within the Mission: Lifeline network would be certified as “STEMI referral hospitals” and would create pathways that expedite transfer to “STEMI receiving hospitals.” Most importantly, competency and numerical criteria have been developed for certification as a “STEMI receiving hospital.”
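For those who think of such pathways in code, the time-based triage logic a STEMI network encodes can be sketched in a few lines of Python. The 90-minute PCI and 30-minute fibrinolysis targets come from the goals cited above; the function, its inputs, and the simple contraindication flag are illustrative assumptions of mine, not part of any Mission: Lifeline certification tool.

# A minimal, hypothetical sketch of time-based STEMI reperfusion triage.
# The 90- and 30-minute targets mirror the goals quoted in this column;
# everything else (names, inputs, the contraindication flag) is illustrative.

PCI_TARGET_MIN = 90     # goal: balloon inflation within 90 minutes
LYTIC_TARGET_MIN = 30   # goal: fibrinolytic therapy within 30 minutes

def choose_reperfusion(minutes_to_balloon, lytic_contraindicated):
    """Suggest a reperfusion strategy for a STEMI patient at a referral hospital."""
    if minutes_to_balloon <= PCI_TARGET_MIN:
        return "transfer for primary PCI"
    if not lytic_contraindicated:
        return "start fibrinolysis within %d minutes, then transfer" % LYTIC_TARGET_MIN
    return "transfer for PCI despite the expected delay"

# Example: a referring hospital roughly 2 hours from the nearest PCI center.
print(choose_reperfusion(minutes_to_balloon=120, lytic_contraindicated=False))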
Mission: Lifeline registration of EMS agencies and of referring and receiving hospitals is underway, and certification of sites that meet the published criteria is soon to start. Many communities and hospital systems have the infrastructure in place, but both urban and rural systems are up against major logistical and political barriers. The AHA is to be applauded for rising to the challenge of improving emergency cardiac care for STEMI. But achieving the goals of Mission: Lifeline nationwide might actually be a “Mission: Impossible.”
For more information, visit www.americanheart.org/missionlifeline
A Sea Change in Anticoagulation Therapy
For more than half a century, ever since vitamin K antagonists were shown to be beneficial in the treatment of acute myocardial infarction, physicians have struggled with the seemingly impossible task of dosing these drugs.
At that time, the major risk in acute MI was pulmonary embolism, which developed during the weeks of absolute bed rest then prescribed. Ever since, physicians, nurses, and patients have wrestled with the logistical difficulty of staying within the narrow dose range that achieves maximum benefit while minimizing risk. Dose titration walks a fine line between recurrent embolic stroke and major bleeding.
The slow onset of vitamin K antagonists such as warfarin, combined with the variability of dose response driven by genetic polymorphisms, food intake, and interactions with other drugs, has made dosing a therapeutic nightmare. In spite of these problems, vitamin K antagonist therapy has remained the standard method for preventing thromboembolism in atrial fibrillation and after valve surgery. The process of dose adjustment for warfarin, the most commonly used drug of the class, requires a huge manpower effort.
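To make the narrowness of that window concrete, consider a toy version of the dose-adjustment arithmetic an anticoagulation clinic performs at every visit, written here in Python. The 2.0-3.0 INR target range for atrial fibrillation is standard; the percentage adjustments, recheck intervals, and function name are illustrative assumptions of mine, not a dosing protocol.

# A toy, hypothetical INR-band rule; everything other than the standard
# 2.0-3.0 target range is illustrative, not clinical guidance.

def adjust_weekly_warfarin_dose(inr, weekly_dose_mg):
    """Return (new weekly dose in mg, follow-up advice) for a measured INR."""
    if inr < 2.0:        # subtherapeutic: risk of thromboembolism
        return weekly_dose_mg * 1.10, "recheck INR in 1 week"
    if inr <= 3.0:       # within the target range
        return weekly_dose_mg, "recheck INR in 4 weeks"
    if inr <= 5.0:       # above range: rising bleeding risk
        return weekly_dose_mg * 0.90, "recheck INR in 1 week"
    return 0.0, "hold warfarin and call the clinic"  # markedly elevated

# Example: a patient on 35 mg/week whose INR comes back at 1.7.
print(adjust_weekly_warfarin_dose(inr=1.7, weekly_dose_mg=35.0))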
Over time, attempts to find an alternative have been unsuccessful. Trials of aspirin plus clopidogrel found the combination more effective than aspirin alone, but not as effective as warfarin.
The comparison of subcutaneously administered factor Xa inhibitors fondaparinux and idraparinux with warfarin resulted in fewer emboli but more bleeding. The long-term subcutaneous administration required with low-molecular-weight heparin also was unacceptable. More recently, the direct thrombin inhibitor ximelagatran was found to have a benefit similar to that of warfarin but with an unacceptable incidence of hepatotoxicity.
For the first time, an effective and safe replacement for warfarin has been developed. The direct thrombin inhibitor dabigatran, a cousin of ximelagatran, was shown to be at least as effective as warfarin and, depending on the dose, associated with fewer bleeding episodes in the 18,000-patient RE-LY study presented in August at the European Society of Cardiology meeting.
Dabigatran, already approved in Canada and Europe, is the first of this class of drugs to be developed, and it provides an opportunity to evaluate other direct thrombin inhibitor molecules. In addition to the direct thrombin inhibitors, oral factor Xa inhibitors are under intense clinical evaluation. One of these, rivaroxaban, also approved in Canada and Europe, demonstrated efficacy in the prevention of venous thromboembolism following major orthopedic surgery (N. Engl. J. Med. 2008;358:2776–86). Like dabigatran, it has a rapid onset of action and can be given orally in a fixed dose, with efficacy comparable to warfarin and less bleeding. Both the direct thrombin and factor Xa inhibitors have so far been tested largely in orthopedic patients, in whom venous thrombosis can be easily identified with venography; they have yet to be tested more widely in other cardiovascular settings. A number of studies of factor Xa inhibitors are underway to evaluate their benefit in acute coronary syndromes in combination with antiplatelet therapy. How the two new classes of anticoagulants compare clinically remains to be defined.
It is clear that these drugs will change the shape of anticoagulant therapy for a variety of cardiovascular conditions. Many patients now referred for atrial fibrillation ablation may find a safe, easily taken oral anticoagulant a better alternative to electrophysiologic intervention. Once their efficacy is proved, the safety and benefit of these new classes of anticoagulants should also make it easier for patients and physicians to adhere to published guidelines. For now, dabigatran represents a major advance in the prevention of thromboembolism in these patients.
Black Boxes
The Food and Drug Administration is abdicating some of its responsibility, passing it to you and me, by placing boxed warnings on its approvals of potent drugs when faced with difficult decisions regarding their risks and benefits. In the past, the approval process was straightforward, but as we develop stronger drugs with significant risks, it has become much more complex.
Two recent approvals serve as examples. Dronedarone is a newly approved antiarrhythmic drug intended to replace amiodarone for the maintenance of normal sinus rhythm in patients with paroxysmal atrial fibrillation. Because of amiodarone's long-term thyroid and lung toxicity, dronedarone was created by making structural changes in the amiodarone molecule to prevent that toxicity. Amiodarone has been effective in maintaining normal sinus rhythm, but with a trend toward increased mortality in New York Heart Association class III heart failure patients. The initial clinical trial of dronedarone, ANDROMEDA, carried out in NYHA class III-IV patients, was stopped prematurely because of increased heart failure mortality.
ATHENA, a later, shorter-term trial of dronedarone in patients who had experienced at least one episode of paroxysmal AF in the previous 3 months, reported a significant decrease in atrial fibrillation compared with placebo; it excluded NYHA class IV patients and those with chronic AF (N. Engl. J. Med. 2009;360:668-78). Dronedarone significantly improved the combined rate of mortality and recurrent hospitalization compared with placebo (36.9% vs. 29.3%) and decreased rehospitalization for AF from 21.8% to 14.6%. ATHENA's short duration precluded any assessment of amiodarone-like side effects, and because the trial did not compare dronedarone with amiodarone, it is not clear which drug is the more effective antiarrhythmic.
The FDA approval of dronedarone for the prevention of AF came with a black box warning that it “is contraindicated in patients with NYHA Class IV heart failure, or NYHA Class II-III heart failure with a recent decompensation requiring hospitalization or referral to a specialized heart failure clinic.” The FDA left it up to us to decide when to use this drug in heart failure, a moving target at best.
Shortly after this decision, the FDA approved the antiplatelet agent prasugrel for use in patients with acute coronary syndromes who are likely to undergo percutaneous coronary intervention. In the TRITON trial, which compared prasugrel with clopidogrel in ACS, prasugrel produced a greater reduction in the composite of recurrent MI, cardiovascular mortality, or nonfatal stroke (12.1% vs. 9.9%), a result largely driven by “troponin-defined” nonfatal MIs. However, prasugrel was associated with an incidence of bleeding and thrombotic strokes about five times that of clopidogrel (6.5% vs. 1.2%), particularly in thin and elderly patients. With this information, the FDA approved prasugrel for the reduction of thrombotic cardiovascular events in ACS patients managed with PCI. The approval came with a black box warning cautioning against its use in patients with a propensity to bleed, in those older than 75 years, and in those weighing less than 60 kg.
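The boxed warning's explicit cutoffs translate directly into a bedside screen. The short Python sketch below encodes only the three cautions described above (age older than 75 years, body weight under 60 kg, and a propensity to bleed); the function and variable names are purely illustrative.

# Hypothetical screen based on the cautions in prasugrel's boxed warning
# as summarized in this column; names and structure are illustrative only.

def prasugrel_cautioned(age_years, weight_kg, propensity_to_bleed):
    """Return True if any boxed-warning caution applies."""
    return age_years > 75 or weight_kg < 60 or propensity_to_bleed

# Example: an 80-year-old, 72-kg patient with no unusual bleeding risk.
print(prasugrel_cautioned(age_years=80, weight_kg=72.0, propensity_to_bleed=False))  # True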
The current discussion of the approval process for these two drugs, whose margins of benefit over risk are narrow, suggests that the cautions have at least been noticed. Over time, however, black box warnings lose some of their impact and may carry less weight in our therapeutic decisions. Indeed, it has been suggested that their increasingly liberal use has already diluted their meaning.
There is nothing wrong with the FDA handing the drug-use decision to the doctors on the front line, but it should be emphasized that many of these drugs carry significant risk if used in the wrong patient. Doctors beware!
Health Insurance For Everyone
As I write this column, the House of Representatives is struggling with the details of the new health care legislation, America's Affordable Health Choices Act of 2009 (H.R. 3200). By the time you read this, Congress will be in recess, having left the final form and fate of the legislation uncertain.
The goal of providing universal health insurance is ambitious, and achieving it while limiting its cost is a monumental task. If achieved, it will represent a sea change in the way America pays for health coverage, just as Medicare transformed the care of the over-65 population more than four decades ago. The proposed legislation would rewrite the rules of how Americans receive their care, and how we physicians provide it, within an expanded government-controlled health care system.
The government is already heavily involved in providing health care to a third of the U.S. population. Medicare, which has been very successful in the eyes of elderly patients if never a darling of doctors, covers more than 45 million Americans; Medicaid, jointly financed by the federal and state governments, covers another 60 million poor people. It would seem obvious to expand such coverage to include the remaining two-thirds.
For a public plan to be viable, it must include an individual mandate to participate: everyone must be included. If not, healthy patients, whether by design or by personal choice, will opt out, and only the sickest and most expensive will be left in the public plan. An individual mandate will require the government to subsidize the poor, just as it has in the past, but through a more organized system rather than the expensive and dysfunctional care now delivered in the emergency department.
Without a universal health plan in place, the ranks of the uninsured will continue to grow. Almost 50 million Americans are currently uninsured, well over 15% of our population. When Americans who are out of work in the current recession return to the workforce, post-recession health insurance policies will not resemble pre-recession policies: many benefits will disappear, and copayments will almost certainly increase. As the pool of insured patients shrinks, competition among health care providers for those patients will intensify. We see this already, with hospitals and physicians advertising on TV and in the print media. Caught between rising costs and falling profits, insurers will have to choose between raising premiums and cutting doctors' fees. The day may come when Medicare's physician fee schedule is a welcome lifeboat for physicians' practices.
In its response to President Obama's address to the American Medical Association in June, the AMA's leadership echoed the need for universal care but indicated that a payment schedule based on Medicare rates was unacceptable. The AMA has been reluctant to articulate just what sort of universal health insurance plan, public or private, would be acceptable.
Nevertheless, the process of achieving a federal insurance plan has been much different from that of the Clinton plan in 1993. This time there has been significant bargaining between Congress and health care providers. The word from the Democratic Congress is that anyone who wants a seat at the table must forgo negative advertising and publicity, so at this point everyone is in until they are out. The pharmaceutical industry has indicated that it will provide $80 billion in savings to the elderly over a 10-year period, in part by reducing drug costs for Medicare recipients trapped in the Part D “doughnut hole.” The major hospital associations have pledged $150 billion in savings, provided that the new public plan covers the indigent. And the AMA has agreed to support H.R. 3200.
There is considerable concern about cost, now estimated at $900 billion over the next 10 years, yet Congress hardly blinked when it spent well over that on a useless war in Iraq. The social and economic necessity of a universal health plan is obvious. To achieve it, a robust universal public insurance foundation is essential; anything short of that will lead to further deterioration in American health care.
Who Runs the CCU?
The development of the coronary care unit in the mid-1960s was a seminal event in clinical medicine. It recognized the gravity of the first few hours and days of an acute myocardial infarction and revealed a dimension of pathology previously unknown to the clinician.
These observations led to an expansion of clinical research and therapy in cardiology, which continues today. Patients with acute myocardial infarction, a clinical event first described by Dr. James Herrick in 1912, were well known. But it was not until the opening of CCUs in medical centers in the United States and England that we began to fully understand the clinical events that resulted from coronary artery thrombosis. The CCU was the launching pad from which that research evolved over the next half century.
Initially, the CCU was largely an arrhythmia-monitoring unit, but it soon became a clinical laboratory aimed at recognizing left ventricular failure and hemodynamic instability through monitoring of cardiac function with the Swan-Ganz catheter. It became the site where we first examined the role of catecholamines and vasodilators in the treatment of hypotension and shock.
The CCU has changed significantly since then.
The spectrum of cardiac pathology has broadened with the development of biomarkers that expanded our understanding of the early expression of ischemia. These biomarker determinations identified the previously unrecognized magnitude of coronary ischemia. Cardiologists became more interested in the acute coronary syndromes and early angiographic expression of disease.
As a result, the CCU is now largely the repository for patients with complicated ST-segment elevation myocardial infarctions, patients recovering from complex percutaneous coronary interventions, and patients with hemodynamic instability and left ventricular failure.
The CCU has also blended into the hospital complex of intensive care units. In many institutions, the boundary between ICU and CCU has become blurred beyond recognition.
During a recent rounding rotation on our consult service, I was struck by the expansion of ICU beds in our institution and the role that the intensivist plays in the administration and care of patients in these units. The management of a broad spectrum of diseases, from pulmonary failure to postoperative neurosurgical problems, is no longer the responsibility of the medical discipline of origin. It is assigned instead to the domain of the generic intensivist once the patient enters the ICU.
The same pressures to provide round-the-clock care have led to the gradual invasion of the CCU by the ubiquitous intensivist. Health planners, including one of the leaders in the reinvention of health care, the Leapfrog Group, have proposed that all intensive care units, including the coronary care unit, be under the control of a resident intensivist, who often doubles as a hospitalist. They point to studies showing reductions in ICU mortality of up to 40% in such units (www.leapfroggroup.org/about_us/leapfrog-factsheet).
The American College of Chest Physicians and the Committee on Manpower of Pulmonary and Critical Care Societies have led the expansion of the intensivist's role. In a recent report to Congress, the groups specifically emphasized the short supply of intensivists and their important role in the care of ICU patients (Senate Report 108-81).
Generic use of intensivists in ICUs because of their round-the-clock availability in the hospital is not necessarily a step forward. There is no question that immediate physician availability is essential to the care of the critically ill patient. But the physician best equipped to render this care is the one who is trained to deal with that specialty. To fulfill our responsibility for cardiac care, we must provide more CCU experience during cardiology training. Those challenges are outlined in an excellent editorial by Dr. Jason Katz and colleagues that emphasizes the need for intensivist training in cardiology programs (J. Am. Coll. Cardiol. 2007;49:1279).
There is reason to be concerned that training has become subservient to the demands of technologies that are more lucrative but less than supportive of our role as cardiologists. In order for cardiologists to render quality care in the future, more CCU experience is essential in our training programs.
The CCU remains an essential clinical laboratory for the care of the cardiac patient and we must maintain our role in that environment.
The development of the coronary care unit in the mid-1960s was a seminal event in clinical medicine. It recognized the gravity of the first few hours and days of an acute myocardial infarction and revealed a dimension of pathology previously unknown to the clinician.
These observations led to an expansion of clinical research and therapy in cardiology, which continues today. Patients with acute myocardial infarction, a clinical event first described by Dr. James Herrick in 1912, were well known. But it was not until the opening of CCUs in medical centers in the United States and England that we began to fully understand the clinical events that resulted from coronary artery thrombosis. The CCU was the launching pad from which that research evolved over the next half century.
Initially, the CCU was largely an arrhythmia-monitoring unit, but it soon became a clinical laboratory aimed at the recognition of left ventricular failure and homodynamic instability based on monitoring of cardiac function with the Swan-Ganz catheter. It became the site where we first examine the role of catecholamines and vasodilators in the treatment of hypotension and shock.
The CCU has changed significantly since then.
The spectrum of cardiac pathology has broadened with the development of biomarkers that expanded our understanding of the early expression of ischemia. These biomarker determinations identified the previously unrecognized magnitude of coronary ischemia. Cardiologists became more interested in the acute coronary syndromes and early angiographic expression of disease.
As a result, the CCU is now largely the repository of complicated ST-segment elevation myocardial infarctions, post complex percutaneous coronary intervention, and the treatment of patients with homodynamic instability and left ventricular failure.
The CCU has also blended into the hospital complex of intensive care units. In many institutions, the boundary between ICU and CCU has become blurred beyond recognition.
During a recent rounding rotation on our consult service, I was struck by the expansion of ICU beds in our institution and the role that the intensivist plays in the administration and care of patients in these units. The management of a broad spectrum of diseases, from pulmonary failure to postoperative neurosurgical problems, is no longer the responsibility of the medical discipline of origin. It is assigned instead to the domain of the generic intensivist once the patient enters the ICU.
The same pressures to provide round-the-clock care have led to the gradual invasion of the CCU by the ubiquitous intensivist. Health planners, including the Leapfrog Group, one of the leaders in the reinvention of health care, have proposed that all intensive care units, including the coronary care unit, be placed under the control of a resident intensivist, who often doubles as a hospitalist. They point to studies showing improvements in ICU mortality of up to 40% in such units (www.leapfroggroup.org/about_us/leapfrog-factsheet).
The American College of Chest Physicians and the Committee on Manpower of Pulmonary and Critical Care Societies have led the expansion of the role of the intensivist. In a recent report to Congress, the groups specifically emphasized the short supply of intensivists and their important role in the care of ICU patients (Senate Report 108-81).
The generic use of intensivists in ICUs because of their round-the-clock availability in the hospital is not necessarily a step forward. There is no question that immediate physician availability is essential to the care of the critically ill patient, but the physician best equipped to render that care is the one trained in the relevant specialty. To fulfill our responsibility for cardiac care, we must provide more CCU experience during cardiology training. Those challenges are outlined in an excellent editorial by Dr. Jason Katz and colleagues that emphasizes the need for intensivist training in cardiology programs (J. Am. Coll. Cardiol. 2007;49:1279).
There is reason to be concerned that training has become subservient to the demands of technologies that are more lucrative but less than supportive of our role as cardiologists. In order for cardiologists to render quality care in the future, more CCU experience is essential in our training programs.
The CCU remains an essential clinical laboratory for the care of the cardiac patient, and we must maintain our role in that environment.