The Corporation Cardiologist
In bygone days, your community hospital was a place where babies were born and gallbladders were removed. Such hospitals were often run by city governments or local religious organizations. Of course, a lot has changed since then. Now your hospital advertises on television, regaling viewers with the medical miracles performed inside its walls. Hospitals also are getting bigger, merging with smaller and occasionally coequal institutions in the name of efficiency and in an effort to expand their patient catchment.
In an even larger sense, hospitals have been gobbled up by insurance companies and by for-profit networks in an attempt to maximize profits and minimize overhead. Some prestigious hospitals like the Mayo Clinic, the Cleveland Clinic, and MD Anderson Cancer Center have even established affiliations with community hospitals thousands of miles away seemingly to improve local care and at the same time to expand their referral network. Where local hospitals are not adequate and the market beckons, some have even built their own facilities not only in the United States, but also in countries around the globe.
The intent of these network affiliations is not only to improve their image but also to impart some of their prestige to the local entities. As a result of these mergers and consolidations, hospitals are positioning themselves to be more competitive in the new world of health care. Some would profess the altruism of providing better care, whether locally or at a distance, but in the long run, economics and market share are the driving forces. They have not been concerned with delivering babies or taking out gallbladders for a long time.
Few can predict what the new world will look like, but it is quite certain that the Affordable Care Act will re-create or substantially modify American health care as we know it. The potential of attracting thousands of previously uninsured patients, who – with the help of the federal government – can come in the front door for care rather than using the back door of the emergency department, will be an important target.
In this environment, the practicing physician is caught in the changing tide. Many who are not in the swim will be washed up on the beach. Cardiology, along with oncology and gastroenterology, is a prime target for the anticipated efficiencies evolving from the hospital system expansions and mergers. Although there is a well-recognized need for primary care professionals, much of this need is already being filled by nonphysician professionals. It is possible that cardiology, which has been one of the star profit centers, could become a target for consolidation and economy in the future. We may be seeing some of this already, as opportunities for trainees completing their programs appear to be diminishing.
It is obvious that there has been a major shift in the setting of cardiology practice in the last few years. Since 2007, the proportion of physician-owned practices has decreased substantially: while the share of cardiologists employed by hospitals has grown from 11% to 35%, the share in physician-owned practices has decreased from 59% to 36%. This migration from private practice to hospital-based practice is sure to continue. Cardiology practice will soon be directed by managers representing corporate health care who will be intent on putting in place programs that establish protocol-driven therapy in the name of "quality" and "cost." Many of the changes will lead to better outcomes. Patient "satisfaction" will be measured by metrics already operational in the corporate environment. As the era of cardiology entrepreneurism meets institutional controls, such profit centers as imaging are already encountering significant obstacles. The downside of this process will be the death of medical care as we knew it. It was not all bad. That mode of physician-driven, patient-centered care will not be easily transferred into the corporate care environment. The physician will need to ensure that at least that vestige of old-style medical care is not entirely lost.
Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
Conflict between randomized and registry trials
A recent spate of observational or registry analyses has challenged conventional wisdom derived from randomized clinical trials (RCTs). Both RCTs and registries have inherent flaws, but both provide important information regarding drug efficacy in the search for "truth."
RCTs examine therapeutic effects in highly selected patient populations by focusing on one clinical entity, thereby excluding many patients with comorbidities that could influence or blunt the effect of the intervention. In a sense, RCTs do not represent the real-world expression of disease, since disease rarely exists in isolation.
Registry trials collect large numbers of patients with a particular diagnosis within a large database. They include unselected patients and examine the effect of therapy in one disease regardless of comorbidities. They are subject to both doctor and patient bias, they are confounded by comorbidities such as chronic renal and pulmonary disease, and, above all, they are not randomized. To use a contemporary analogy, RCTs are a rifle shot, whereas registries are more of a shotgun blast.
There have been two recent important targets for clinical research in heart failure. One is the search for better therapy for heart failure patients with preserved ejection fraction (HFPEF). The other is the search for drugs or devices that can provide added benefit to contemporary therapy for heart failure with reduced ejection fraction (HFREF).
The observation that many HFPEF patients develop heart failure despite current therapy with renin-angiotensin-aldosterone system (RAAS) antagonists and beta-blockers has led to a search for better therapy. RCTs with newer agents, including focused therapy with new RAAS antagonists, have failed to affect mortality in HFPEF (Lancet 2003;362:759-66). In contrast, a recent publication using the Swedish Heart Failure Registry (JAMA 2012;308:2108-17) found that patients treated with RAAS antagonists fared better than patients not taking them. The failure of the newer drugs to reach significance was attributed to flawed patient selection in RCTs, which led to lower-than-expected mortality rates and rendered the trials underpowered.
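The underpowering argument is easy to quantify. As an illustration only, with hypothetical rates rather than figures from any trial cited here, the sketch below shows how a control-arm mortality that comes in lower than the design assumption shrinks the detectable effect size and, at a fixed sample size, the statistical power (Python, using statsmodels):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical design numbers, for illustration only (not from any cited trial).
assumed_control_mortality = 0.15    # event rate assumed when the trial was sized
observed_control_mortality = 0.08   # lower rate actually seen in a selected cohort
relative_risk_reduction = 0.20      # treatment effect the trial hopes to detect
n_per_arm = 2000

power_calc = NormalIndPower()
for p_control in (assumed_control_mortality, observed_control_mortality):
    p_treated = p_control * (1 - relative_risk_reduction)
    effect = proportion_effectsize(p_control, p_treated)  # Cohen's h
    power = power_calc.solve_power(effect_size=effect, nobs1=n_per_arm, alpha=0.05)
    print(f"control mortality {p_control:.0%}: power = {power:.2f}")
```

The same relative risk reduction applied to a rarer event yields a smaller absolute difference, which is why enrolling unexpectedly healthy patients can doom a trial to a null result.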
Similar discordance was observed between RCT and registry data in patients with HFREF who were treated with an aldosterone antagonist (AA) in addition to contemporary RAAS-antagonist and beta-blocker therapy. Using the Medicare database (JAMA 2012;308:2097-107), the investigators failed to observe the mortality benefit of AA therapy that had previously been reported (N. Engl. J. Med. 1999;341:709-17). They did observe a decrease in rehospitalization for heart failure, accompanied by an increase in rehospitalization for hyperkalemia. The authors attributed the benefit reported in the RCT to its exclusion of older patients, diabetic patients, and those with renal impairment, all of whom were included in the registry analysis and reflect the real world of HFREF.
One registry study, an analysis from the National Cardiovascular Registry examining the benefit of ICDs in heart failure patients (JAMA 2013;309:55-62), did support the mortality benefit observed in the RCT (N. Engl. J. Med. 2002;346:877-83).
As RCTs have developed over the last half-century, they have changed from investigations of therapeutic concepts to assessments of the efficacy of new and, often, expensive drugs. Much of this research has been supported by the pharmaceutical and device industries, which favor narrowly focused trials out of concern that the "noise" generated by comorbidities could obscure the benefit of their product. As a result, RCTs have enrolled lower-risk, homogeneous patient populations that may not reflect real-world experience. Registry studies, for their part, suffer from a major source of bias: physicians' therapeutic choices determine who gets treated, and that selection can distort the observed outcome. Unfortunately, the search for "truth" in clinical research often remains out of our reach.
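To make that selection bias concrete, here is a minimal simulation, on entirely synthetic data unrelated to any registry cited above, in which a drug with no true effect looks protective simply because physicians give it to healthier patients; a propensity-score weighting step, one common registry adjustment, removes most of the distortion:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
severity = rng.normal(size=n)                       # disease severity score
# Confounding by indication: sicker patients are less likely to get the drug.
treated = rng.random(n) < 1 / (1 + np.exp(severity))
# The drug has NO true effect; death depends on severity alone.
death = rng.random(n) < 1 / (1 + np.exp(-(severity - 1)))

crude = death[treated].mean() - death[~treated].mean()

# Inverse-probability-of-treatment weighting via a propensity model.
X = severity.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
w = np.where(treated, 1 / ps, 1 / (1 - ps))
adjusted = (np.average(death[treated], weights=w[treated])
            - np.average(death[~treated], weights=w[~treated]))

print(f"crude risk difference:    {crude:+.3f}")     # drug appears protective
print(f"weighted risk difference: {adjusted:+.3f}")  # close to the true null
```

The catch, of course, is that real registries can adjust only for confounders they measure; here the severity score is fully observed, which is rarely true in practice.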
Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
The Heart Team
Although the concept of Heart Teams, as examined in Mitchel L. Zoler’s article, "Heart teams inch into routine cardiac practice," may seem novel to many, such collaborations were the norm at the dawn of cardiac surgery.
Beginning with the surgical approach to valvular and, later, coronary vascular surgery, the interaction between cardiac physiologists (as they were called then), coronary angiographers, and cardiac surgeons in deciding where and when to operate was often difficult and contentious. Cardiac surgery was a high-risk procedure, and the outcomes were uncertain. Over the last 50 years we have come a long way, and much of what we do is now almost commonplace: as frequently performed as a cholecystectomy or appendectomy, and with similar risks. Over time, we have become casual in our decision-making process. Both cardiologists and cardiac surgeons have staked out their own therapeutic parameters. Specialty society guidelines have provided important boundaries within which we can and should operate.
At the same time, we continue to push the envelope to identify therapeutic targets and technologies. We have developed complex interventional and surgical procedures and have applied them to older and sicker patient populations. New technology has opened avenues of therapy that we could not have imagined at the inception of interventional cardiology and cardiac surgery.
The advanced interventional surgical approach now requires even greater interaction among more specialized players in both cardiology and surgery. Although modern cardiology practice is built on everyday procedures that provide the platform on which we treat a variety of cardiac issues, most of which do not require ongoing group interactions, the new treatment options demand a more interactive and collegial environment. It is in this domain that the Heart Team has an important role and has found success. It was revived as a result of the development of transcatheter aortic valve implantation, which requires close cardiology-surgery interaction, and it has expanded as a team approach to treatment choices in the care of patients with structural heart disease.
Defining the boundaries of the new therapies raises important economic and professional challenges. The Heart Team as currently organized provides the framework for that discourse. To some, it will represent an inconvenience and an obstruction to their individual professional performance: The requirement to participate in a structured interaction is just one more barrier to the daily exercise of their skills. To others, it will provide an important process that improves performance: It is an opportunity to coordinate the different skills required for the advanced treatments and, more importantly, it represents a forum to educate not only the current participants but also the physicians, nurses, and technicians of the future. The discussion and planning of the surgical approach for a particular patient provide a dynamic review of the therapeutic options and of the important decisions about the appropriateness of the procedure. This interactive learning process is critical to the interdisciplinary training of all present and future players.
The growth of cardiovascular therapy has led to the construction of large stand-alone units or sections within hospitals identified as heart centers or institutes. The creation of these facilities provides the professional structure and financial environment to create the Heart Team and answer some of the issues raised in the article in this issue. Initially devised as a combination of marketing and professional associations, they now can provide the educational and scientific structure of the Heart Team.
Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
Hospital readmissions under attack
Readmissions after hospital discharge for acute myocardial infarction, heart failure, and pneumonia have now become major targets for proposed Medicare savings as part of the current budget tightening in Washington. Hospitals in the past have viewed readmissions either with disdain and disinterest or as a "cash cow."
Readmissions have been good business, as long as Medicare reimbursed hospitals for individual admissions no matter how long or short or how frequent. Readmissions are estimated to cost $17 billion annually. As Medicare costs continue to increase, the control of readmissions appears to be a good target for saving some money. As a result, Medicare levied a maximum reduction of 1% on payments last year on 307 of the nation’s hospitals that were deemed to have too many readmissions (New York Times, Nov. 26, 2012).
Admissions for AMI and heart failure are among the most frequent hospital admissions, and they generate frequent readmissions. Readmissions in cardiology have been an important outcome measure in clinical trials for the last half century. As mortality rates decreased over the years, rehospitalization became more important as clinicians recognized its place in the composite measure of the cost and benefit of new therapies. Two of the potential causes of readmission have been early discharge and the lack of postdischarge medical support. The urgency for early discharge for both heart failure and AMI has been driven largely by a misplaced emphasis on shorter hospital stays.
A recent international trial examined readmission rates as an outcome measure in patients who were treated with a percutaneous coronary intervention after an ST-elevation MI. According to that study, the readmission rate in the United States is almost twice that of European centers. Much of this difference was related to a hospital stay in the United States roughly half as long as that of the European centers: 3 vs. 8 days (JAMA 2012;307:66-74).
In the last few years there has actually been a speed contest in some cardiology quarters to see how quickly patients can be discharged after a STEMI. As a result, a "drive through" mentality for percutaneous coronary intervention and AMI treatment has developed. Some of this has been generated by hospital administration, but with full participation by cardiologists. There appears to be little or no benefit to the short stay other than on the hospital bottom line. It now appears that, in the future, the financial benefit of this expedited care will be challenged.
Heart failure admissions suffer from similarly expedited care. The duration of a hospital stay for heart failure decreased from 8.8 to 6.3 days between 1996 and 2006, and a similar international disparity exists as was observed with AMI. The rate of readmission within 30 days after discharge is estimated to be roughly 20%. Readmission within 30 days is not just an abstract statistic and an inconvenience to patients; it is associated with a 30-day mortality of 6.4%, which exceeds inpatient mortality (JAMA 2010;303:2141-7).
Many patients admitted with fluid overload leave the hospital on the same medication that they were taking prior to admission and at the same weight as at admission. Some of this is the result of undertreatment with diuretics, driven by misconceptions about serum creatinine levels, but in many situations patients may not even be weighed. Heart failure patients are often elderly, have significant concomitant disease, and require careful in-hospital modification of heart failure therapy. Many of these elderly patients also require the institution of medical and social support prior to discharge.
Inner-city and referral hospitals indicate that they are being unfairly penalized by the demographics and severity of their patient mix. Some of this pushback is warranted. The "one size fits all" approach by Medicare may well require modification in view of the variation in both medical and social complexity. Some form of staging by severity, together with an assessment of the need for outpatient nurse support, needs to be considered.
Hospitals, nevertheless, are scrambling to respond to the Medicare threat and have begun to apply resources and innovation to solve this pressing issue. Cardiologists themselves also can have an important impact on the problem. We all need to slow down and spend some time dealing with the long-term solutions to short-term problems like acute heart failure and AMI.
Dr. Goldstein writes the column, "Heart of the Matter," which appears regularly in Cardiology News, a Frontline Medical Communications publication. He is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
Beta-Blockers and Acute Myocardial Infarction
From the time that propranolol significantly lowered mortality after an acute myocardial infarction in the Beta-Blocker Heart Attack Trial in 1981, it took nearly 20 years for beta-blocker therapy to take hold as standard practice in AMI patients. Now, results of a recent observational study may cause many to question the established therapy.
The Nov. 6, 1981, issue of JAMA announced that the National Heart, Lung, and Blood Institute had "taken the unusual step of curtailing" the Beta-Blocker Heart Attack Trial (BHAT) on the basis of findings that treatment of patients with the beta-adrenergic blocking agent, propranolol, resulted in a 26% decrease in all-cause mortality and a 23% decrease in sudden death (JAMA 1982;247:1707-14).
The study included 3,837 patients treated within 5-21 days of an acute myocardial infarction (AMI) and randomized to either propranolol 160-240 mg/day or placebo. Two-thirds of the patients had an ST-elevation MI; the remaining patients had symptoms compatible with an AMI, with electrocardiographic changes accompanied by serum enzymatic elevations (serum glutamic oxaloacetic transaminase or creatine phosphokinase). This followed the report of similar results in Europe with the beta-blocker timolol in a similar group of patients. Since those early reports, randomized clinical trials, prompted in part by a subgroup analysis of BHAT, have confirmed the benefit of beta-blocker therapy for both ischemic and nonischemic systolic heart failure. As steering committee chair of BHAT, I was excited about the result of our study and anticipated that beta-blocker therapy would rapidly become part of the treatment of AMI.
This was not to be. It took almost 20 years before beta-blocker therapy was incorporated into the standard treatment of AMI patients. In the interval, thousands of patients who could have benefited from this therapy died. As late as 1998, fewer than 50% of AMI patients without a contraindication to therapy received that class of drug.
Why did it take so long? At the time of the BHAT results, many leading academic cardiologists were enamored of calcium entry blocking agents for AMI, for which there were little data but a lot of encouragement from pharmaceutical companies. When propranolol went off patent and became available as a generic, there was little industry support to publicize its benefit. Furthermore, there was little interest at the National Heart, Lung, and Blood Institute in educating physicians about the importance of BHAT. In 1996, beta-blocker therapy post-AMI was established as a quality standard by the National Committee for Quality Assurance (NCQA). At about the same time, it was incorporated into the American College of Cardiology guidelines. Not until 2000, 19 years after the initial report, did beta-blocker use at discharge reach 90%. A recent study from the NCQA indicated that 6 months after discharge only 71% of patients were still taking the medication.
In the intervening 2 decades, the definition of an AMI has changed dramatically as a result of more sensitive, if less specific, enzyme measurements. In 1981, most of the patients in BHAT had a STEMI, whereas in contemporary clinical trials less than one-third of patients do. Therapy certainly has changed: first with the use of thrombolytic therapy and subsequently with the widespread use of interventional angioplasty technology, particularly in the STEMI population. Aspirin, statins, and ACE inhibitors have also been added to the therapeutic mix.
Now, an observational study of almost 7,000 patients with a history of an AMI, enrolled in 2004 and followed for 43 months, suggests that beta-blocker therapy is no longer necessary. Using a composite end point of cardiovascular death, nonfatal MI, or stroke, patients receiving beta-blockers had an event rate of 16.9%, compared with 18.6% in those not taking a beta-blocker (hazard ratio, 0.90; P = .14) (JAMA 2012;308:1340-9). It should be noted, however, that 74% of the patients in the study had a history of hypertension, 44% angina, and 22% heart failure, all clinical problems for which beta-blockers have been proven effective.
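For perspective, a two-line calculation on the event rates quoted above (the figures are from the study; the arithmetic is merely illustrative) shows how small the absolute difference is:

```python
# Event rates quoted above, over roughly 43 months of follow-up.
rate_beta_blocker, rate_no_beta_blocker = 0.169, 0.186

arr = rate_no_beta_blocker - rate_beta_blocker  # absolute risk reduction
nnt = 1 / arr                                   # number needed to treat
print(f"ARR = {arr:.1%}, NNT ~ {nnt:.0f}")      # ARR = 1.7%, NNT ~ 59
```

A nonsignificant 1.7-percentage-point difference, in a nonrandomized cohort, is thin evidence on which to abandon an established therapy.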
The most recent American College of Cardiology/American Heart Association guidelines suggest that the benefit of beta-blocker therapy "is greatest among patients with recent myocardial infarction [of up to 3 years prior] and/or left ventricular systolic dysfunction [left ventricular ejection fraction of 40% or less]. For those patients without these class I indications, [beta]-blocker therapy is optional (class IIa or IIb)" (Circulation 2011;124:2458-73). I suppose that if you can find an AMI patient without hypertension, angina, or heart failure, discontinuing beta-blocker therapy could be justified. Until that rare patient appears in my office, I plan to maintain beta-blockers in my post-AMI patients.
Dr. Goldstein, medical editor of Cardiology News, is a professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
This column, "Heart of the Matter," appears regularly in Cardiology News.
The New Doctor's Office
The doctor’s office, at least my office, has changed over the last few decades, with ever more personnel added to make my life easier. Much of this has occurred in response to the increased billing and authentication process required for reimbursement.
After all, when doctors were paid in cash or with a dozen eggs, there was little need for all the paperwork. Health insurance, both private and federal, has been the cause of much of this. At the same time, medical assistants, registered nurses, and a variety of ancillary staff have been added to make the patient’s visit smoother and to acquire the requisite information to satisfy the vast network of communications generated with each office visit. All of these personnel are now an indisputable requirement for the functioning of today’s medical office.
In the process, the distance between the physician and the patient has increased. In many offices today, the patient may never see the doctor during the visit. To an increasing extent, the office contact with the patient is solely with an RN or physician assistant. In most cases, patients are satisfied with the service and are delighted not to spend a long time waiting to see the "doctor." Many of the visits are checkups – annual or semiannual visits without any associated symptoms – that can often be handled by a sympathetic and knowledgeable nurse. The patient is, to a great extent, the winner in this process, acquiring a sensitive ear and an expeditious visit. What is lost is the continuing relationship between patients and their physician. The biggest loss, I would suggest, is the doctor’s satisfaction in providing medical care that comes with every patient encounter – the satisfaction that keeps many of us energized to keep practicing medicine.
Now we have a new vision of how the primary care office of the future will function as a medical home (N. Engl. J. Med. 2012;367:891-3). In this vision, physicians will be energized by a global payment system that creates an environment in which the doctor’s role is to pass real responsibility to ancillary staff, who would be held accountable for it. According to the authors, the physician’s office will be committed to promoting a healthy environment rather than merely treating disease. Why bother with the simple issue of treating sick patients when you can take on the entire environment of your community to prevent disease?
The authors go on to state that the physician would not waste time focusing on the "10% premature mortality that is influenced by medical treatment." In this work environment, the physician would be the team manager of a host of ancillary personnel, including medical assistants, RNs, social workers, nutritionists, and pharmacists, to name but a few. The physician would be energized by his or her role as a team leader. The physician, the authors explain, would see fewer patients and would not be caught running from room to room. Instead, he or she would become involved with care of the "community and understanding the upstream determinants of downstream sickness" and would spend his or her time in the community "advocating for the local farmer’s market to accept food stamps, organizing walking clubs for physical exercise, and lobbying ... to reduce emissions to improve air quality."
This, of course, is a far cry from the doctors who negotiated the care of their patients for a dozen eggs. It is clearly a role that is foreign to my generation. To some extent, though, patients may well gain in this futuristic environment. They will acquire an empathetic nurse who will be sensitive to their needs and who may be as good as a crotchety, overworked doctor. All of the ancillary medical staff will gain a larger and more responsible role in the medical home. The physicians will morph into a new role that is more characteristic of an administrator and less of a practitioner. The doctors, however, will be the biggest losers as they disengage from the patient contact and care that is so crucial to the satisfaction of being a doctor.
Dr. Goldstein, medical editor of Cardiology News, is a professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
The Images Are Great, But Do They Help?
The advances in cardiac imaging that have taken place in the last few years have provided amazing visualization of cardiac function in health and disease. Imaging has also enabled us to target areas of the heart for medical and surgical intervention.
The images are so slick that we have been known to e-mail them to our patients to show them how clever we are. I am told that they have been used to liven up cocktail parties. In a larger sense, however, few new concepts have emerged as a result of these imaging advances that physiologists and anatomists have not already elegantly described in the past.
We have been obsessed with the possibility that imaging of the heart and the coronary vessels would unlock the mysteries of acute coronary events and provide predictive information about subsequent myocardial infarction. The advances in diagnostic testing – first the exercise electrocardiogram (with and without radionuclide imaging), followed by coronary angiography, and most recently CT coronary angiography – are only the latest attempts to identify the culprit in this long-running quest for the triggers of acute coronary events.
And yet, the answer eludes us. Even when we were able to image the atherosclerotic plaque itself, we found that new events occurred in seemingly normal vessels. So it is not surprising that the ROMICAT II (Rule Out Myocardial Infarction II) study – the most recent study evaluating emergency department patients with acute chest pain using CT angiography – failed to provide any new insight into the diagnosis and prediction of the acute coronary syndrome. Compared with standard evaluation, CT angiography failed to show any clinical benefit other than shortening the average stay in the ED by 7.6 hours (which is unquestionably a quality benefit if your emergency department is anything like mine).
ROMICAT II did show that coronary events were rare in this highly selected population: patients aged 40-74 years who had no history of coronary artery disease or ischemic electrocardiographic abnormalities and who had normal troponin assays. In the 28 days following the emergency evaluation, no acute coronary events were detected, and only eight adverse cardiac events were observed.
Because of the unlikely occurrence of coronary events, these patients can best be dealt with in a nonemergency setting. Both CT angiography and standard testing led to further tests during the 28-day follow-up, including exercise echocardiograms (with or without nuclear imaging) and coronary angiography in roughly three-fourths of the patients. Revascularization was performed in 10% of the population.
So why are we even testing these patients and exposing them to all of the exigencies of ED and hospital admission? We are clearly not providing any service to them. At the same time, we are exposing them to increased radiation and the hazard of the testing procedures themselves. Some would say that the testing was driven by the risks of malpractice litigation. This study should provide some "cover" for that concern, which is undoubtedly real.
The continuing dependence on imaging technology to solve clinical problems has led to the numbing of our ability to perform cognitive processing of clinical data. Heart failure is no longer a clinical entity; it is an echocardiography image. The acute coronary syndrome is not a clinical syndrome, but rather an acquired image or blood test. Daily ward rounds have evolved into a hierarchical listing of the next imaging test to be performed on the patient in order to solve the clinical problem at hand. Consequently, the approach to the patient is no longer a quest to understand what is probable, but a search for the improbable.
A continuous barrage of publications in the medical and lay press has addressed the dollars wasted on imaging procedures, with seemingly little letup in the use of these technologies. Clearly, in the "zero-sum game" world of modern medicine, these costs will ultimately come out of physicians’ income. Beyond that, we should realize that these procedures add very little to the care of our patients and may actually add to their risks.
Dr. Goldstein, medical editor of Cardiology News, is a professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
The Artificial Heart and LVADs
The permanently implantable cardiac pump was developed in the mid-20th century as an outgrowth of the success of heart-lung machines, which provided systemic support during the cardiac arrest required for valve replacement and, later, coronary bypass surgery.
The challenge to build a totally implantable heart has been the "holy grail" for almost half a century, and the effort began long before heart failure therapy was high on the agenda of cardiologists. It emerged from the experimental perseverance and genius of Dr. Willem Kolff, who in 1957 was able to totally support dogs using an artificial heart device. Other surgeons, working in separate laboratories – including Dr. Adrian Kantrowitz, Dr. Denton Cooley, and Dr. Robert Jarvik – provided additional research support for the ultimate creation of the artificial heart. However, it wasn’t until 25 years later, in 1982, that Dr. Kolff captured the attention of the medical and lay press by supporting Dr. Barney Clark, a Seattle dentist suffering from severe heart failure, for 112 days with the heart that he and Dr. Jarvik had developed.
Since then, research on the totally implantable heart led to the approval in 2004 of the SynCardia temporary Total Artificial Heart as a bridge to transplantation in patients with biventricular failure. Earlier, in 2001, the first AbioCor totally implantable pump with an external power source had been implanted. Initially approved by the FDA as a bridge to transplant for patients with biventricular failure, it has more recently been approved for patients with end-stage heart failure as destination therapy.
As work went forward on the totally implantable heart, left ventricular assist devices (LVADs) were also being developed. The pharmacologic support of end-stage left ventricular failure with vasodilators and inotropic agents has provided modest temporary benefit, but it has become obvious that we have reached a therapeutic wall, with very few new medical options on the horizon. LVADs appear to be our current best hope of providing additional short- and long-term support for the failing left ventricle.
Dr. E. Stanley Crawford and Dr. Domingo Liotta performed the first LVAD implant in 1966, in a patient who had cardiac arrest after surgery. Since then, a variety of LVADs have been developed – initially pulsatile, but now more commonly continuous flow. Both types of devices are externally powered via drive lines, can achieve flows of up to 10 L/min, and are interposed between a left ventricular apical conduit and an ascending aortic conduit. The initial LVADs were pulsatile devices, based on the presumption that pulsatile flow was important for systemic perfusion and normal physiology. However, continuous-flow LVADs have proven to be quite compatible with normal organ function and perfusion, and have shown better durability and lower mortality and morbidity than the pulsatile-flow devices (J. Am. Coll. Cardiol. 2011;57:1890-8).
In addition, as noted in "The Lead," LVADs have shown superiority over medical therapy as destination therapy in patients with advanced heart failure, and the 1-year mortality with continuous-flow LVADs now approximates the 1-year mortality of patients receiving a heart transplant.
The expanded use of LVADs, from bridge to transplantation to destination therapy, has opened an entirely new opportunity for their use in the treatment of acute and, most importantly, chronic heart failure. The limited availability of donor hearts, together with the limitations of medical therapy, has generated increased interest in LVADs as chronic therapy for patients with end-stage heart failure. The observation that in some patients – particularly those with reversible causes of heart failure, such as myocarditis – the heart may actually recover during LVAD support and allow for the device’s removal provides a window into future clinical applications (N. Engl. J. Med. 2006;355:1873-84).
The potential for further miniaturization of these devices and for total implantability also opens new horizons for LVAD therapy. Total implantability hinges on applying the transcutaneous power-transfer technology already available in a number of implantable electronic devices, including the total artificial heart. The resolution of these technical issues will allow further expansion of the clinical indications for LVAD therapy.
Dr. Goldstein, medical editor of Cardiology News, is a professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
What's the Dose?
Physicians struggle every day to pick the right drug dosage for the treatment and prevention of disease. For acute illnesses, efficacy is evident within hours or days. For the prevention of chronic disease, however, the outcome is uncertain at best. Therefore, we rely on randomized clinical trials to provide evidence that a specific drug and dosage are safe and effective.
Unfortunately, because of the limited average follow-up of 3-5 years, randomized clinical trials (RCTs) do not provide efficacy and safety information for lifetime therapy that is often advocated for the prevention of chronic disease.
For both the patient and physician, the side effects become the deciding factor. The physician usually chooses the smallest dose in order to avoid toxicity and presumably to achieve some benefit. The patient takes the drug irregularly at best.
As an example, consider the appropriate dosage for statin therapy for the prevention of atherosclerotic cardiovascular disease. Although numerous RCTs have defined the effective dose of a number of statins, recent trends in therapeutics have advocated that rather than using the dose that was used in RCTs, clinicians should increase the dose in order to reach a specific LDL cholesterol blood level.
Choosing the dosage of a drug in an RCT is a less-than-perfect exercise. Here’s how it usually goes:
Phase I trials – often based on pharmacokinetic data derived from animal studies – examine the physiological characteristics of the drug in healthy human volunteers in order to determine an effective and safe dosage prior to a phase II trial.
Phase II trials are larger; they usually examine the effect of several different dosages on a target population, and are focused not on physiological effects but on clinical outcomes and safety, in order to choose the best dosage for a phase III study. Because of their small size, these phase II studies are underpowered and prone to providing misleading dose choices.
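Why are small phase II studies so unreliable for outcome-based dose selection? A minimal sketch makes the point. The event rates and arm sizes below are hypothetical – the treated rate loosely echoes a BHAT-scale 26% relative risk reduction from a 10% control mortality – and the normal-approximation two-proportion test is a textbook simplification, not the method of any particular trial:

    # Illustrative sketch only: approximate power of a two-arm trial to detect a
    # difference in event rates, using a normal-approximation two-proportion z-test.
    # The rates and sample sizes are hypothetical, not taken from any actual trial.
    from statistics import NormalDist

    def approx_power(p_control, p_treated, n_per_arm, alpha=0.05):
        """Approximate power to detect p_control vs. p_treated (unpooled SE)."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
        effect = abs(p_control - p_treated)
        se = ((p_control * (1 - p_control)
               + p_treated * (1 - p_treated)) / n_per_arm) ** 0.5
        return NormalDist().cdf(effect / se - z_alpha)

    p_control, p_treated = 0.10, 0.074   # ~26% relative risk reduction
    for n in (150, 400, 1900):           # phase II-sized arms vs. a phase III-sized arm
        print(f"n per arm = {n:4d}: power = {approx_power(p_control, p_treated, n):.2f}")
    # Prints roughly 0.12, 0.26, and 0.81, respectively.

On these assumptions, arms of 150 or 400 patients have only about 12% and 26% power, respectively, while an arm of 1,900 – phase III territory – reaches the conventional 80%. The same arithmetic explains why outcome signals seen in phase II so often fail to survive phase III.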
Nevertheless, one or two doses are chosen to be used in the definitive phase III RCT, which includes enough patients to provide proof of benefit and safety of the drug based solely on its effect on mortality and morbidity.
Information is often collected regarding the physiological effects of the drug on, for example, LDL cholesterol (in the case of statins) or heart rate (in the case of beta-blocking drugs). The proof of benefit, however, is determined by clinical outcomes, not by physiological or "surrogate" measurements.
In the process of designing an RCT, we often make presumptions about mechanisms and will identify certain parameters that theoretically provide insight into the presumed benefit. However, many of the drugs we use have physiological effects that extend beyond the specific therapeutic target. We often remain ignorant about the mechanism by which drugs express their benefit long after their proof of benefit is demonstrated.
Statins, for instance, have a variety of pleiotropic effects. One of the most interesting is their ability to modulate inflammation, a process that is thought to be central to the progression of atherosclerotic disease. Although we presume that their effect is on LDL cholesterol, that presumption may be incorrect. Similarly, beta-blockers have well-known effects on heart rate and blood pressure, but their effect on modulating the up-regulated sympathetic nervous system in heart failure has presumed importance well beyond their effect on heart rate and blood pressure.
It is tempting to make presumptions about the effect of a drug intervention on the basis of surrogate measures like heart rate or LDL cholesterol effects, but their mechanisms of action on mortality and morbidity of disease may be unrelated to that measure.
RCTs have come a long way from relying on "surrogate" end points as the basis for making therapeutic decisions. More than 20 years ago, the CAST (Cardiac Arrhythmia Suppression Trial) was the watershed RCT that discredited the surrogate as a measure of therapeutic efficacy (J. Am. Coll. Cardiol. 1991;18:14-9). At a time when ventricular premature contraction (VPC) suppression was the "mantra" for preventing sudden death, CAST examined the pharmacologic suppression of VPCs in post-MI patients and found that, as the drugs decreased ventricular ectopy, mortality increased.
The use of the seemingly appropriate and obvious "surrogate" of LDL cholesterol lowering as a measure of therapeutic efficacy may be just as illusory. As enticing as surrogates are, the contemporary drive to lower LDL cholesterol may be as misdirected as the target to decrease the frequency of VPCs to prevent sudden death.
Like many things in life and science, things may not be what they seem.
Dr. Goldstein, the medical editor of Cardiology News, is a professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
Measuring Quality of Care
The measurement of quality of care has been the mantra of health care policy for the past decade, and has become as American as apple pie and Chevrolet. Yet there have been few data showing that the institution of quality of care guidelines has had any impact on mortality or morbidity.
Despite this lack of data, hospitals are being financially rewarded or penalized based on their ability to meet guidelines established by the Centers for Medicare & Medicaid Services (CMS) in conjunction with the American College of Cardiology (ACC) and the American Heart Association (AHA). Two recent reports provide insight into the progress we have achieved with guidelines in heart failure and in shortening the door-to-balloon (D2B) time for percutaneous coronary intervention (PCI) in ST-segment elevation MI (STEMI).
Decreasing heart failure readmission within 30 days, which occurs in approximately one-third of hospitalized patients, has become a target of the quality improvement process. A recent analysis of the Get With the Guidelines–Heart Failure registry indicates that there is a very poor correlation between the achievement of those standards and the 30-day mortality and readmission rates (Circulation 2011;124:712-9).
The guidelines include measurement of cardiac function, application of the usual heart failure medications, and discharge instructions. Data were collected on almost 20,000 patients in 153 hospitals during 2005. Adherence to these guidelines was quite good – it was achieved in more than 75% of the hospitals – yet it was unrelated to 30-day mortality or hospital readmission.
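A minimal sketch of that "poor correlation" finding, run on synthetic data – the adherence and readmission numbers below are simulated, not drawn from the registry: when adherence is uniformly high and readmission is driven by unmeasured case-mix factors, the hospital-level correlation sits near zero.

import random
import statistics

def pearson_r(xs, ys) -> float:
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

if __name__ == "__main__":
    random.seed(7)
    n_hospitals = 153  # matches the number of hospitals in the report
    # Assumed generative model: adherence is high everywhere, while readmission
    # varies with factors (case mix, resources) independent of adherence.
    adherence = [random.uniform(0.75, 1.0) for _ in range(n_hospitals)]
    readmission = [random.gauss(0.33, 0.05) for _ in range(n_hospitals)]
    print(f"hospital-level r = {pearson_r(adherence, readmission):+.2f}")

This does not show that the guidelines are worthless; it only shows that near-universal adherence leaves too little variation for adherence to explain differences in outcomes.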
The authors emphasized that the factors that affect survival and readmission are very heterogeneous. Basing pay-for-performance standards on a single measure (such as readmission rates) may penalize institutions that face impediments unrelated to performance measurements. Penalizing hospitals whose high readmission rates reflect large populations of vulnerable patients may punish institutions that actually need more resources in order to achieve better outcomes.
The effectiveness of PCI performed within 90 minutes of arrival in STEMI patients has been supported by clinical data from selected cardiac centers. The application of the less-than-90-minute D2B guideline to the larger patient population has been championed by the ACC, which launched the D2B Alliance in 2006, and by the AHA, which launched its Mission: Lifeline program in 2007.
The success of these efforts was reported in August (Circulation 2011;124:1038-45): in a selected group of CMS-reporting hospitals, D2B time decreased from 96 minutes in 2005 to 64 minutes in 2010. In addition, the percentage of patients with a D2B time of less than 90 minutes increased from 44% to 91%, and that of patients with a D2B time of less than 75 minutes rose from 27% to 70%. The success of this effort is to be applauded, but the report is striking for its absence of any information regarding the outcomes of the shortened D2B time. Unfortunately, there is little outcome information available, with the exception of data from Michigan on all Medicare providers in that state, which indicate that although D2B time decreased to below 90 minutes, there was no significant benefit.
Measurement of quality remains elusive, in spite of the good intentions of physicians and health planners to use a variety of seemingly beneficial criteria for its definition.
As consumers, we know that quality is not easy to measure. Most of us can compare the quality of American automobiles vs. their foreign competitors by "kicking the tires," that is, by doing a little research. But even with this knowledge, we are not always sure that the particular car we buy will be better or last longer. Health care faces the same problem. Establishing quality care measurements will require a great deal of further research before we can reward or penalize hospitals and physicians for their performance.
It is possible that in our zeal to measure what we can, we are confusing process with content. How to put a number on the performance that leads to quality remains uncertain using our current methodology.
Dr. Sidney Goldstein is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.