Atrial Fibrillation and ICDs
The confluence of atrial fibrillation and an implanted cardioverter defibrillator (ICD) for primary prevention of sudden death in patients with chronic heart failure is coming under increased scrutiny as a cause of inappropriate shocks. Current guidelines advise the use of an ICD or CRT-D (cardiac resynchronization therapy–defibrillator) in chronic heart failure patients with an ejection fraction of less than 35%.
It is well known that atrial fibrillation (AF) is a common occurrence in heart failure patients. AF is the most common reason for inappropriate ICD shocks, which comprise 20%–25% of all ICD discharges (J. Am. Coll. Cardiol. 2008;51:1357-65).
Until now, inappropriate shocks have been of little concern beyond the transitory discomfort they cause and their impact on the patient's quality of life. However, recent follow-up studies suggest that inappropriate shocks in heart failure are more frequent in patients who also have AF, and that such patients have an increased mortality risk.
In a study of AF in ICD patients, 85% of whom were implanted for primary prevention, 27% had either paroxysmal or chronic persistent AF at the time of implantation (J. Am. Coll. Cardiol. 2010;55:879-85). During the 3 years of follow-up, 4% developed new AF. Inappropriate shocks were twice as frequent in patients with AF as in those in normal sinus rhythm, and patients with persistent AF experienced a significant, twofold increase in mortality, compared with patients in sinus rhythm. In the Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT), which used a single-lead defibrillator, appropriate shocks occurred in 48% of patients, and inappropriate shocks comprised 32% of all defibrillator shocks. Patients receiving either appropriate shocks for ventricular tachyarrhythmias or inappropriate shocks had an increased risk of mortality (N. Engl. J. Med. 2008;359:1009-17).
Inexpert programming may be a major cause of inappropriate shocks, according to a retrospective analysis of 89,000 patients, focused on the physiologic causes of inappropriate shocks in patients with new-onset or chronic AF, that was presented by Dr. Bruce Wilkoff at the annual meeting of the Heart Rhythm Society. Dr. Wilkoff, of the Cleveland Clinic Foundation, suggests that inappropriate shocks are largely related to a low heart rate threshold for the discharge of a shock (more than 180 bpm), a detection zone that often encompasses the ventricular rate of AF and makes it unlikely that the ICD will discriminate that rhythm from slow ventricular tachycardia or fibrillation. As the device comes out of the box it is preprogrammed at 180 bpm, and according to Dr. Wilkoff the implanter rarely adjusts it; this is probably more likely when the device is implanted by a nonelectrophysiologist, as occurs in 25% of instances. He reported that in this retrospective study, increasing the rate threshold decreased the frequency of inappropriate shocks by 17%–28%, and he makes a plea to adjust the threshold of ICD discharge to more than 200 bpm at the time of implantation.
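To make the rate-cutoff argument concrete, the following is a minimal sketch, in Python, of detection treated as a pure rate comparison. It is an illustration only, not any vendor's actual detection algorithm, and the sample AF ventricular rates are assumed for the demonstration; real devices layer discriminators (onset, stability, morphology) on top of the rate zone.

```python
# Minimal sketch of rate-only ICD detection (illustrative, not device firmware).
# AF frequently conducts to the ventricles at roughly 160-200 bpm, so a low
# cutoff places much of that range inside the "shockable" zone.

def would_shock(ventricular_rate_bpm: int, cutoff_bpm: int) -> bool:
    """True if a rate-only detector would classify this rhythm as shockable."""
    return ventricular_rate_bpm > cutoff_bpm

af_rates = [160, 170, 185, 195]  # assumed ventricular rates during AF

for cutoff in (180, 200):
    shocked = [r for r in af_rates if would_shock(r, cutoff)]
    print(f"cutoff {cutoff} bpm -> inappropriate shocks at rates {shocked}")

# cutoff 180 bpm -> inappropriate shocks at rates [185, 195]
# cutoff 200 bpm -> inappropriate shocks at rates []
```

Raising the cutoff from 180 to 200 bpm simply moves the rapidly conducted AF rates out of the shockable zone, which is the intuition behind Dr. Wilkoff's plea.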
There is some suggestion that the single-chamber ICD, the most commonly implanted ICD for primary prevention in heart failure patients, is not as sensitive in recognizing supraventricular tachycardia as a dual-chamber ICD and is therefore more likely to deliver an inappropriate shock (Circulation 2007;115:9-16). The MADIT-RIT trial is currently recruiting patients in a randomized comparison, using a dual-chamber ICD, of standard programming with a higher rate cutoff, a longer delay before discharge, or both, measured by the frequency of inappropriate shocks. An additional objective of the trial is to examine whether these changes affect mortality and morbidity.
The cause of heart failure progression is uncertain and difficult to predict. It is possible that the development of AF is itself a marker of progression. On the other hand, it is possible that ICD discharge may lead to further tissue loss or to the generation of new or lethal arrhythmias in a previously compromised ventricle. Evidence for either possibility is lacking; MADIT-RIT may throw more light on the subject.
After years of effort to expand the number of patients receiving ICDs, device manufacturers are now turning their efforts to making the devices safer. Modification of triggering thresholds can go a long way to making the life of the implanted patient more comfortable. Whether these modifications will improve survival is yet to be seen.
Heart and Kidney Transplantation
The comorbidity of heart failure and kidney failure poses a therapeutic dilemma for both cardiologists and nephrologists and has become a more important problem in managing an aging population. Many of the drugs used to treat heart failure have adverse effects on renal function, and chronic heart failure patients poorly tolerate chronic dialysis.
The development of left ventricular assist devices (LVADs) has expanded the therapeutic options available for the treatment of advanced heart failure, but their use has resulted in many LVAD patients experiencing progressive renal failure. As a result, more patients are going on to combined LVAD and dialysis (LVAD-D) therapy and becoming candidates for combined heart and kidney transplantation (HKT). The creation of this new chronic cardiorenal population poses important logistic and societal challenges.
There is very little information available with which to estimate the benefit of chronic device support or HKT, but the poor outlook of patients with chronic left ventricular dysfunction and renal failure speaks to the need to consider the potential benefit of this class of therapy. Chronic renal dysfunction is a well-recognized comorbidity in heart failure patients, but until the development of LVADs, heart transplantation was an unlikely outcome for them. In a survey of almost 20,000 heart transplant recipients reported in the United Network for Organ Sharing (UNOS) database prior to December 2005, only 1.4% received both a heart and a kidney transplant (Arch. Surg. 2009;144:241-6), mainly because advanced renal disease has been an exclusion criterion for heart transplantation alone.
The wider application of LVADs as chronic destination therapy and as a bridge to transplantation has made combined LVAD-D, a hitherto ignored option, a reality for heart failure patients. In some cases this dual support is a planned therapeutic course. In others, it has become a matter of salvage, when renal failure occurs as a complication of LVAD implantation and necessitates acute and chronic dialysis.
According to the UNOS report, prior to 2005, that is, before the wider use of LVADs, 12% of patients receiving HKT were on an LVAD at the time of HKT and 56% were on chronic dialysis. The authors developed a risk score, driven largely by the presence of peripheral vascular disease, age, the use of renal dialysis, and the need for LVAD support. The 1-year survival rate in the 274 patients receiving HKT varied from 93% in the low-risk group to 62% in the high-risk group. The 1-year risk in the high-risk group was four times that of the low-risk group.
HKT can be performed simultaneously or sequentially. Small single-institution series of simultaneous HKT have reported an operative mortality of 21% with a 5-year survival of 66% (Am. J. Transpl. 2001;1:89-92).
The benefit of this dual approach to heart failure therapy must be compared with the benefit of each organ transplanted alone. Survival after heart-alone and kidney-alone transplantation now exceeds 10 years. Of concern, however, is the 3.8% annual mortality rate, a threefold increase since 1995, among heart transplant recipients waiting for a kidney transplant.
The relative paucity of both kidney and heart donors demands that the mortality of dual therapy be measured against that standard. But multiorgan transplantation deprives one needy patient of a precious organ and does little to expand the availability of organ transplantation to a larger population. Nevertheless, the comorbidity of heart failure and renal failure remains a major issue in the management of the chronic heart failure patient and will almost certainly lead to greater use of LVAD-D and HKT.
Changing Heart Failure Mortality
It has become increasingly evident that there has been a significant shift in the long-term mortality of patients admitted to the hospital with heart failure. Discharge after an acute event often leads to a revolving door that returns patients to the hospital with recurrent symptoms.
There is little question that current guideline-driven therapy with beta-blockers, ACE inhibitors, and aldosterone receptor blockers has had a significant impact on chronic heart failure. Yet there are almost 1 million first admissions and readmissions annually to U.S. hospitals with the primary diagnosis of heart failure, our nation's most common admitting diagnosis. Despite increased adherence to guideline therapy and inpatient educational efforts, heart failure specialists continue to face an unacceptable early mortality and a 60-day readmission rate of 35%.
A recent temporal trend analysis of outcomes of hospitalized heart failure patients between 1993 and 2006 provides both good and bad news. During that period, in-hospital mortality decreased from 8.5% to 4.3%. But the 30-day postdischarge mortality rate increased from 4.3% to 6.4%; mortality during the 30 days after discharge now exceeds in-hospital mortality. During the same period, the 30-day readmission rate increased from 17.2% to 20.1%. Associated with these outcomes, the authors point out that length of stay shortened significantly, from 8.8 days in 1993 to 6.3 days in 2006 (JAMA 2010;303:2141-7), a change driven largely by Medicare reimbursement rates. Their analysis raises important questions about the potential effect of shortened hospital stays on postdischarge events.
Several studies have examined inpatient care as it affects readmission rates. All of these studies have indicated increased compliance with guideline therapy. However, one rather striking observation has been the failure to achieve weight loss or adequate diuresis during hospitalization. The ADHERE registry points out that 53% of patients admitted with acute congestive heart failure, presumably with volume overload, lose less than 5 pounds, and 20% actually gain weight. It is true that some heart failure may be related to causes other than fluid accumulation, but for the vast majority of patients fluid accumulation is the primary precipitating event leading to acute heart failure. It is quite possible that the shortened hospital stay leads to premature discharge before adequate diuresis can be achieved. I have found it difficult if not impossible to obtain daily weights in the hospital, and I actually urge all of my patients to buy a scale and use it to adjust their diuretic program at home. A novel idea like this would probably not be allowed in the hospital.
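As an illustration of that home-scale strategy, here is a minimal Python sketch of a weight-based fluid alert. The 2-pound-overnight and 5-pound-per-week triggers are commonly quoted self-management rules of thumb, assumed here for illustration; they are not a protocol taken from this column or from the ADHERE registry.

```python
# Hedged sketch of daily-weight surveillance for fluid retention.
# Thresholds (2 lb/day, 5 lb/week) are assumed rules of thumb, not a protocol.

def fluid_alert(daily_weights_lb: list[float]) -> bool:
    """True if weight gain suggests fluid retention that warrants a call
    to the physician or an agreed-upon diuretic adjustment."""
    if len(daily_weights_lb) >= 2 and daily_weights_lb[-1] - daily_weights_lb[-2] > 2:
        return True  # gained more than 2 lb overnight
    if len(daily_weights_lb) >= 7 and daily_weights_lb[-1] - daily_weights_lb[-7] > 5:
        return True  # gained more than 5 lb over the past week
    return False

print(fluid_alert([182.0, 183.0, 186.5]))                    # True: 3.5 lb overnight
print(fluid_alert([180, 181, 181.5, 182, 183, 184, 185.5]))  # True: 5.5 lb in a week
```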
A subtle increase in fluid retention, associated with a rise in pulmonary artery pressure, has been observed to precede acute exacerbations of heart failure in a number of studies using implantable devices that continuously monitor pulmonary fluid volume and pulmonary artery pressure. Research has also been carried out using pulmonary impedance measurements in an attempt to measure pulmonary fluid continuously. Some of these sensors have been incorporated into pacemaker-defibrillator devices but have not yet been approved for clinical use. The CHAMPION trial, recently presented at the European Heart Failure Society meeting, reported that a totally implantable pulmonary artery pressure sensor in NYHA class III patients led to improved heart failure outcomes: in a small randomized study with 60 days of follow-up, symptomatic improvement and a reduction in the need for rehospitalization were observed.
The issue of early readmission and mortality after acute therapy remains a dilemma facing the hospitalists, internists, and family physicians who treat most of these patients. A careful reassessment of early discharge policies is in order. The mantra of expeditious hospital discharge may be implicated in the readmission and mortality outcomes. It is also possible that physicians are not aggressive enough with diuretic therapy, both in the hospital and after discharge. Whether an implantable pulmonary artery sensor will replace the bathroom scale remains to be seen. I have observed over time, much to my displeasure, that my bathroom scale never lies.
Lessons From Rosiglitazone
As the sun sets on the most recent chapter of the rosiglitazone saga, one may search for a “teaching moment” we can all profit from. Unfortunately, the rosiglitazone experience stands out as a unique example of how not to behave in clinical trials, and how not to introduce a new drug therapy and maintain confidence in its clinical benefit in the eyes of patients and physicians. What is of particular concern is that the story paints a dismal picture of all the players: industry, clinical scientists, and the federal government.
In the beginning, the Food and Drug Administration did not demand rigorous assessment of cardiovascular outcomes in the studies that led to approval of the thiazolidinediones (TZDs), even though the major expression of diabetes over time is heart disease. The rosiglitazone uproar that began in 2007 led the FDA to require such assessments in 2008, and that error has now been corrected. One could argue that this important change in the agency's approval process stands as the one positive outcome of the story.
In order to understand the cardiovascular outcomes of one of the TZDs, a meta-analysis, a research tool that is messy and imprecise at best, examined rosiglitazone using the clinical data on the drug's cardiovascular effects available at the time (N. Engl. J. Med. 2007;356:2457-71). Although carried out in the spirit of a search for clinical truth in drug therapy, it devolved into a confrontation between GlaxoSmithKline (GSK) and the authors of the meta-analysis when significant increases in adverse cardiovascular events were reported, leading to what appeared to many to be an overt attempt by GSK to cover up any negative information about the drug. Much of the “backstory” comes from a variety of sources in the press and from documents obtained as part of a Senate investigation, whose January 2010 report described a corporate environment at GSK bent on suppressing any information that could implicate the drug in adverse cardiovascular outcomes and thereby threaten its sales of more than $2 billion annually.
The publication of the rosiglitazone meta-analysis led to the premature unblinding of the RECORD (Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycemia in Diabetes) study, which was at the time not quite two-thirds of the way through its planned 6-year test of the cardiovascular effects of rosiglitazone against other, non-TZD diabetes drugs. For reasons that appeared to be related more to marketing pressures than to clinical knowledge, the medical leadership of the trial and the sponsor, GSK, agreed to publish the unplanned interim results of RECORD (N. Engl. J. Med. 2007;357:28-38). This seriously compromised the data analysis and the investigators' responsibilities to the patients in the study. Had RECORD been allowed to continue to its conclusion without the interim analysis, it could have provided the clinical data needed to answer some of the concerns about the drug. At the behest of the FDA, a new 16,000-patient study sponsored by GSK, Thiazolidinedione Intervention With Vitamin D Evaluation (TIDE), comparing rosiglitazone and pioglitazone with placebo, has begun. The presumption is that patients can still be recruited to this trial in light of the publicity generated by reports, in the lay and professional press, from the recent meeting of the FDA's Endocrinologic and Metabolic Drugs and Drug Safety and Risk Management Advisory Committees.
In that 2-day meeting, which left us exactly where we started, the lack of leadership from the FDA was astounding, with members of its staff providing contradictory information and opinions. Of the 33-member panel of cardiologists, endocrinologists, statisticians, and patient representatives, 12 voted to have rosiglitazone withdrawn, 17 voted to restrict the use of the drug with increased patient and physician warnings, 3 voted to leave its status unchanged, and 1 abstained. It seems to this observer that at times some of the committee members were more concerned about the welfare of GSK than about the safety of diabetes patients.
One could make a case for the continued use of rosiglitazone to treat diabetes if it were the only option, but with the plethora of other drugs available, it does not seem to be in patients' best interest to continue to use a drug that is potentially unsafe. Even more dubious is the argument for continuing the TIDE trial, which turns the drug approval process upside down and seems to be little more than a marketing effort. It is hard to imagine that patients will agree to participate in a randomized trial, given the coverage of the FDA advisory committee meeting and the potential risks of rosiglitazone expressed there.
At a time when the clinical trials industry (and it has become an industry) is at the threshold of testing new complex clinical strategies for previously untreated conditions like Alzheimer's disease, we should be able to manage our research with intelligence and a sense of the primacy of our responsibility to patient care. It is critical to create an environment in which both patients and physicians believe that we are living up to those goals. The rosiglitazone saga leaves us far short of meeting that challenge.
From Cottage Industry to Corporate Medicine
American medicine has been in transition since the mid-20th century and is about to change yet again into a new model.
It has shifted from a cottage industry composed of myriad private offices to a corporate model dominated by hospitals and the insurance industry and funded in large part by the federal government.
The cottage industry model had as its philosophic foundation the importance and preservation of the physician's financial and medical independence in dealing with patients. Over time, the transition from physician-owned private practice to multispecialty physician–owned clinics became a natural outgrowth of the complexity of modern medical care. The technological developments in cardiology made a close relationship between hospitals and cardiologists a clinical if not economic necessity.
Beginning in the mid-20th century, the American hospital changed from a place where the private physicians could treat pneumonia and remove gallbladders to the current destination of critically ill patients cared for by salaried hospital physicians. The growth of the American hospital can be traced to the huge expenditures by the federal government in the post–World War II years. Since then, hospitals have continued to grow and have become the dominant player in the medical structure of the community.
As health economics changed, however, the need and desire to control community medical practice patterns led to a variety of financial arrangements between hospitals and physicians, most of which linked practitioners more closely to the hospitals. The shift has accelerated as more young physicians, facing major training debt and reluctant to take on the paperwork required for health insurance compliance, see hospital-managed health care organizations as the route to a better lifestyle and a sounder financial approach to practicing medicine.
The insurance industry's involvement with health care started in 1933, when insurance companies began selling prepaid hospital plans. These were soon consolidated into Blue Cross, which provided Depression-era hospitals with needed income and stability. Physicians later reluctantly signed on to Blue Shield in 1944, with the proviso that they would control the plan. The next step was Medicare and Medicaid, created during the Johnson administration in 1965 amid angry protests from both the American Hospital Association and the American Medical Association (“The Social Transformation of American Medicine” by Paul Starr [Basic Books, 1982]).
And now we have President Obama's health care legislation, which portends a further evolution of the relationship between hospitals and practicing physicians, particularly cardiologists.
Even before the new health care legislation was passed, the balance between physician-owned and hospital-based practices had undergone major changes. Between 2005 and 2008, the proportion of practices that were physician owned decreased from 70% to less than 50%. The American College of Cardiology (ACC) estimates that private practice has shrunk by 50% in the last year as cardiologists migrated to hospital practices.
For the private cardiologists who own their own clinics, the recent decrease in Medicare reimbursement rates for imaging tests has been the death knell and has forced many to merge their practices with hospitals. The charges for cardiac imaging, which provided much of the financial support for private physician–owned offices, were the first target of health care planners aiming to cut costs by limiting the presumed overuse of outpatient testing. The resulting cuts reduced testing reimbursement by 27%–40% and accelerated the migration of cardiologists to hospitals.
Paradoxically, if there is no change in utilization, Medicare will end up paying twice as much for a nuclear or echo study as a result of the cost shift from doctor's office fee to hospital reimbursement, according to the ACC.
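The arithmetic behind that paradox is worth making explicit. The sketch below uses invented dollar figures—actual Medicare rates vary by test, year, and locality—only to show how a site-of-service shift can double the program's outlay even as the office fee is cut.

# Hypothetical illustration of the site-of-service cost shift described
# above. All dollar figures are invented; actual Medicare rates vary.
office_fee = 800.00            # assumed pre-cut office fee for an imaging study
cut = 0.35                     # midpoint of the reported 27%-40% reduction
office_fee_after = office_fee * (1 - cut)   # what the office now receives

hospital_fee = 1040.00         # assumed hospital outpatient rate, same study

ratio = hospital_fee / office_fee_after
print(f"Office fee after cut:    ${office_fee_after:,.2f}")
print(f"Hospital outpatient fee: ${hospital_fee:,.2f}")
print(f"With unchanged utilization, Medicare pays {ratio:.1f}x as much per study")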
The new model of health care, provided by hospitals and supported by the insurance industry, has now become dominant in many communities. Because of their size and ability to control local practice standards, these collaboratives tend to overwhelm their competition. In Massachusetts, where the new model is playing out, major conflicts have already occurred between hospital-based insurance plans such as Partners HealthCare System, perceived as high-cost providers, and low-cost plans. The Department of Justice is investigating Partners for possible anticompetitive behavior. Although this is not on the scale of Goldman Sachs' misadventures, it is worth noting as we move through the new health care paradigm.
Slowly but surely the American doctor is being incorporated into hospital-insurance alliances supported in a large part by the federal government and private insurers. This may not be all bad, and probably not news to most of you, but it is worth considering how we arrived at this moment in history.
Lipid Target Practice
Ever since Dr. Joseph L. Goldstein and Dr. Michael S. Brown established the foundation of the cholesterol hypothesis, the medical community has taken aim at lowering serum cholesterol in men and women throughout the world. The initial attempts to lower cholesterol with diet, exercise, and occasionally surgery met with only marginal success.
Not until the introduction of statin therapy to our therapeutic armamentarium did we achieve measurable success in both lowering cholesterol and an associated decrease in atherosclerotic cardiovascular disease mortality and morbidity.
Our success in lowering cholesterol has been measured by a number of international epidemiology studies, the first of which was performed in 1996-1997, the Lipid Treatment Assessment Project (L-TAP) (Arch. Intern. Med. 2000;160:459-67).
The most recent study, L-TAP 2, was an international survey of more than 10,000 patients in nine countries between 2006 and 2007 (Circulation 2009;120:28-34), and it catalogues the profound improvement in cholesterol lowering achieved during the last 10 years.
The result of L-TAP 2 points out the significant success that has been achieved during that period in lowering serum LDL cholesterol and raising HDL cholesterol. Successful cholesterol-lowering to the country-specific levels was achieved in 73% of all patients and 67% of high-risk patients. Most of the success was achieved in patients with low to moderate risk.
Comparable data in the earlier L-TAP study showed successful lowering in only 38% and 18%, respectively. Dr. Antonio Gotto, in an accompanying editorial, suggests this success was likely due to “the introduction of more effective lipid-lowering therapies” rather than improved patient compliance or physician awareness.
Unfortunately, the very-high-risk patients, those with coronary artery disease and at least two major risk factors, remain a serious problem. Successful lowering of serum cholesterol to the target of below 70 mg/dL was reached in only 30% of the very-high-risk patients. Because of the delayed introduction of the higher potency drugs atorvastatin and rosuvastatin, their use was limited to approximately one-half of the patients in L-TAP 2.
It is possible that the more widespread introduction of these drugs, or of even more potent drugs in the future, will result in further cholesterol lowering in the very-high-risk patients who remain undertreated and have yet to reach therapeutic goal. The strong association of hypertension, obesity, and diabetes in the high-risk group emphasizes the importance of a multidimensional therapeutic approach to this population.
The minimal target for cholesterol treatment is yet to be determined, but the Treating to New Targets (TNT) study, which compared high- and low-dose atorvastatin, indicated that treatment to an LDL cholesterol level of 77 mg/dL, compared with 101 mg/dL, was associated with a 22% reduction in the risk of a first major cardiovascular event (J. Am. Coll. Cardiol. 2006;48:1793-9).
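A relative reduction of that size is easier to judge in absolute terms. The worked example below assumes a baseline event rate for illustration only—the column quotes only the relative figure—and derives from it the absolute risk reduction and the number needed to treat.

# Converting a 22% relative risk reduction into absolute terms. The
# baseline (lower-dose arm) event rate is assumed for illustration; the
# text above reports only the relative reduction.
baseline_risk = 0.11                 # assumed event rate over follow-up
relative_risk_reduction = 0.22

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - treated_risk
nnt = 1 / absolute_risk_reduction    # patients treated to prevent one event

print(f"Treated-arm risk:        {treated_risk:.1%}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat:  {nnt:.0f}")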
The authors noted that their study was limited by uncertainty about the nature of the patients and participating physicians. This uncertainty carries an important message in light of our current health care debate. It can be presumed that the patients in L-TAP 2 are unique and hardly representative of this country's population as a whole. This is important to keep in mind as we search for better prevention of cardiovascular disease in America. It is fair to assume that the nearly 50 million Americans without health insurance would not have been among the patients included in L-TAP 2 and are probably outside of any cholesterol prevention program. Their cholesterol levels do not appear on the radar screen.
A recent study suggests that adherence to current cholesterol guidelines could prevent 20,000 myocardial infarctions and 10,000 deaths annually (Ann. Intern. Med. 2009;150:243-54).
Our ability to provide quality cardiovascular care is seriously limited by economic barriers to both acute and preventive care. To deal successfully with our national epidemic of cardiovascular disease, we must mitigate those barriers and improve access to health care for all Americans.
PCI and CABG: Use and Abuse
The difficulty in incorporating guidelines into clinical practice is nowhere more evident than in the decisions made based on coronary angiographic images. The controversy has raged from the minute Dr. F. Mason Sones Jr. first directly imaged the left coronary artery more than 40 years ago, and it has been compounded by the evolution of technological advances in both the angiographic laboratory and the operating room.
The coronary angiographers are the major players in determining which revascularization path to take—percutaneous coronary intervention (PCI) or coronary artery bypass graft surgery (CABG)—based on their diagnostic findings. They are forced to make the appropriate decision, based not only on the coronary anatomy, but also on the expertise of their surgical colleagues, the patient's choice and clinical status, and, in large part, the perceptions of their own clinical skills. More recently, their decisions are made under pressure from state and federal supervision, insurers, and their own hospital administrators who often have divergent attitudes toward clinical volumes and costs. Not an easy place to sit when all you wanted to do was to treat heart patients.
The recent publication of information from New York State's cardiac diagnostic catheterization database (Circulation 2010;121:267-75) provides some interesting insight into that decision-making process. The authors reported on 16,142 patients catheterized in 19 hospitals during 2005-2007. Catheterization laboratory cardiologists provided interventional recommendations for 10,333 (64%) of these patients. Study subjects ran the spectrum from asymptomatic angina to non–ST-elevation myocardial infarction. The angiographers' recommendations, based solely on angiographic findings, were compared with the ACC/AHA guidelines. Among the 1,337 patients who had indications for CABG, 712 (53%) were recommended for CABG and 455 (34%) for PCI. Among the 6,051 patients with indications for PCI, 5,660 (94%) were recommended for PCI. Of the 1,223 patients in whom no intervention was indicated, 261 (21%) received PCI and 70 (6%) underwent CABG. To no one's surprise, there was a strong bias in the direction of PCI.
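The counts quoted above are enough to re-derive the concordance figures, as the short tabulation below does; the raw numbers come straight from the cited report, and only the percentages are recomputed.

# Re-deriving the guideline-concordance percentages from the counts in the
# New York State report quoted above.
cabg_indicated, to_cabg, to_pci = 1337, 712, 455
pci_indicated, pci_recommended = 6051, 5660
none_indicated, got_pci, got_cabg = 1223, 261, 70

print(f"CABG indicated: {to_cabg/cabg_indicated:.0%} recommended for CABG, "
      f"{to_pci/cabg_indicated:.0%} steered to PCI instead")
print(f"PCI indicated:  {pci_recommended/pci_indicated:.0%} recommended for PCI")
print(f"No intervention indicated: {got_pci/none_indicated:.0%} received PCI, "
      f"{got_cabg/none_indicated:.0%} CABG anyway")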
In an excellent editorial accompanying the report, Dr. Raymond J. Gibbons of the Mayo Clinic in Rochester, Minn., thoughtfully placed these data in the milieu of the contemporary issues surrounding the use and abuse of coronary angiography and interventions (Circulation 2010;121:194-6). He noted the observed bias toward PCI in the analysis, which is to be expected since there is a “tendency for us to believe in what we do.” Considering the data in general, in the closely monitored environment of New York State, the evidence of abuse or overuse was limited to the 27% of patients who went on to PCI or CABG outside of the guidelines. In view of the fact that the analysis did not consider the medical history and concurrent therapy of patients, overuse of interventions appeared to be limited.
Of more concern to Gibbons and this editor is the question of regional variation in the use of both diagnostic angiography and vascular interventions. In New York State, the rate of PCI in different regional health care markets varied between 6.2 and 13.0 interventions per 1,000 Medicare beneficiaries. The New York State rate was similar to that in Rochester, Minn., and Cleveland. However, the highest regional PCI rate in New York State was lower than that in 69 of the 305 health care markets in the United States. Similar variation was observed in the use of CABG, where the highest rate in New York was less than half the rate observed in McAllen, Tex. These wide variations bespeak the potential for decision making that is well outside guideline recommendations. We have said in this column that these are “only” guidelines. However, it behooves all who stray that far outside the guideline recommendations to be certain of the appropriateness of their decisions.
Most of us are not as much under the microscope as our colleagues in New York State. But as the Centers for Medicare and Medicaid Services agency intrudes more into our practice, the microscope likely will be trained on all of us. Finding the best answers to clinical care is not easy. We all become driven by our own personal experiences, but it is helpful to temper our experiences with those of our colleagues.
Getting CME Back on Track
There was a time in the distant past—well, slightly less than a half a century ago—when academic physicians and medical schools took responsibility for the postgraduate education of their alumni and their community doctors. Faculty members were actually sent out to give talks and clinics—without pay. One of the benefits of this process was the communication between the medical center and its community of physicians. Although information was shared, the most important aspect of this interaction was providing a name and a face and a telephone number, so physicians could find help to solve the problems of their patients.
Along the way, something knocked this continuing medical education train off the rails: the pharmaceutical industry. Medical schools and teaching hospitals were quick to pass the responsibilities on to pharma in an atmosphere where the profit motives of both were intermingled. Since then, medical educators have been trying to get that train back on track after realizing the dubious nature of the relationship between industry and CME.
The pharmaceutical industry, under intense pressure from Congress, is pulling back its support for CME. Medical educators are trying to develop a new framework for the support of practicing physicians in an increasingly complex environment where instant education is critically needed. In some instances, industry is establishing open-ended grants to medical schools, such as the recent offer by Pfizer to Stanford University (New York Times, Jan. 11, 2010). Critics have rightfully voiced suspicion about this relationship.
Other institutions such as Harvard Medical School have come to realize that their cozy relationships with industry over the last half century may have compromised the medical message. Harvard no longer allows its faculty to give industry-supported lectures, and has limited the fees its faculty leaders may receive for a variety of services, including board membership (New York Times, Jan. 3, 2010). And not surprisingly, the Institute of Medicine is proposing the creation of a Continuing Professional Development Institute to ensure that the workforce is prepared to provide high-quality and safe care (search for “cpdi” at www.nap.edu).
Whatever happened to the idea that the teaching hospitals have a responsibility to provide CME support to their medical communities? This should be particularly important to state medical schools, which have a moral and administrative responsibility to provide an educational framework for physicians to meet their licensure requirements without depending on the pharmaceutical industry or the federal government. Medical systems that provide large parts of community care also have a responsibility to provide an educational structure that supports quality care. Instead of advertising on television, they should spend their money on supporting the needs of the community, and provide the much-needed link between the family doctor and the consultant, without using the emergency department as the conduit.
In the meantime, large gaps are opening in the CME structure as the pharmaceutical industry withdraws from the arena. Many physicians are turning to the Internet for information. The explosion of new technology and therapy in medicine calls for major changes in how we provide CME. Missing from many of the proposed CME changes are methodologies to strengthen communication between the consultant and the primary care doctor. We must meet the challenge if we are to translate medical research to the bedside and improve the quality of care.
End-of-Life Care: Its Cost in Heart Failure
End-of-life care is considered a factor in the explosion of American health care costs in the past decade, and decreasing its cost is one of the targets included in current health care legislation.
Expenses incurred for end-of-life care are part of the estimated $700 billion wasted in health care annually in the United States. Mitigating these costs can lead to a significant decrease in the cost of health care and insurance premiums.
Cost comparisons of large referral centers such as the Mayo Clinic with hospitals that provide front-line care in urban centers have provided examples of this excess. Health planners have reported that the costs of end-of-life care in referral centers are half those at other hospitals, but they have given little weight to the variation in socioeconomic environments in which health care is provided.
The examination of comparative data has emphasized the high costs of technology and an array of expensive consultants who are brought to the bedsides of terminally ill patients. Those studies have suggested that little patient benefit results from these futile and expensive efforts.
All of these end-of-life analyses have consistently used retrospective analysis of patients who have died, examining the cost of their care from hospital admission to death.
A recent analysis of six major teaching hospitals in California considered the issue from a different perspective by “looking forward” or prospectively from the time of admission at the costs and benefits of intensive medical care for patients identified as high risk (Circ. Cardiovasc. Qual. Outcomes 2009;2:548–57).
Researchers examined the relationship of in-hospital resource use to mortality over a 180-day period, comparing 3,999 patients hospitalized for heart failure and followed prospectively (“looking forward”) with 1,639 patients who died during the same period and were analyzed retrospectively (“looking backward”).
Patients in the two groups were risk adjusted to provide comparability of baseline characteristics.
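The distinction between the two designs is easiest to see in a sketch. The cohort construction below is a minimal illustration with invented records and field names; the cited study's risk adjustment is, of course, far more elaborate.

# A minimal sketch of "looking forward" vs. "looking backward" cohort
# construction. Admission records and fields are hypothetical.
from datetime import date

admissions = [
    # (patient_id, admit_date, death_date or None)
    (1, date(2006, 1, 10), date(2006, 3, 1)),   # died within 180 days
    (2, date(2006, 2, 5),  None),               # survived follow-up
    (3, date(2006, 4, 20), date(2007, 1, 15)),  # died, but after 180 days
]

FOLLOW_UP_DAYS = 180

def died_within(admit, death, days=FOLLOW_UP_DAYS):
    return death is not None and (death - admit).days <= days

# Looking forward: every admission enters the cohort at admission, and
# outcomes and resource use are measured from that point; survivors count.
forward_cohort = admissions
forward_deaths = [a for a in forward_cohort if died_within(a[1], a[2])]

# Looking backward: only decedents enter, and their care is costed from
# admission to death; the design conditions on having died.
backward_cohort = [a for a in admissions if a[2] is not None]

print(f"Forward cohort: {len(forward_cohort)} admissions, "
      f"{len(forward_deaths)} deaths within {FOLLOW_UP_DAYS} days")
print(f"Backward cohort: {len(backward_cohort)} decedents only")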
The investigators found that, in the prospective analysis of these teaching hospitals, increased resource utilization was associated with improved mortality outcomes and lower costs.
The number of days hospitalized also was significantly lower in the prospective survival analysis than in the retrospective analysis of the patients who had died.
There was considerable variation in resource use between hospitals, but among the hospitals studied, the institution with the highest cost had the best outcome. In-hospital mortality for the “looking forward” group ranged between 2.2% and 4.7%, and 180-day mortality ranged from 17% to 26%. These rates are very similar to previously reported registry data for heart failure admissions.
One might question whether heart failure patients should be used to examine end-of-life issues.
It is not easy for physicians to identify patients who are at high risk upon admission. Many patients who are admitted with severe heart failure improve dramatically with aggressive therapy, and most of them leave the hospital.
Nevertheless, within the population of severely ill heart failure patients there are individuals whose 180-day mortality is comparable with that of patients who have cancer. Indeed, it is clear that within the heart failure population there are high-mortality subgroups on which current therapy has had little impact and that are difficult to identify upon admission.
The pressure to establish methodologies to limit health care costs within the framework of new health care legislation requires a more sophisticated approach to the modulation of cost.
The analysis cited above emphasizes the complexity of the cost issues that go into choosing care pathways at the bedside. The emphasis on the cost differential between referral centers such as the Mayo Clinic and teaching hospitals that provide acute urban care based on fatal outcomes does not help in the resolution of the therapeutic decision in high-risk patients.
This new analysis raises important questions and provides a methodology that can expand our understanding of the complexities of end-of-life care and its costs. It can identify where efficiencies can be introduced to bring comfort to both our patients and our pocketbooks.
End-of-life care is considered a factor in the explosion of American health care costs in the past decade, and decreasing its cost is one of the targets included in current health care legislation.
Expenses incurred for end-of-life care are part of the estimated $700 billion wasted in health care annually in the United States. Mitigating these costs can lead to a significant decrease in the cost of health care and insurance premiums.
End-of-life care is considered a factor in the explosion of American health care costs in the past decade, and decreasing its cost is one of the targets included in current health care legislation.
Expenses incurred for end-of-life care are part of the estimated $700 billion wasted annually on health care in the United States. Mitigating these costs could lead to a significant decrease in both health care spending and insurance premiums.
Cost comparisons of large referral centers such as the Mayo Clinic with hospitals that provide front-line care in urban centers have provided examples of this excess. Health planners have reported that the costs of end-of-life care in referral centers are half those at other hospitals, while giving little weight to the variation in the socioeconomic environments in which that care is provided.
The examination of comparative data has emphasized the high costs of technology and an array of expensive consultants who are brought to the bedsides of terminally ill patients. Those studies have suggested that little patient benefit results from these futile and expensive efforts.
All of these end-of-life analyses have consistently used retrospective analysis of patients who have died, examining the cost of their care from hospital admission to death.
A recent analysis of six major teaching hospitals in California considered the issue from a different perspective by “looking forward” or prospectively from the time of admission at the costs and benefits of intensive medical care for patients identified as high risk (Circ. Cardiovasc. Qual. Outcomes 2009;2:548–57).
Researchers examined the relationship between in-hospital resource use and mortality over a 180-day period, comparing 3,999 patients hospitalized for heart failure ("looking forward," or prospectively) with 1,639 patients who died during the same period ("looking backward," or retrospectively).
The two groups were risk adjusted to ensure comparability of baseline characteristics.
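To see why the vantage point matters, consider a minimal, hypothetical simulation in Python; all costs and mortality rates below are invented for illustration and are not taken from the study. An analysis restricted to decedents cannot register any survival benefit by construction, whereas following all admissions forward can weigh spending against the survival it buys.

```python
import random

# Minimal, hypothetical simulation of the two accounting frames described
# above. All costs and mortality rates are invented for illustration.
random.seed(0)

patients = []
for _ in range(4000):
    high_intensity = random.random() < 0.5          # half receive aggressive care
    cost = random.gauss(60_000 if high_intensity else 35_000, 8_000)
    p_death = 0.18 if high_intensity else 0.26      # assume intensity lowers mortality
    died = random.random() < p_death
    patients.append((cost, died, high_intensity))

# "Looking backward": cost is tallied only among patients who died, so the
# denominator contains no survivors and no benefit can ever appear.
decedent_costs = [c for c, died, _ in patients if died]
print(f"backward (decedents only): mean cost "
      f"${sum(decedent_costs) / len(decedent_costs):,.0f}")

# "Looking forward": cost and mortality are tallied across all admissions.
for label, flag in (("high-intensity", True), ("usual care", False)):
    group = [(c, d) for c, d, hi in patients if hi == flag]
    mean_cost = sum(c for c, _ in group) / len(group)
    mortality = sum(d for _, d in group) / len(group)
    print(f"forward ({label}): mean cost ${mean_cost:,.0f}, "
          f"mortality {mortality:.0%}")
```

Under these invented numbers, the backward view simply reports what was spent on those who died; only the forward view shows that the costlier arm also had the lower mortality.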
The investigators found that, in the prospective analysis of these teaching hospitals, greater resource use was associated with lower mortality and lower costs.
The number of hospital days was also significantly lower in the prospective cohort than in the retrospective analysis of the patients who had died.
There was considerable variation in resource use between hospitals, but among the hospitals studied, the institution with the highest costs had the best outcomes. In-hospital mortality in the "looking forward" group ranged from 2.2% to 4.7%, and 180-day mortality ranged from 17% to 26%, rates very similar to previously reported registry data for heart failure admissions.
One might question whether heart failure patients should be used to examine end-of-life issues.
It is not easy for physicians to identify, at admission, which patients are at high risk. Many patients admitted with severe heart failure improve dramatically with aggressive therapy, and most of them leave the hospital.
Nevertheless, within the population of severely ill heart failure patients are individuals whose 180-day mortality approaches that of patients with cancer. It is clear that the heart failure population contains subgroups with very high mortality on which current therapy has had little impact and which are difficult to identify at admission.
The pressure to establish methodologies for limiting health care costs within the framework of new health care legislation demands a more sophisticated approach to cost containment.
The analysis cited above underscores the complexity of the cost considerations that go into choosing care pathways at the bedside. Emphasizing the cost differential, based on fatal outcomes alone, between referral centers such as the Mayo Clinic and teaching hospitals providing acute urban care does little to resolve therapeutic decisions in high-risk patients.
This new analysis raises important questions and provides a methodology that can expand our understanding of the complexities of end-of-life care and its costs. It can identify where efficiencies can be introduced to bring comfort to both our patients and our pocketbooks.
Comparative Effectiveness: Are We Ready?
The cardiology community, under the leadership of the American College of Cardiology and American Heart Association, has struggled for the last 2 decades with the task of creating appropriateness guidelines for the care of cardiac patients.
The rigorous, open process has struggled to provide a scientific foundation for guideline recommendations.
The task has taken on a new dimension with the health care reform now under consideration by Congress, which gives comparative effectiveness research (CER) a major role in establishing payment parameters for the appropriate use of drugs and devices within Medicare. The final construct of the CER process will have an immense impact on how we practice cardiology and will extend well beyond the use of guidelines in our clinical decision making.
It has been estimated that 30% of all medical spending has no discernible benefit. The bill for this useless care totals approximately $700 billion of an annual national health care expenditure exceeding $2 trillion.
To deal with this presumed “waste,” the federal government plans to use CER to gain more data to establish guidelines and recommendations about the efficacy of current therapy primarily in the Medicare population. Because of the size of Medicare, these changes will likely impact the entire insurance industry.
Some political conservatives would suggest that this will result in rationing of care. In fact, this is precisely what is intended. But of course we have had economically imposed rationing for some time.
Ensuring the most judicious use of resources measured by effectiveness and cost is certainly a worthwhile goal. Whether the medical community is now or will ever be ready to fill this role is open to considerable question.
Cardiology guidelines have had limited success. At best, they have provided marginal improvement in clinical care (Am. Heart J. 2009;158:546-53). Only 19% of current guideline recommendations are supported by evidence from randomized clinical trials.
Even when a clinical trial shows a positive benefit, its effect on clinical care is slow. Rarely is a single trial's demonstration of a drug's efficacy sufficient to change medical care substantially. The development of convincing data that will gain the approval of the Food and Drug Administration costs money and time.
Changes are even more difficult to achieve with the comparative trials that CER advocates propose. By their nature, trials comparing one therapy with another require large numbers of patients.
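The demand for large enrollments follows from simple arithmetic: required sample size grows with the inverse square of the difference to be detected, and head-to-head comparisons of two active therapies involve much smaller differences than drug-versus-placebo trials. A rough sketch using the standard two-proportion sample-size formula, with event rates invented purely for illustration:

```python
from math import ceil

# Standard two-arm sample-size estimate for comparing event rates.
# Event rates below are invented to make the point, not taken from any trial.
Z_ALPHA, Z_BETA = 1.96, 0.84   # two-sided alpha = 0.05, power = 80%

def n_per_arm(p1: float, p2: float) -> int:
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

# New drug vs. placebo: 20% vs. 15% event rate, a 5-point absolute gap.
print(n_per_arm(0.20, 0.15))   # roughly 900 patients per arm

# New drug vs. active comparator: 17% vs. 15%, a 2-point gap.
print(n_per_arm(0.17, 0.15))   # roughly 5,300 patients per arm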
One of the few comparative trials sponsored by the National Heart, Lung, and Blood Institute—the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial—compared antihypertensive drugs and was designed to provide the definitive answer to the question of which therapy is most effective. ALLHAT cost more than $100 million, yet its outcome had little effect on clinical care. Few comparative trials have been carried out since, because of the expense. Randomized trials sponsored by pharmaceutical companies are designed to test new drugs against older, accepted medications or placebo.
The current Congressional plan includes more than $1 billion for studies comparing drugs and devices to “save money and lives.”
It is proposed that a federal health board or a comparative effectiveness agency will be created to institute a process of compliance and implementation of its decisions. CER decisions could become de facto administrative decisions and could determine what care can and should be provided within Medicare. It is anticipated that in some instances, the agency will seek input from specialty societies rather than leaving the decision process to a group of experts in Washington.
How comparative effectiveness research will be carried out is now under review, and its ultimate impact on care remains uncertain.
It is certainly unrealistic to presume that we are close to providing the scientific answer to the appropriateness conundrum. Effectiveness is not easily defined, but we can usually recognize ineffectiveness when we see it. Bridging the gap between these two extremes is easier said than done.