Does losartan (Cozaar) slow the progression of renal disease in patients with type 2 diabetes and nephropathy?
BACKGROUND: Interruption of the renin-angiotensin system with angiotensin-converting enzyme (ACE) inhibitors is renoprotective both in patients with type 1 diabetes and in patients without diabetes who have overt nephropathy. This study evaluates the effectiveness of losartan, an angiotensin-receptor blocker (ARB), in slowing the progression of nephropathy in type 2 diabetes.
POPULATION STUDIED: The Reduction of Endpoints in Non–insulin-dependent diabetes with the Angiotensin II Antagonist Losartan (RENAAL) study included 1513 people with type 2 diabetes and nephropathy, ranging in age from 31 to 70 years. Nephropathy was defined as urinary protein excretion of at least 500 mg daily and a serum creatinine of 1.3 to 3.0 mg per dL. Patients were excluded if they had a diagnosis of nondiabetic nephropathy; had experienced a recent myocardial infarction, transient ischemic attack, or stroke; had recently undergone coronary artery bypass grafting or percutaneous coronary angioplasty; or had ever had heart failure.
STUDY DESIGN AND VALIDITY: The RENAAL study was a double-blind randomized placebo-controlled trial in which patients were assigned to receive either losartan 50 to 100 mg or placebo. All patients received other antihypertensive therapy (excluding ACE inhibitors and ARBs) as necessary to maintain a blood pressure level of less than 140/90 mm Hg. The groups were similar at baseline, with a mean serum creatinine of 1.9 mg per dL (standard deviation = 0.5). The patients were followed up for a mean of 3.4 years, and an intention-to-treat analysis was reported. The study methods appeared valid, although concealment of allocation was not described.
OUTCOMES MEASURED: The primary outcome was a composite of doubling of the baseline serum creatinine concentration, end-stage renal disease, and death.
RESULTS: Treatment with losartan resulted in a 16% reduction in the primary composite end point (95% confidence interval [CI], 2%-28%; P = .02; number needed to treat [NNT] = 28). The risk of doubling of the serum creatinine concentration was reduced by 25% (95% CI, 8%-39%; P = .006; NNT = 23). The likelihood of reaching end-stage renal disease was reduced by 28% (95% CI, 11%-42%; P = .002; NNT = 17). Losartan also decreased the level of proteinuria by 35% (P < .001) and the rate of decline of renal function by 18% (P = .01). A 32% reduction in the rate of first hospitalization for heart failure was observed (P = .005). There was no difference between groups in the composite end point of cardiovascular morbidity and mortality, in adverse events, or in overall mortality.
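The relationship among the relative risk reduction, absolute risk reduction, and NNT figures quoted above can be sketched in a few lines. The control-group event rate is not stated in this summary, so back-calculating it from the reported NNT = 28 and 16% relative reduction is an illustration of the arithmetic, not a figure from the trial.

```python
# Sketch of the arithmetic linking relative risk reduction (RRR),
# absolute risk reduction (ARR), and number needed to treat (NNT).
# The control-group event rate is back-calculated from the reported
# NNT = 28 and RRR = 16%; it is illustrative, not a trial-reported value.

def nnt(control_event_rate: float, rrr: float) -> float:
    """NNT = 1 / ARR, where ARR = control event rate x RRR."""
    arr = control_event_rate * rrr
    return 1.0 / arr

def implied_control_rate(nnt_value: float, rrr: float) -> float:
    """Invert the relation: control event rate = 1 / (NNT x RRR)."""
    return 1.0 / (nnt_value * rrr)

rate = implied_control_rate(28, 0.16)   # ~22% over the 3.4-year follow-up
print(f"implied control event rate: {rate:.1%}")
print(f"round-trip NNT: {nnt(rate, 0.16):.0f}")
```

The same two functions reproduce the other quoted pairs (25% reduction with NNT = 23; 28% with NNT = 17), which is a quick consistency check on composite-endpoint reporting.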
Losartan showed significant renal benefits in patients with type 2 diabetes and nephropathy. Two other recent papers support this finding. In one, irbesartan (Avapro) protected against the progression of nephropathy in patients with type 2 diabetes compared with either amlodipine (Norvasc) or placebo.1 Treatment with irbesartan also reduced the rate of progression to overt nephropathy in hypertensive patients with type 2 diabetes and microalbuminuria.1
It is unknown whether ACE inhibitors provide the same degree of renoprotection as ARBs in patients with type 2 diabetes. However, ACE inhibitors slow the progression of nephropathy due to type 1 diabetes and have significant cardiovascular benefits for patients with type 2 diabetes and hypertension. Interestingly, the RENAAL study was stopped early because of a recently published analysis of the Heart Outcomes Prevention Evaluation (HOPE) study,2 which focused on the effects of an ACE inhibitor in patients with diabetes and mild renal insufficiency (serum creatinine = 1.4-2.3 mg/dL). That analysis showed that ramipril reduced a combined end point of cardiovascular death, myocardial infarction, or stroke (hazard ratio = 0.48; 95% CI, 0.26-0.86). Although ARBs are clearly renoprotective in patients with type 2 diabetes, the data do not yet provide a rationale for sacrificing the cardiovascular protection of an ACE inhibitor in this high-risk population. For now, an ACE inhibitor should be the first agent for patients with diabetes who have hypertension and renal disease, with ARBs reserved for those who cannot tolerate ACE inhibitors.
Do dietary restrictions reduce fecal occult blood testing adherence?
BACKGROUND: Population-based screening for fecal occult blood has been shown to reduce mortality from colorectal cancer. Unfortunately, low fecal occult blood testing (FOBT) participation rates limit the potential impact of this screening intervention. One reason that patients choose not to complete and return their FOBT cards may be that they have difficulty following the recommended pretesting dietary restrictions. Substances that patients are often instructed to avoid before FOBT include red meat, fresh fruits and vegetables, vitamin C, iron, and nonsteroidal anti-inflammatory drugs.
POPULATION STUDIED: Participants in the studies included American Association of Retired Persons members (n=3783), patients of 32 Canadian family physicians (n=5003), patients in a single British general practice (n=153), patients aged 40 to 74 years not otherwise specified (n=634), and Veterans Affairs hospital patients (n=786).
STUDY DESIGN AND VALIDITY: Five randomized controlled trials were included in the meta-analysis. These trials were identified by a structured MEDLINE search augmented by hand searching and by contacting experts. All 5 studies randomized participants to either a dietary-restriction or a no-dietary-restriction arm before FOBT and reported screening completion rates for each arm. The authors do not specifically mention searching for unpublished trials, an important consideration in meta-analytic studies. Published studies are more likely to have positive results, and this publication bias makes it more likely that the meta-analysis results will show a positive effect.
OUTCOMES MEASURED: The primary outcome was the difference in FOBT completion rates between the dietary-restriction arm and the no-dietary-restriction arm. As a secondary outcome, the investigators examined positivity rates, that is, the percentage of positive FOBT results among completed tests in each arm. Positivity rates are used here as a proxy for test specificity, which we would expect to be improved by dietary restriction.
RESULTS: When patients were counseled to avoid certain foods before obtaining a fecal sample, completion rates of FOBT across the studies ranged from 18.1% to 80.4%. This wide range of completion rates suggests heterogeneity in the interventions. It also suggests that factors other than dietary restriction may account for most of the difference in completion rates. Only one study showed a significant difference between the dietary restriction arm and the no dietary restriction arm. This study was the smallest of the 5 trials, and it had relatively complex dietary restrictions. There was no significant difference between positivity rates in any of the individual trials or in the pooled difference.
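The "pooled difference" mentioned above is typically computed by inverse-variance weighting of the per-trial risk differences. A minimal fixed-effect sketch of that method follows; the event counts are placeholders for illustration, not data from the five FOBT trials.

```python
import math

# Fixed-effect inverse-variance pooling of risk differences across trials.
# The (completers, n) pairs below are PLACEHOLDER numbers chosen only to
# illustrate the method; they are not the data from the five FOBT trials.

trials = [
    # (completers_restricted, n_restricted, completers_unrestricted, n_unrestricted)
    (300, 1000, 320, 1000),
    (150,  500, 160,  500),
    ( 60,  100,  55,  100),
]

num = den = 0.0
for a, n1, c, n2 in trials:
    p1, p2 = a / n1, c / n2
    rd = p1 - p2                                    # per-trial risk difference
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2   # variance of the RD
    w = 1.0 / var                                   # inverse-variance weight
    num += w * rd
    den += w

pooled = num / den
se = math.sqrt(1.0 / den)
print(f"pooled RD = {pooled:+.3f}, "
      f"95% CI ({pooled - 1.96 * se:+.3f}, {pooled + 1.96 * se:+.3f})")
```

With heterogeneity as wide as the 18.1% to 80.4% completion range reported here, a random-effects model would ordinarily be preferred; the fixed-effect version is shown only because it is the simpler building block.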
How common is peripheral arterial disease, and should primary care physicians be screening for it?
Patients at risk for atherosclerotic disease frequently have undiagnosed and asymptomatic PAD. Also, patients with unrecognized PAD are less intensively treated for hyperlipidemia and hypertension and less likely to be taking antiplatelet therapy than patients already diagnosed with PAD or CVD. This study does not provide evidence, however, that early detection of PAD will lead to behavioral changes on the part of either patients or physicians resulting in improved patient-oriented outcomes. Until further studies have been done that demonstrate improved outcomes as a result of early detection of PAD with the Doppler ABI, screening should not be routine.
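The Doppler ABI referred to above is the ankle-brachial index: the ratio of ankle to brachial systolic pressure, with values below about 0.90 commonly taken to indicate PAD. A minimal single-leg sketch, assuming the conventional higher-of-two-pressures rule and the 0.90 cutoff (protocols vary):

```python
# Minimal sketch of the ankle-brachial index (ABI) calculation.
# Convention: the higher of the two ankle pressures (dorsalis pedis,
# posterior tibial) is divided by the higher of the two brachial
# pressures. The 0.90 cutoff is a commonly used threshold, stated
# here as an assumption; protocols differ in cutoff and leg handling.

def abi(ankle_dp: float, ankle_pt: float,
        brachial_left: float, brachial_right: float) -> float:
    return max(ankle_dp, ankle_pt) / max(brachial_left, brachial_right)

def suggests_pad(abi_value: float, cutoff: float = 0.90) -> bool:
    return abi_value < cutoff

value = abi(ankle_dp=104, ankle_pt=98, brachial_left=128, brachial_right=132)
print(f"ABI = {value:.2f}, suggests PAD: {suggests_pad(value)}")
```

The example pressures are hypothetical; in practice each leg gets its own ABI, and the lower of the two values is used for classification.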
What is the diagnostic yield of a standardized sequential clinical evaluation of patients presenting to an emergency department with syncope?
BACKGROUND: Syncope is a very common complaint in primary care and is often very difficult to diagnose. Most previous studies have focused only on high-risk patients and on selected diagnostic tests.
POPULATION STUDIED: All patients were eligible for inclusion who were 18 years or older and who presented to the emergency department (ED) of a large primary and tertiary care teaching hospital with a chief complaint of syncope. Syncope was defined as a sudden transient loss of consciousness with an inability to maintain postural tone, with spontaneous recovery. Patients with a seizure disorder, vertigo, dizziness, coma, or shock were excluded. Of 788 eligible patients, 115 did not complete the standardized evaluation, and 23 refused to participate. The remaining 650 patients ranged in age from 18 to 93 years (mean age = 60 years) and represented both men and women equally.
STUDY DESIGN AND VALIDITY: Patients underwent a standard evaluation including a complete history and physical examination, laboratory evaluation (hematocrit, serum creatine kinase, and glucose), electrocardiogram (EKG), testing for orthostatic hypotension, and bilateral carotid massage unless contraindicated. If this approach did not lead to a diagnosis, a second series of tests was conducted: 24-hour Holter monitoring, ambulatory loop monitoring or electrophysiologic studies as guided by an abnormal EKG, or a tilt-table test to identify neurocardiogenic or orthostatic syncope. A committee of 2 internists and a cardiologist reviewed the findings of each case, and explicit and reproducible criteria were used to verify the etiology of the syncope. Some of the diagnoses relied on clinical judgment, since no gold standard reference was available.
OUTCOMES MEASURED: The primary outcome was the cause of syncope for each patient, as determined by the sequential evaluation. Follow-up information about mortality and recurrent syncope was obtained at three 6-month intervals from primary physicians, patients, or their families.
RESULTS: A diagnosis was made in 69% of patients following the initial round of examination. In this group, vasovagal disorders accounted for 53% of the diagnoses, along with orthostatic hypotension (35%), arrhythmia (5%), and other causes (5%). Targeted testing was performed in 67 patients, and the suspected diagnosis was confirmed in an additional 49 patients (8%). Extensive cardiovascular testing of 122 of the remaining 155 patients established a diagnosis in 30 of them through the use of Holter or ambulatory loop monitoring, tilt-table, or electrophysiologic testing. No etiology was found in 92 patients (14%). Overall mortality (9% over 18 months) and sudden death were more common among patients with cardiac causes of syncope compared with other causes of syncope.
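The quoted percentages can be reproduced from the patient counts in this summary (650 evaluated, 49 confirmed by targeted testing, 30 by extensive cardiovascular testing, 92 undiagnosed). Note that the stage counts do not reconcile exactly, so this is a check of the reported figures rather than a full audit.

```python
# Reproducing the quoted diagnostic-yield percentages from the counts
# given in the summary. The per-stage counts do not sum perfectly
# (some patients fall between stages), so these mirror the reported
# figures rather than auditing the trial's accounting.

total = 650
targeted_confirmed = 49    # diagnoses confirmed by targeted testing
extensive_confirmed = 30   # Holter/loop monitoring, tilt-table, or EP testing
undiagnosed = 92           # no etiology found

print(f"targeted yield:  {targeted_confirmed / total:.0%}")    # ~8%
print(f"extensive yield: {extensive_confirmed / total:.1%}")   # ~4.6%
print(f"no etiology:     {undiagnosed / total:.0%}")           # ~14%
```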
Sequential evaluation of patients with syncope is useful in identifying a cause in most cases in an unselected patient population. The initial work-up includes a complete history and physical examination, laboratory evaluation, EKG, testing for orthostatic hypotension, and bilateral carotid massage unless contraindicated. These diagnostic maneuvers led to a diagnosis in 69% of patients and suggested a cause that could be confirmed by selective diagnostic testing in an additional 8%. Undiagnosed patients require further cardiovascular evaluation. In the absence of abnormal EKG findings, other extensive cardiovascular testing has little yield. No diagnosis may be found for 14% of patients. Finally, patients evaluated in the ED most likely represent a different subsample of patients suffering from syncope than those seen in the office; therefore, the diagnostic yields may differ.
BACKGROUND: Syncope is a very common complaint in primary care and is often very difficult to diagnose. Most previous studies have focused only on high-risk patients and on selected diagnostic tests.
POPULATION STUDIED: All patients were eligible for inclusion who were 18 years or older and who presented to the emergency department (ED) of a large primary and tertiary care teaching hospital with a chief complaint of syncope. Syncope was defined as a sudden transient loss of consciousness with an inability to maintain postural tone, with spontaneous recovery. Patients with a seizure disorder, vertigo, dizziness, coma, or shock were excluded. Of 788 eligible patients, 115 did not complete the standardized evaluation, and 23 refused to participate. The remaining 650 patients ranged in age from 18 to 93 years (mean age = 60 years) and represented both men and women equally.
STUDY DESIGN AND VALIDITY: Patients underwent a standard evaluation including a complete history and physical examination, laboratory evaluation (hematocrit, serum creatine kinase and glucose), electrocardiogram (EKG), testing for orthostatic hypotension, and bilateral carotid massage unless contraindicated. If this approach did not lead to a diagnosis, a second series of tests was conducted: 24-hour Holter monitoring, ambulatory loop monitoring or electrophysiologic studies as guided by an abnormal EKG, or a tilt-table test to identify neurocardiogenic or orthostatic syncope. A committee of 2 internists and a cardiologist reviewed the findings of each case, and explicit and reproducible criteria were used to verify the etiology of the syncope. Some of the diagnoses relied on clinical judgment, since no gold standard reference was available.
OUTCOMES MEASURED: The cause of syncope for each patient based on the sequential evaluation. Follow-up information about mortality and recurrent syncope was obtained at 3 6-month intervals from primary physicians, patients, or their families.
RESULTS: A diagnosis was made in 69% of patients following the initial round of examination. In this group, vasovagal disorders accounted for 53% of the diagnoses, along with orthostatic hypotension (35%), arrhythmia (5%), and other causes (5%). Targeted testing was performed in 67 patients, and the suspected diagnosis was confirmed in an additional 49 patients (8%). Extensive cardiovascular testing of 122 of the remaining 155 patients established a diagnosis in 30 of them through the use of Holter or ambulatory loop monitoring, tilt-table, or electrophysiologic testing. No etiology was found in 92 patients (14%). Overall mortality (9% over 18 months) and sudden death were more common among patients with cardiac causes of syncope compared with other causes of syncope.
Sequential evaluation of patients with syncope is useful in identifying causes for most cases in an unselected patient population. The initial work-up includes a complete history and physical examination, laboratory evaluation, EKG, testing for orthostatic hypotension, and bilateral carotid massage unless contraindicated. These diagnostic maneuvers will lead to diagnosis in 69% of patients and suggests a cause that can be confirmed by selective diagnostic testing in an additional 8%. Undiagnosed patients require further cardiovascular evaluation. In the absence of abnormal EKG findings, other extensive cardiovascular testing has little yield. No diagnosis may be uncovered for 14% of patients. Finally, patients evaluated in the ED most likely represent a different subsample of patients suffering from syncope than those seen in the office; therefore, the diagnostic yields may be different.
BACKGROUND: Syncope is a very common complaint in primary care and is often very difficult to diagnose. Most previous studies have focused only on high-risk patients and on selected diagnostic tests.
POPULATION STUDIED: All patients were eligible for inclusion who were 18 years or older and who presented to the emergency department (ED) of a large primary and tertiary care teaching hospital with a chief complaint of syncope. Syncope was defined as a sudden transient loss of consciousness with an inability to maintain postural tone, with spontaneous recovery. Patients with a seizure disorder, vertigo, dizziness, coma, or shock were excluded. Of 788 eligible patients, 115 did not complete the standardized evaluation, and 23 refused to participate. The remaining 650 patients ranged in age from 18 to 93 years (mean age = 60 years) and represented both men and women equally.
STUDY DESIGN AND VALIDITY: Patients underwent a standard evaluation including a complete history and physical examination, laboratory evaluation (hematocrit, serum creatine kinase and glucose), electrocardiogram (EKG), testing for orthostatic hypotension, and bilateral carotid massage unless contraindicated. If this approach did not lead to a diagnosis, a second series of tests was conducted: 24-hour Holter monitoring, ambulatory loop monitoring or electrophysiologic studies as guided by an abnormal EKG, or a tilt-table test to identify neurocardiogenic or orthostatic syncope. A committee of 2 internists and a cardiologist reviewed the findings of each case, and explicit and reproducible criteria were used to verify the etiology of the syncope. Some of the diagnoses relied on clinical judgment, since no gold standard reference was available.
OUTCOMES MEASURED: The primary outcome was the cause of syncope for each patient, as established by the sequential evaluation. Follow-up information about mortality and recurrent syncope was obtained at three 6-month intervals from primary physicians, patients, or their families.
RESULTS: A diagnosis was made in 69% of patients following the initial round of examination. In this group, vasovagal disorders accounted for 53% of the diagnoses, along with orthostatic hypotension (35%), arrhythmia (5%), and other causes (5%). Targeted testing was performed in 67 patients, and the suspected diagnosis was confirmed in an additional 49 patients (8%). Extensive cardiovascular testing of 122 of the remaining 155 patients established a diagnosis in 30 of them through the use of Holter or ambulatory loop monitoring, tilt-table, or electrophysiologic testing. No etiology was found in 92 patients (14%). Overall mortality (9% over 18 months) and sudden death were more common among patients with cardiac causes of syncope compared with other causes of syncope.
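The reported yields can be rechecked from the raw counts in the abstract (650 evaluated patients, 49 diagnoses confirmed by targeted testing, 92 patients with no etiology); a minimal sketch:

```python
# Diagnostic yield of the sequential syncope evaluation,
# recomputed from the raw counts reported in the abstract.
evaluated = 650          # patients completing the standardized work-up
confirmed_targeted = 49  # suspected causes confirmed by selective testing
no_etiology = 92         # patients left without a diagnosis

targeted_pct = 100 * confirmed_targeted / evaluated
undiagnosed_pct = 100 * no_etiology / evaluated

print(f"selective testing adds ~{targeted_pct:.0f}%")  # ~8%
print(f"no etiology in ~{undiagnosed_pct:.0f}%")       # ~14%
```

The 8% and 14% figures quoted in the abstract are these two ratios rounded to whole percentages.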
Sequential evaluation of patients with syncope is useful in identifying causes for most cases in an unselected patient population. The initial work-up includes a complete history and physical examination, laboratory evaluation, EKG, testing for orthostatic hypotension, and bilateral carotid massage unless contraindicated. These diagnostic maneuvers lead to a diagnosis in 69% of patients and suggest a cause that can be confirmed by selective diagnostic testing in an additional 8%. Undiagnosed patients require further cardiovascular evaluation. In the absence of abnormal EKG findings, other extensive cardiovascular testing has little yield. No diagnosis may be found for 14% of patients. Finally, patients evaluated in the ED most likely represent a different subset of patients with syncope than those seen in the office; the diagnostic yields may therefore differ.
In low to moderate risk patients with chest pain, is a 6-hour protocol able to accurately rule out acute myocardial infarction (AMI)?
BACKGROUND: Many people come to physicians with acute chest pain, and it is difficult to sort out those who have potentially dangerous AMI and unstable angina from the many with more benign conditions that do not require hospital admission. Traditionally, this “rule out” process has required a hospital stay with numerous measurements of cardiac enzymes, often taking more than 24 hours, along with electrocardiogram (EKG) monitoring.
POPULATION STUDIED: The researchers of this study enrolled 383 consecutive patients presenting over a 12-month period to an inner-city emergency department in Manchester, England, with chest pain of low to moderate risk of AMI (the end prevalence of AMI in study participants was 18%). Patients were eligible if they were older than 25 years, had chest pain for less than 12 hours, had no history of trauma or other medical causes, no EKG evidence of AMI or ischemia, and no hypotension or arrhythmia.
STUDY DESIGN AND VALIDITY: This cohort study entered consecutive patients into a 6-hour rule-out protocol consisting of continuous 12-lead ST segment monitoring and serial measurement of creatine kinase myocardial band (CK-MB) mass. In patients with chest pain of less than 3 hours’ duration, CK-MB was measured 3 hours and 6 hours after the onset of pain and after 6 hours of monitoring. If chest pain had lasted 3 to 12 hours, CK-MB was measured on arrival and 3 hours later. Patients were ruled in if any CK-MB result was positive or if an important change in the ST segment occurred, and ruled out if no changes occurred. Patients with a positive result were admitted; discharged patients returned 2 days later for measurement of troponin T, which was used as the gold standard, with a level of 0.1 μg per L indicating myocardial damage. Patients were also asked to return to the clinic at 1 month for a history, examination, and EKG.
OUTCOMES MEASURED: The primary outcome was to determine the number of patients with an elevated 2-day troponin T level given a diagnosis of AMI. Patients also followed up at 1 month with examination and EKG, but it is unclear how many did so.
RESULTS: Of the 383 patients who began the study protocol, 368 completed the initial 6-hour assessment. Fifty-three were protocol positive, with either elevated CK-MB levels or ST segment changes; by the gold standard, 18 of these were false positives. Only 292 had the follow-up 2-day troponin T measured: 11 withdrew from follow-up and 65 others did not return for the blood test, leaving study results available for 76% of the original participants. Of the 239 people who had no initial CK-MB or ST segment changes, 238 had negative troponin T values, and one had a borderline increase in troponin T of 1.1 μg per L. The sensitivity of the diagnostic test was 97.2% (95% confidence interval [CI], 95%-99%), and specificity was 93% (95% CI, 90%-96%). At a prevalence of 18%, which may be higher than some primary care populations, the protocol will correctly rule out patients without MI 99.6% of the time and correctly rule in patients 66% of the time (positive likelihood ratio = 13.9; negative likelihood ratio = 0.03).
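The reported test characteristics follow from the 2×2 table implied by the abstract: 53 protocol-positive patients (18 of them false positives) and 239 protocol-negative patients (one with a raised troponin T). A quick check:

```python
# 2x2 table implied by the abstract: 53 protocol-positive (18 of them
# false positives) and 239 protocol-negative (1 with a raised troponin T).
tp, fp = 53 - 18, 18
fn, tn = 1, 238

sens = tp / (tp + fn)       # 35/36   ~ 97.2%
spec = tn / (tn + fp)       # 238/256 ~ 93.0%
ppv = tp / (tp + fp)        # 35/53   ~ 66%  ("rule in")
npv = tn / (tn + fn)        # 238/239 ~ 99.6% ("rule out")
lr_pos = sens / (1 - spec)  # ~13.8 (the abstract's 13.9 uses rounded inputs)
lr_neg = (1 - sens) / spec  # ~0.03

print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.0%} npv={npv:.1%}")
print(f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```

Note that the 99.6% "rule out" figure is the negative predictive value and the 66% "rule in" figure the positive predictive value, both at the study's 18% prevalence; at a lower primary care prevalence the predictive values would shift.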
In this study, an emergency department protocol using a 12-lead EKG and 2 to 3 CK-MB levels correctly identified 99.6% of the 18% of patients with chest pain who were later confirmed to have an AMI. This is promising information, but as family physicians we need to know not only whether patients have an AMI but also whether the chest pain indicates non–AMI coronary artery disease (eg, unstable angina) that will affect their lives. This kind of protocol can be used to rule out MI, but how to manage these patients is still a problem. Some chest pain units that use a similar 6-hour protocol add troponin T levels and some form of noninvasive testing, such as exercise stress testing, before discharge. In a low-risk population this too carries a risk of false-positive results. Replication of these results in other populations, with longer-term follow-up, would be useful before this protocol enters widespread use.
Does episiotomy increase perineal laceration length in primiparous women?
BACKGROUND: Episiotomy was initially used based on theoretical benefit, with little evidence supporting claims that it prevented severe perineal lacerations or pelvic floor dysfunction. As principles of evidence-based medicine have begun to influence obstetrical practice, the utility of routine episiotomy has been called into question. Several observational studies have suggested that episiotomy increases the risk of third- and fourth-degree lacerations. A recent Cochrane review of 6 randomized controlled clinical trials comparing routine versus restricted use of episiotomy showed that episiotomy was associated with more second-degree perineal trauma, without significant differences in dyspareunia, severe perineal trauma, or severe pain. Although all but one of the trials included in the review used mediolateral episiotomy, the one randomized trial conducted in North America (which used midline episiotomy) showed similar results. Despite these data, episiotomy remains a common practice performed in more than 40% of deliveries in the United States.
POPULATION STUDIED: The authors of this study enrolled 80 pregnant women at term who had not had previous vaginal deliveries. The 62 who went on to have vaginal deliveries were included in the analysis. The participants’ mean age was 26.3 years. The majority (92%) had prenatal care, and most (88%) had epidural analgesia during labor. Approximately one fourth of the women (28%) had forceps- or vacuum-assisted delivery. A few had malpresentations, with 6% in the occiput posterior position.
STUDY DESIGN AND VALIDITY: This small observational study looked at a range of variables hypothesized to be related to perineal laceration length, including maternal demographics, size of genital hiatus and perineal body, fetal size and presentation, duration of second stage of labor, level of experience of birth attendant, operative vaginal delivery, and episiotomy. After delivery, one of the study authors measured perineal laceration length, and for 10 patients 3 additional observers measured laceration length to assess inter-rater reliability. Observers were blinded to one another’s measurements but not to the other variables included in the analysis. The authors used logistic regression and Mann-Whitney U test to determine which variables were associated with laceration length.
OUTCOMES MEASURED: Perineal laceration length was the primary outcome measured in this study. The authors also assessed laceration severity. The study did not include variables relevant to quality of life, such as healing complications, severity of pain, duration of symptoms, dyspareunia, or incontinence.
RESULTS: Of the 62 patients in the final analysis, 76% had a perineal laceration, with a median length of 4 cm. Five patients (8%) had a third-degree laceration, and one patient (2%) had a fourth-degree laceration. Approximately half (44%) had an episiotomy. The mean laceration length was 3 cm longer for patients who had an episiotomy (4.9 cm vs 1.9 cm; P < .001). Patients who had a forceps- or vacuum-assisted delivery had a longer average length of laceration, but this association was not independent of episiotomy. When assisted deliveries were excluded from the analysis, the association between episiotomy and laceration length remained significant.
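The headline figures can be verified directly from the counts and means reported above; a trivial check:

```python
# Quick check of the episiotomy results reported in the abstract.
n = 62                       # patients in the final analysis
third_degree = 5             # third-degree lacerations
mean_cm_episiotomy = 4.9     # mean laceration length with episiotomy (cm)
mean_cm_no_episiotomy = 1.9  # mean laceration length without (cm)

print(round(100 * third_degree / n))                         # 8 (%)
print(round(mean_cm_episiotomy - mean_cm_no_episiotomy, 1))  # 3.0 (cm)
```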
This study provides weak evidence that episiotomy increases perineal laceration length in primiparous women. Earlier higher-quality trials provide strong evidence that episiotomy should not be performed routinely. Its use should be restricted to situations in which specific clinical indications exist. In some institutions episiotomy remains common practice despite data that have been available for more than a decade showing that it does not improve outcomes. This suggests the need for further educational interventions on how to attend deliveries in primiparous women without using episiotomy.
Is tolterodine (Detrol) or oxybutynin (Ditropan) better for the treatment of urge urinary incontinence?
BACKGROUND: Urge urinary incontinence has drawn attention recently, with a number of studies looking at which treatment provides the best results with the fewest side effects. The authors of this study performed a meta-analysis comparing treatment outcomes and side effects for short-acting oxybutynin and tolterodine.
POPULATION STUDIED: The trials included in this meta-analysis studied patients older than 18 years who complained of urge incontinence or a combination of frequency (> 8 times per day) and urgency, or who had received a diagnosis of detrusor instability. Patients who had used co-interventions within the 14 days preceding the trial were excluded. No further information was available on the populations studied, making it difficult to determine whether the patients were similar to those of a primary care practice.
STUDY DESIGN AND VALIDITY: The authors conducted a rigorous literature search, without language restriction, for published and unpublished randomized or quasirandomized double-blind studies comparing tolterodine with oxybutynin. At least one arm of each study needed to be randomized to tolterodine 1 to 2 mg twice daily and another arm to oxybutynin 2.5 to 5 mg 3 times daily. Two independent reviewers decided which trials would be included in the analysis according to a priori eligibility criteria.
OUTCOMES MEASURED: The primary outcomes included the number of incontinent episodes per 24-hour period, the quantity of pads used per 24 hours, the number of micturitions per 24 hours, and the mean voided volume per micturition. Secondary outcomes included the number of patients with side effects and withdrawals attributed to side effects, the number of patients changing dose, urologic measurements, and quality of life.
RESULTS: Oxybutynin produced a statistically and clinically significant decrease in the number of incontinent episodes per 24-hour period (weighted mean difference = 0.41; 95% confidence interval [CI], 0.04-0.77). Both drugs decreased the number of episodes, but the oxybutynin-treated group averaged 0.5 fewer episodes per day. Patients taking tolterodine reported significantly less dry mouth (relative risk [RR] = 0.54; 95% CI, 0.48-0.61) and less moderate to severe dry mouth (RR=0.33; 95% CI, 0.24-0.45). The risk of withdrawing from the study because of side effects was decreased by 37% in the tolterodine group (RR=0.63; 95% CI, 0.46-0.88).
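The "reduced by 37%" phrasing is the relative risk reduction implied by the reported relative risk; the same conversion applies to the dry-mouth comparisons. A minimal sketch:

```python
# Relative risk reduction (RRR) implied by a reported relative risk (RR):
# RRR = 1 - RR, expressed here as a whole percentage.
def rrr(rr: float) -> int:
    return round((1 - rr) * 100)

print(rrr(0.63))  # 37: withdrawal for side effects, tolterodine vs oxybutynin
print(rrr(0.54))  # 46: any dry mouth
print(rrr(0.33))  # 67: moderate-to-severe dry mouth
```

Note that RRR says nothing about absolute risk; the abstract does not report baseline withdrawal rates, so an absolute risk difference cannot be recovered from these figures alone.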
Oxybutynin is superior to tolterodine in efficacy, causing nearly one half fewer episodes of urinary incontinence per day. Tolterodine is better tolerated, with less moderate-to-severe dry mouth and fewer dropouts because of medication side effects. For now, oxybutynin should be the first-line choice, since it is available generically and is considerably less expensive (approximately $20 per month for oxybutynin vs $75 per month for tolterodine). Tolterodine or extended-release oxybutynin should be reserved for those who cannot tolerate immediate-release oxybutynin because of side effects.
BACKGROUND: Urge urinary incontinence has drawn attention recently, with a number of studies looking at which treatment provides the best results with the fewest side effects. The authors of this study performed a meta-analysis comparing treatment outcomes and side effects for short-acting oxybutynin and tolterodine.
POPULATION STUDIED: The trials included in this meta-analysis studied patients older than 18 years and who were complaining of urge incontinence or an association of frequency (> 8 times per day) and urgency, or had received a diagnosis of detrusor instability. Patients were excluded who had used co-interventions within the 14 days preceding the trial. No further information was available on the populations studied, making it difficult to determine if the patients were similar to those of a primary care practice.
STUDY DESIGN AND VALIDITY: The authors conducted a rigorous literature search without language constraint for published and unpublished studies that were randomized or quasirandomized and double blinded comparing tolterodine with oxybutynin. At least one arm of each study needed to be randomized to 1 to 2 mg tolterodine twice daily and the other arm to 2.5 to 5 mg of oxybutynin 3 times daily. Two independent reviewers decided which trials would be considered in the analysis according to priori eligibility criteria.
OUTCOMES MEASURED: The primary outcomes included the number of incontinent episodes per 24-hour period, the quantity of pads used per 24 hours, the number of micturitions per 24 hours, and the mean voided volume per micturition. Secondary outcomes included the number of patients with side effects and withdrawals attributed to side effects, the number of patients changing dose, urologic measurements, and quality of life.
RESULTS: Oxybutynin produced a statistically and clinically significant decrease in the number of incontinent episodes per 24-hour period (weighted mean difference = 0.41; 95% confidence interval [CI], 0.04-0.77). Both drugs decreased the number of episodes, but the oxybutynin-treated group averaged 0.5 fewer episodes per day. Patients taking tolterodine reported significantly less dry mouth (relative risk [RR] = 0.54; 95% CI, 0.48-0.61) and less moderate to severe dry mouth (RR=0.33; 95% CI, 0.24-0.45). The risk of withdrawing from the study because of side effects was decreased by 37% in the tolterodine group (RR=0.63; 95% CI, 0.46-0.88).
Oxybutynin is superior to tolterodine in efficacy, causing nearly one half fewer episodes of urinary incontinence per day. Tolterodine is better tolerated with less moderate-to-severe dry mouth and fewer dropouts because of medication side effects. For now, oxybutynin should be the first-line choice, since it is available generically and is considerably less expensive (approximately $20 per month for oxybutynin vs $75 per month for tolterodine). Tolterodine or extended-release oxybutynin should be used for those who cannot tolerate this medication because of side effects.
BACKGROUND: Urge urinary incontinence has drawn attention recently, with a number of studies looking at which treatment provides the best results with the fewest side effects. The authors of this study performed a meta-analysis comparing treatment outcomes and side effects for short-acting oxybutynin and tolterodine.
POPULATION STUDIED: The trials included in this meta-analysis studied patients older than 18 years who complained of urge incontinence or a combination of frequency (> 8 times per day) and urgency, or who had received a diagnosis of detrusor instability. Patients who had used co-interventions within the 14 days preceding a trial were excluded. No further information was available on the populations studied, making it difficult to determine whether the patients were similar to those of a primary care practice.
STUDY DESIGN AND VALIDITY: The authors conducted a rigorous literature search, without language restriction, for published and unpublished randomized or quasirandomized, double-blinded studies comparing tolterodine with oxybutynin. At least one arm of each study needed to be randomized to tolterodine 1 to 2 mg twice daily and another arm to oxybutynin 2.5 to 5 mg 3 times daily. Two independent reviewers decided which trials would be included in the analysis according to a priori eligibility criteria.
OUTCOMES MEASURED: The primary outcomes included the number of incontinent episodes per 24-hour period, the quantity of pads used per 24 hours, the number of micturitions per 24 hours, and the mean voided volume per micturition. Secondary outcomes included the number of patients with side effects and withdrawals attributed to side effects, the number of patients changing dose, urologic measurements, and quality of life.
RESULTS: Oxybutynin produced a greater decrease than tolterodine in the number of incontinent episodes per 24-hour period, a difference that was both statistically and clinically significant (weighted mean difference = 0.41; 95% confidence interval [CI], 0.04-0.77). Both drugs decreased the number of episodes, but the oxybutynin-treated group averaged 0.5 fewer episodes per day. Patients taking tolterodine reported significantly less dry mouth (relative risk [RR] = 0.54; 95% CI, 0.48-0.61) and less moderate to severe dry mouth (RR = 0.33; 95% CI, 0.24-0.45). The risk of withdrawing from the study because of side effects was 37% lower in the tolterodine group (RR = 0.63; 95% CI, 0.46-0.88).
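A relative risk such as the RR = 0.63 for withdrawal reported above translates into an absolute effect only once a control-group event rate is known. A minimal sketch of that arithmetic; the 10% baseline withdrawal rate below is purely hypothetical and is not taken from the study:

```python
def abs_effect(rr: float, baseline_rate: float) -> tuple[float, float]:
    """Convert a relative risk into an absolute risk reduction (ARR)
    and a number needed to treat (NNT), given a control-group rate."""
    treated_rate = rr * baseline_rate
    arr = baseline_rate - treated_rate
    nnt = 1 / arr
    return arr, nnt

# RR = 0.63 for withdrawal because of side effects (tolterodine vs oxybutynin);
# the 0.10 control-group rate is an illustrative assumption only.
arr, nnt = abs_effect(0.63, 0.10)
print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")  # ARR = 0.037, NNT = 27
```

The same relative risk yields a very different NNT at a different baseline rate, which is why relative and absolute measures should be read together.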
Oxybutynin is more effective than tolterodine, preventing about one half episode more of urinary incontinence per day. Tolterodine is better tolerated, with less moderate-to-severe dry mouth and fewer dropouts because of medication side effects. For now, oxybutynin should be the first-line choice, since it is available generically and is considerably less expensive (approximately $20 per month for oxybutynin vs $75 per month for tolterodine). Tolterodine or extended-release oxybutynin should be reserved for patients who cannot tolerate short-acting oxybutynin because of side effects.
Is breast self-examination an effective screening measure for breast cancer?
BACKGROUND: Medical professionals routinely teach women breast self-examination (BSE) as a screen for detecting breast cancer, yet there are conflicting recommendations regarding BSE from different professional organizations. One study has shown that only 7.6% of women with breast tumors who were practicing regular BSE actually detected the tumor by means of self-examination. Different studies have estimated the sensitivity of BSE as between 26% and 89% and the specificity between 66% and 81%. The US Preventive Services Task Force found insufficient evidence to recommend for or against teaching BSE. The purpose of this review was to evaluate the evidence relating to the effectiveness of BSE in preventing death from breast cancer and to make recommendations for the Canadian Task Force on Preventive Health Care.
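The sensitivity and specificity ranges quoted above can be turned into a positive predictive value (PPV) with Bayes' rule, which shows why a test with modest specificity performs poorly when disease prevalence is low. A sketch; the 1% prevalence is a hypothetical screening-population figure, not a number from the review:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(disease | positive test) = TP / (TP + FP)."""
    tp = sens * prev                  # true-positive probability mass
    fp = (1 - spec) * (1 - prev)     # false-positive probability mass
    return tp / (tp + fp)

# Sensitivity/specificity bounds are from the review; 1% prevalence is assumed.
lo = ppv(0.26, 0.66, 0.01)   # worst-case accuracy estimates
hi = ppv(0.89, 0.81, 0.01)   # best-case accuracy estimates
print(f"PPV ranges from {lo:.1%} to {hi:.1%}")  # PPV ranges from 0.8% to 4.5%
```

Under these assumptions, the large majority of positive self-examinations would be false alarms, consistent with the excess benign biopsies reported in the results below.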
POPULATION STUDIED: The studies’ populations included women between ages 31 and 64 years in one study and from 40 to 69 years in the other studies. These women were from multiple areas of the world, including Shanghai, Russia, the United Kingdom, Canada, the United States, and Finland.
STUDY DESIGN AND VALIDITY: This is a systematic review of articles found using an electronic database search of abstracts and reports published from 1966 to October 2000. The author identified 2 randomized controlled trials, 1 quasi-randomized controlled trial, 2 large cohort studies, and several case-control studies that evaluated the effects of BSE on breast cancer outcomes. The 2 randomized controlled trials and the quasi-randomized controlled trial were large studies enrolling more than 625,000 women with at least 5 years of follow-up and confirmation that women in the BSE group actually learned how to perform the maneuver. One of the cohort studies appeared to have a significant selection bias, rendering its results difficult to interpret. Since different designs were used in the review, the studies could not be combined using meta-analysis.
OUTCOMES MEASURED: The prevention of death resulting from breast cancer was viewed as the most important outcome. Other outcomes examined included the rate of benign biopsy results, the number of patient visits for breast complaints, the stage of cancer detected, and psychological benefits and harms.
RESULTS: Of the 8 studies included in this review, 7 studies, including the 2 randomized controlled trials and the quasi-randomized controlled trial, found no difference between groups taught BSE and the control groups with regard to rates of breast cancer diagnosis, breast cancer death, or in tumor stage or size. In the 2 randomized controlled trials and the quasi-randomized controlled trial, there were higher rates of benign biopsy results in the BSE groups, approximately 1 additional biopsy for every 200 women performing BSE. Women in the BSE groups also presented to the physician’s office more frequently for breast complaints.
This systematic review shows that BSE does not improve key health outcomes for women aged 40 to 70 years, and results in unnecessary biopsies, physician visits, and worry. This does not mean that women should ignore lumps that are detected incidentally, but that BSE teaching should be excluded from periodic health examination of women in this age group. Because all but one of the studies looked exclusively at women older than 40 years and younger than 70 years, no recommendations can be made for younger or older women. When asked by patients about BSE, the best approach is to relay the facts: (1) BSE has not been shown to improve breast cancer mortality; (2) BSE increases the number of physician visits for the evaluation of benign breast lesions; and (3) BSE increases the rate of benign biopsy results.
Is a 2-day course of oral dexamethasone more effective than 5 days of oral prednisone in improving symptoms and preventing relapse in children with acute asthma?
BACKGROUND: Dexamethasone, a long-acting corticosteroid successfully used in acute treatment of croup, may prevent more relapses than prednisone in asthmatic children.
POPULATION STUDIED: The authors studied children aged 2 to 18 years with known asthma (defined as 2 or more episodes of wheezing treated with β-agonists, with or without steroids) presenting to a children’s health hospital emergency department (ED) with an acute asthma exacerbation requiring more than 1 albuterol nebulizer treatment. Nursing staff assessed asthma severity based on either peak expired flow rates or a validated asthma severity scoring system. Children were excluded for recent oral corticosteroid treatment, history of intubation, recent varicella exposure, stridor, possible foreign body, and certain chronic diseases. During an 11-month period, 628 subjects enrolled, of whom 533 (85%) completed the study. Two thirds were male, 84% were black, and the average age was between 6 and 7 years. Fifty-six percent of the children were classified as having moderate asthma at presentation; the remainder were evenly distributed between mild and severe.
STUDY DESIGN AND VALIDITY: This controlled trial assigned children to receive oral prednisone (2 mg/kg, maximum 60 mg, n=261) on odd days and dexamethasone (0.6 mg/kg, maximum 16 mg, n=272) on even days. The first dose was given in the ED; the prednisone group was sent home with a prescription for 4 daily doses, and the dexamethasone group was given a prepackaged dose for the following day. Children who vomited 2 doses of steroid or were directly admitted to the hospital from the ED were dropped from the study. This was a quasirandomized study, in that children were placed on one drug on even days and the other steroid on odd days. As a result, the allocation to the specific treatment groups was not concealed. Although patient severity is unlikely to have varied systematically on even and odd days, a large potential exists for a bias to be introduced into this study. Nurses who believed one treatment was superior to another could have systematically altered enrollment of children into the study based on the treatment of that day. These 2 issues, the lack of true randomization and of allocation concealment, could invalidate the results of the study. The majority of subjects were black. Asthma prevalence, morbidity, and mortality are higher among black children, especially those in urban settings.1 There is also some evidence of physiologic predisposition in this population, namely, higher serum immunoglobulin E levels and increased airway responsiveness.2 However, no literature suggests that there is a difference in asthma treatment response between black children and children of other races or ethnicities.
OUTCOMES MEASURED: The primary outcome was rate of relapse within 10 days of discharge from the ED. Secondary outcomes were rate of hospitalization, frequency of vomiting, medication compliance, persistence of symptoms, and work or school days missed.
RESULTS: In the analysis of children who completed the study, the relapse rates were similar between the 2 groups, 7.4% in the dexamethasone group and 6.9% in the prednisone group (P = NS). Intention-to-treat analysis also found no difference between treatments. The number of admissions after relapse and the prevalence of persistent symptoms were also similar between the 2 groups. More children in the prednisone group missed 2 or more school days (P =.05), and more parents in this group reported not giving the medication at home (P =.004).
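The reported relapse rates can be turned into an absolute risk difference with a confidence interval using the normal (Wald) approximation. A sketch; the event counts below are back-calculated from the reported percentages and the assigned group sizes, so they are approximate:

```python
import math

def risk_diff_ci(e1: int, n1: int, e2: int, n2: int, z: float = 1.96):
    """Risk difference between two proportions with a Wald 95% CI."""
    p1, p2 = e1 / n1, e2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Approx. 7.4% of 272 (dexamethasone) vs 6.9% of 261 (prednisone):
# 20/272 = 7.4% and 18/261 = 6.9%, back-calculated event counts.
diff, lo, hi = risk_diff_ci(20, 272, 18, 261)
print(f"difference = {diff:+.1%}, 95% CI {lo:+.1%} to {hi:+.1%}")
```

Under these assumptions the interval spans zero by a wide margin, consistent with the report of no significant difference in relapse between the two steroids.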
For acute pediatric asthma, symptom improvement and relapse rate are similar whether our patients receive 2 doses of dexamethasone or 5 doses of oral prednisone. Given equal effectiveness, fewer school days missed, less vomiting, and fewer doses, dexamethasone may be preferable. However, we hesitate to make any recommendations for changes in practice based on this study, given the severe limitations in study design.
Does a change to long-acting antianginals provide better symptom control, treatment satisfaction, and quality of life?
BACKGROUND: Long-acting antianginal agents have theoretical advantages over short-acting options. They offer more consistent blood levels, which could result in fewer side effects, as well as more sustained control of symptoms. This randomized controlled trial addressed the effectiveness of a strategy of converting to long-acting antianginal medication.
POPULATION STUDIED: The investigators enrolled 100 male outpatients at a Veterans Administration (VA) Health Systems Clinic who had known coronary disease or angina and who were taking at least 2 antianginal drugs. Patients were identified by clinician referral or through a pharmacy database. Patients were excluded from the study if they had a recent hospitalization, aortic stenosis, an ejection fraction of less than 40%, significant conduction defects, or limited life expectancy. Most of the men (81%) were white, and the average age was 65 years. The men had several concomitant diseases: 39% had diabetes, 28% had congestive heart failure, and 68% had a prior myocardial infarction. These patients seem similar to the sicker patients in a typical family practice, but caution should be exercised in generalizing the results to women and to settings different from the VA.
STUDY DESIGN AND VALIDITY: The study was a single-blind, prospective randomized trial using concealed allocation. A nurse practitioner evaluated all patients weekly for 4 weeks and then monthly for 2 months. In the “once a day” group, all previous antianginal medications were withdrawn and then restarted one at a time in this order: long-acting diltiazem (up to 360 mg/day), nitroglycerin patches (up to 0.8 mg/hour), and atenolol (up to 100 mg/day). Doses were maximized prior to the addition of the next drug. In contrast, no drugs were stopped in “usual care”; instead, baseline medications were increased on the basis of symptoms to maximum doses following alphabetical order. For both groups, an algorithm was used to adjust medications. Data were analyzed according to intention-to-treat using t tests, with logistic regression to control for baseline frequency of angina. The methodology of this study was pragmatic and appropriate. Strengths included randomization, concealed allocation, and complete follow-up. A major limitation of the design was that the intervention was complex, involving frequent visits along with a specific care algorithm, complete withdrawal of medication at the beginning, and a specified order of medication reintroduction. The consequence is that it is difficult to determine which component of the intervention led to the results. Other important limitations include short duration (3 months); inattention to important confounding variables such as diabetes, congestive heart failure, or which specific drugs the patients were taking; lack of statistical power to assess confounding; and lack of attention to multiple comparisons.
OUTCOMES MEASURED: Primary outcomes were functional status, treatment satisfaction, and quality of life, measured by the Seattle Angina Questionnaire, a validated disease-specific measure for patients with coronary artery disease. The study did not address cost, side-effects, morbidity/mortality, or any long-term outcomes.
RESULTS: At the end of the trial, the once-a-day group had fewer symptoms than the usual care group (difference of averages: 12.3 points, P <.002; 5-8 points is considered clinically significant). Controlling for baseline frequency of angina did not change this result. There was no significant difference between groups in treatment satisfaction or quality of life. By the end of the trial, the patients in the once-a-day group were taking fewer medications (average 1.55 vs 2.14, P <.001).
This report provides good evidence that a strategy of withdrawing antianginal medications and replacing them with once-a-day medications reduces overall medications and provides improved symptom relief. Which medications to use, how to adjust them, and in which settings remain unclear, although these data suggest that switching to once-a-day medications is itself beneficial. Clinicians should not necessarily be locked into a traditional strategy of gradually increasing medications for patients with angina; rather, this trial shows that very different approaches, such as trials of withdrawal and substitution of different classes of medications, can be effective.
BACKGROUND: Long-acting anti-angina agents have theoretical advantages over short-acting options. They offer more consistent blood levels, which could result in fewer side effects, as well as more sustained control of symptoms. This randomized controlled trial addressed the effectiveness of a strategy of converting to long-acting antianginal medication.
POPULATION STUDIED: The investigators enrolled 100 male outpatients at a Veterans Administration (VA) Health Systems Clinic who had known coronary disease or angina and who were taking at least 2 antianginal drugs. They were identified by clinician or a pharmacy database. Patients were excluded from the study if they had a recent hospitalization, aortic stenosis, an ejection fraction of less than 40%, significant conduction defects, or limited life expectancy. Most of the men (81%) were white and the average age was 65 years. The men had several concomitant diseases: 39% had diabetes, 28% congestive heart failure, and 68% had a prior myocardial infarction. These patients seem similar to the sicker patients in a typical family practice, but caution should be exercised in generalizing the results to women and to settings different from the VA.
STUDY DESIGN AND VALIDITY: The study was a single-blind, prospective randomized trial using concealed allocation. A nurse practitioner evaluated all patients weekly for 4 weeks and then monthly for 2 months. In the “once a day” group, all previous antianginal medications were withdrawn and then restarted one at a time in this order: long-acting diltiazem (up to 360 mg/day), nitroglycerin patches (up to 0.8 mg/hour), and atenolol (up to 100 mg/day). Doses were maximized before the addition of the next drug. In contrast, no drugs were stopped in the “usual care” group; instead, baseline medications were increased on the basis of symptoms to maximum doses in alphabetical order. For both groups, an algorithm was used to adjust medications. Data were analyzed according to intention to treat using t tests, with logistic regression to control for baseline frequency of angina. The methodology of this study was pragmatic and appropriate. Strengths included randomization, concealed allocation, and complete follow-up. A major limitation of the design was that the intervention was complex, involving frequent visits along with a specific care algorithm, complete withdrawal of medication at the beginning, and a specified order of medication reintroduction. The consequence is that it is difficult to determine which component of the intervention led to the results. Other important limitations include the short duration (3 months); inattention to important confounding variables such as diabetes, congestive heart failure, or which specific drugs the patients were taking; lack of statistical power to assess confounding; and lack of attention to multiple comparisons.
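To illustrate the between-group comparison the investigators describe (a t test on end-of-trial symptom scores), here is a minimal sketch using hypothetical Seattle Angina Questionnaire scores; the values are invented for illustration and are not the trial's raw data, and a Welch's (unequal-variance) t statistic is computed by hand for self-containment:

```python
import math

def welch_t(a, b):
    """Return (difference of means, Welch's two-sample t statistic)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    # Standard error of the difference, allowing unequal variances
    se = math.sqrt(va / len(a) + vb / len(b))
    return ma - mb, (ma - mb) / se

# Hypothetical symptom scores (higher = fewer symptoms); illustrative only.
once_a_day = [72, 80, 68, 75, 83, 78, 70, 77]
usual_care = [60, 65, 58, 63, 70, 62, 59, 66]

diff, t = welch_t(once_a_day, usual_care)
print(f"difference of means = {diff:.1f} points, t = {t:.2f}")
```

In the actual analysis, logistic regression was additionally used to control for baseline frequency of angina; that adjustment is omitted here for brevity.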
OUTCOMES MEASURED: Primary outcomes were functional status, treatment satisfaction, and quality of life, measured by the Seattle Angina Questionnaire, a validated disease-specific measure for patients with coronary artery disease. The study did not address cost, side effects, morbidity or mortality, or any long-term outcomes.
RESULTS: At the end of the trial, the once-a-day group had fewer symptoms than the usual care group (difference of means, 12.3 points; P <.002; a difference of 5 to 8 points is considered clinically significant). Controlling for baseline frequency of angina did not change this result. There was no significant difference between groups in treatment satisfaction or quality of life. By the end of the trial, the patients in the once-a-day group were taking fewer medications (average, 1.55 vs 2.14; P <.001).
This report provides good evidence that a strategy of withdrawing antianginal medications and replacing them with once-a-day medications reduces the overall number of medications and improves symptom relief. Which medications to use, how to adjust them, and in which settings remain unclear, although these data suggest that switching to once-a-day medications is itself beneficial. Clinicians should not necessarily be locked into a traditional strategy of gradually increasing medications for patients with angina; rather, this trial shows that very different approaches, such as trials of withdrawal and substitution of different classes of medications, can be effective.