How do calcium channel blockers compare with beta-blockers, diuretics, and angiotensin-converting enzyme inhibitors for hypertension?
ABSTRACT
BACKGROUND: Calcium channel blockers are used extensively in the treatment of hypertension. The authors systematically reviewed recent large, long-term trials that compared calcium channel blockers with beta-blockers or diuretics. A secondary analysis compared calcium channel blockers with angiotensin-converting enzyme (ACE) inhibitors in hypertensive patients with diabetes.
POPULATION STUDIED: The patients in this meta-analysis were pooled from 3 large European, multicenter studies (n = 21,611) that compared calcium channel blockers with diuretics or beta-blockers in elderly men and women with hypertension. A separate analysis included 3 smaller studies, bringing the total number of patients to 24,322. Most of these patients did not have active cardiovascular disease, such as coronary artery disease or left ventricular hypertrophy; approximately 25% smoked; and approximately 50% had hypercholesterolemia. Only 1318 patients were included in a separate analysis of calcium channel blockers and ACE inhibitors in patients with hypertension and diabetes.
STUDY DESIGN AND VALIDITY: This was a meta-analysis of several randomized, controlled studies, which were double-blinded or assessed by a committee blinded to treatment assignment. Patients were followed for at least 2 years. The studies evaluated patients for major cardiovascular events, including myocardial infarction (MI), stroke, heart failure, and death. In the 3 major trials, target blood pressures were < 140/90 mm Hg, < 160/95 mm Hg, and < 90 mm Hg diastolic, respectively.
OUTCOMES MEASURED: The outcomes measured were fatal and nonfatal MI and stroke, development of congestive heart failure, and cardiovascular and total mortality.
RESULTS: Calcium channel blockers were associated with fewer nonfatal strokes than diuretics or beta-blockers (relative risk [RR]=0.751; 95% confidence interval [CI], 0.653-0.864; absolute risk reduction [ARR]=0.9%; number needed to treat [NNT]=111). Fatal stroke rates were not different between the 2 groups (RR=0.918; 95% CI, 0.779-1.083). Also, there were fewer total strokes with calcium channel blockers (RR=0.869; 95% CI, 0.769-0.982; ARR=0.6%; NNT=167). Calcium channel blockers were associated with more nonfatal myocardial infarctions (RR=1.177; 95% CI, 1.011-1.370; absolute risk increase [ARI]=0.5%; number needed to harm [NNH]=200) and total myocardial infarctions (RR=1.182; 95% CI, 1.036-1.349; ARI=0.6%; NNH=167) compared with beta-blockers or diuretics. Rates of congestive heart failure, cardiovascular mortality, and total mortality were not different between the 2 groups.
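The NNT and NNH figures quoted above are the reciprocals of the corresponding absolute risk differences. As a quick check, this minimal Python sketch reproduces them from the percentages given in the RESULTS paragraph; rounding to the nearest whole patient is our convention, not the authors'.

```python
# NNT/NNH as the reciprocal of the absolute risk difference, using the ARR/ARI
# percentages quoted above. Rounding to the nearest whole patient is ours.

def number_needed(absolute_risk_difference):
    """Patients treated per event prevented (NNT) or caused (NNH)."""
    return round(1 / absolute_risk_difference)

print(number_needed(0.009))  # nonfatal stroke, ARR 0.9% -> NNT 111
print(number_needed(0.006))  # total stroke, ARR 0.6% -> NNT 167
print(number_needed(0.005))  # nonfatal MI, ARI 0.5% -> NNH 200
```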
Calcium channel blockers are associated with slightly fewer strokes and slightly more myocardial infarctions compared with beta-blockers or diuretics. No significant differences in total or cardiovascular mortality between the classes of medications were noted in this meta-analysis. These data support the notion that calcium channel blockers are as safe as, but no more effective than, conventional treatments for hypertension. In diabetic patients, an ACE inhibitor should be used before a calcium channel blocker. The Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) compared the calcium channel blocker amlodipine and the ACE inhibitor lisinopril with the diuretic chlorthalidone in 30,000 elderly patients with hypertension and 10,000 with comorbid diabetes. Results of ALLHAT should be available by fall 2002. Meanwhile, primarily because of high costs, calcium channel blockers should remain fourth-line agents in the treatment of hypertension, after diuretics, beta-blockers, and, particularly in diabetic patients, ACE inhibitors.
Does raloxifene affect risk of cardiovascular events in osteoporotic postmenopausal women?
ABSTRACT
BACKGROUND: Physicians seek new therapies to reduce the risk of fractures in osteoporotic post-menopausal women without increasing the risk of cardiovascular events or cancers. Raloxifene (Evista), a selective estrogen receptor modulator, may be a new choice for therapy. Using original data from the Multiple Outcomes of Raloxifene Evaluation, the authors present a secondary analysis of this randomized trial to evaluate its effect on cardiovascular events.
POPULATION STUDIED: The researchers enrolled 7705 women who were at least 2 years post-menopausal, from outpatient and community settings at 180 sites in 25 countries. Average age was similar by treatment group (overall mean = 67 years). All patients had osteoporosis documented by either prior vertebral fracture or a bone mineral density T score of less than -2.5. Study women were predominantly white (95%). Baseline characteristics of women were similar for most cardiovascular risk factors and concomitant cardiovascular medications, although women receiving raloxifene were significantly more likely to have diabetes.
STUDY DESIGN AND VALIDITY: The study was a double-blind randomized, controlled trial with concealed allocation assignment. It originally was designed to determine the effect of raloxifene on bone mineral density and vertebral fractures. Women were randomized to receive placebo, or 60 mg or 120 mg of raloxifene per day. This study was a secondary analysis of the data for cardiovascular outcomes. Risk scores, based on evidence of established coronary heart disease or the presence of cardiovascular risk factors, were assigned to a subgroup of women with increased cardiovascular risk.
OUTCOMES MEASURED: The authors collected cardiovascular event outcome data by asking women at each visit whether they had experienced a myocardial infarction, coronary artery bypass graft, percutaneous coronary intervention, or stroke since the previous visit. Unsolicited reports of cardiovascular events were also recorded.
RESULTS: The follow-up data were 90.8% complete for women who took placebo and 89.6% for those who took raloxifene at 1 year. There was no significant difference between combined treatment and placebo groups in the number of women with cardiovascular events during the first year of the trial. Nor was there a difference in the high-risk subset. The serious loss to follow-up for the 4 years of the study (25% of placebo and 22% of raloxifene women) makes the analysis unreliable for longer than the first year of study. We can use intention-to-treat analysis to assess the potential effect of missing cardiovascular events in those lost to follow-up.1 The resulting relative risk ranges from 0.11, for the extreme assumption that all missing women taking placebo suffered a cardiovascular event while those on raloxifene did not, to 6.89, for the opposite extreme that all missing women taking raloxifene suffered a cardiovascular event while those on placebo did not. The true relative risk lies somewhere between these boundaries. With so much data missing, we are unable to assess raloxifene’s effect on cardiovascular events in postmenopausal osteoporotic women in the longer term.
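The extreme bounds quoted above (relative risk 0.11 to 6.89) come from imputing cardiovascular events to everyone lost to follow-up under opposite worst-case assumptions. The minimal Python sketch below illustrates that calculation; the counts are invented placeholders, not the trial's data, so the printed bounds will differ from the published figures.

```python
# Illustration of the extreme-case (best-case/worst-case) sensitivity analysis
# for loss to follow-up. All counts are invented placeholders, not MORE trial data.

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl):
    return (events_tx / n_tx) / (events_ctrl / n_ctrl)

# placeholder observed data: (events, randomized, lost to follow-up)
events_ralox, n_ralox, lost_ralox = 50, 5000, 1100
events_plac, n_plac, lost_plac = 25, 2500, 625

# Worst case for raloxifene: every woman lost from the raloxifene arms had an
# event and none lost from placebo did.
rr_worst = relative_risk(events_ralox + lost_ralox, n_ralox, events_plac, n_plac)

# Best case for raloxifene: the opposite extreme.
rr_best = relative_risk(events_ralox, n_ralox, events_plac + lost_plac, n_plac)

print(f"RR bounds under extreme assumptions: {rr_best:.2f} to {rr_worst:.2f}")
```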
After 1 year of therapy, raloxifene did not increase the risk of cardiovascular events in older postmenopausal women with osteoporosis. Its effect on cardiovascular risk has not been assessed in women taking it for more than 1 year. Absence of a detrimental cardiovascular effect is a benefit, compared with estrogen replacement therapy, although both approaches prevent osteoporotic fractures. However, both of these hormonal approaches carry the same risk for thromboembolism. Raloxifene may cause or worsen hot flushes, whereas estrogen prevents them. Long-term compliance with either therapy is not good. Given the cost and risks of the bisphosphonates, the optimal approach to osteoporosis prevention and treatment is a difficult clinical decision.
Do disease-specific mortality effects correlate with all-cause mortality effects in cancer screening trials?
ABSTRACT
BACKGROUND: Cancer screening trials have traditionally focused on disease-specific mortality, the number of subjects whose death is attributed to the screened disease. This end point is generally easier to study than all-cause mortality (the overall death rate), because fewer subjects are needed to achieve a statistically significant result. However, this approach has many potential biases, and it neglects the possibility that screening may lead to potentially fatal complications. The authors of this study compared disease-specific mortality changes to all-cause mortality changes in a collection of cancer screening trials.
POPULATION STUDIED: This study examined 12 published randomized trials of cancer screening. Of 16 initial trials identified, the 12 chosen for study were those in which disease-specific and all-cause mortality could be determined. The 12 chosen studies included 7 of mammography, 3 of fecal occult blood testing, and 2 of chest x-rays for lung cancer.
STUDY DESIGN AND VALIDITY: The researchers used a list published in a text on cancer screening to identify randomized trials for inclusion in this study. Updated information from each of the trials was obtained by performing a PubMed search of authors’ names and other relevant terms. This was not an exhaustive, systematic review of the literature. A more extensive literature search would have used multiple databases, evidence-based search methods, and possibly unpublished data. Very little information is given on the search terms used in PubMed. However, since this was a comparison of different outcome measures rather than a meta-analysis, a systematic review is not necessarily required.
OUTCOMES MEASURED: For each study, the difference in mortality between screened and unscreened (control) groups was reported as the screening benefit. The screening benefits from both disease-specific mortality and all-cause mortality data were then compared in terms of number of deaths per 10,000 person-years of observation.
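To make this comparison metric concrete, the minimal Python sketch below computes a "screening benefit" as a difference in death rates per 10,000 person-years, once for the disease-specific endpoint and once for all-cause mortality. The counts are invented placeholders, not data from any of the 12 trials; they simply show how the two endpoints can point in opposite directions.

```python
# Illustration of the metric described above: mortality differences expressed as
# deaths per 10,000 person-years. Counts are invented placeholders, not data
# from any of the 12 trials.

def rate_per_10k_person_years(deaths, person_years):
    return deaths / person_years * 10_000

def screening_benefit(deaths_screened, py_screened, deaths_control, py_control):
    """Positive values favor screening (fewer deaths per 10,000 person-years)."""
    return (rate_per_10k_person_years(deaths_control, py_control)
            - rate_per_10k_person_years(deaths_screened, py_screened))

# A hypothetical trial where the two endpoints point in opposite directions:
print(screening_benefit(80, 100_000, 100, 100_000))    # disease-specific: +2.0
print(screening_benefit(1010, 100_000, 1000, 100_000)) # all-cause: -1.0
```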
RESULTS: One would expect that if a screening program decreased mortality related to the disease, overall mortality would be less as well. The authors found that this correlation did not occur in most of these studies. Five of the studies found that disease-related mortality and overall mortality went in different directions. Three of these 5 studies reported a statistically significant benefit in disease-specific mortality, but the all-cause mortality was either not affected or was worse. Two trials showed no benefit in disease-specific mortality but a trend in a positive or negative direction in all-cause mortality.
Although disease-specific mortality has been the standard for reporting mortality benefit in cancer screening, it does not necessarily correlate with significant benefits in all-cause mortality. In other words, some cancer screening may decrease deaths due to the screened disease, but patients still die at the same (or even higher) rate despite the screening. Inconsistent results are evident in trials studying mammography screening for breast cancer, fecal occult blood testing for colon cancer, and chest x-ray screening for lung cancer. When deciding whether a screening intervention is potentially beneficial, we may be misled by reports of disease-specific mortality.
Does use of oxytocin and dinoprostone inserts shorten labor more than use of oxytocin after removal of dinoprostone?
ABSTRACT
BACKGROUND: Simultaneous use of oxytocin and prostaglandin E2 preparations may offer a more efficient approach to labor induction by shortening the induction-to-delivery time. However, the manufacturer of sustained-release dinoprostone inserts warns against concurrent use with oxytocin, since the risks of uterine hyperactivity and complications are unknown. This study compared the use of oxytocin immediately after placement of a sustained-release dinoprostone insert with delayed use of oxytocin after removal of dinoprostone.
POPULATION STUDIED: The study included 71 women who presented to the University of New Mexico Health Sciences Center with indications for labor induction, singleton gestations with cephalic presentation, intact membranes, reactive nonstress tests, no previous uterine surgery, and unfavorable cervices (Bishop score ≤ 6). These patients are similar to those encountered in a primary care setting. Women with vaginal bleeding, more than 2 contractions in 10 minutes, asthma, known hypersensitivity to prostaglandins, or conditions that would contraindicate the induction of labor were excluded.
STUDY DESIGN AND VALIDITY: Women were randomly assigned (concealed allocation assignment) to a low-dose oxytocin infusion (2 mU/min, increased by 2 mU/min every 20 minutes to a maximum dose of 36 mU/min) started either 10 minutes after placement of a 10-mg sustained-release dinoprostone insert (immediate group) or 30 minutes after removal of the insert (delayed group). Inserts were left in place for 12 hours if possible. The exact time of dinoprostone insert placement into the posterior fornix was recorded. Evaluation of the cervix and Bishop scoring were performed before placement and immediately after removal of the insert. Two investigators blinded to group assignment monitored tracings of contractions.
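As an aside, the titration arithmetic in the protocol above (start at 2 mU/min, add 2 mU/min every 20 minutes, cap at 36 mU/min) can be sketched as follows. This is purely illustrative of the schedule described, not dosing guidance.

```python
# Sketch of the oxytocin titration schedule described above: start at 2 mU/min,
# add 2 mU/min every 20 minutes, cap at 36 mU/min. Illustrative only.

def oxytocin_dose(minutes_since_start, start=2, step=2, interval=20, max_dose=36):
    increments = minutes_since_start // interval
    return min(start + step * increments, max_dose)

print(oxytocin_dose(0))    # 2 mU/min at the start
print(oxytocin_dose(60))   # 8 mU/min after 1 hour
print(oxytocin_dose(340))  # 36 mU/min: the cap is reached about 5.7 hours in
```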
OUTCOMES MEASURED: The primary outcome measured was the time from induction to delivery. Secondary outcomes included changes in cervical score at 12 hours, frequency of deliveries within 24 hours, incidence of uterine hyperstimulation, rate of cesarean deliveries, and maternal and neonatal complications.
RESULTS: The mean induction-to-delivery time was 972 minutes in the immediate group versus 1516 minutes in the delayed group (P = .001). The change in Bishop score at the time of insert removal was significantly greater in the immediate oxytocin group than in the delayed oxytocin group (P = .01). Immediate versus delayed administration of oxytocin increased the likelihood of delivery within 24 hours of induction (90% vs 53%, respectively; P = .002). No cases of hyperstimulation syndrome occurred in the immediate group versus 3 cases in the delayed group (P = .24). Cesarean delivery rates were similar (16% vs 13% for the immediate and delayed groups, respectively; P = .73), and cesarean deliveries were needed only in nulliparous women. No women developed intrapartum chorioamnionitis, and 1 woman in each group developed postpartum endometritis. Rates of 5-minute Apgar scores below 7 were similar between groups (0% vs 6% for the immediate and delayed groups, respectively; P = .49).
Concurrent administration of oxytocin and sustained-release dinoprostone (prostaglandin) reduced the time from induction to delivery compared to oxytocin after removal of dinoprostone. This study found no increased risk of adverse events with concurrent administration. However, caution should be applied when using this concurrent therapy regimen until maternal and neonatal safety has been properly evaluated with larger studies.
Does uterine contraction frequency adequately predict preterm labor and delivery?
ABSTRACT
BACKGROUND: Ambulatory uterine activity monitoring in high-risk women continues despite results from randomized trials indicating that such monitoring does not reduce preterm delivery. The value of uterine contraction frequency as a predictor of preterm delivery, however, remains unclear.
POPULATION STUDIED: A total of 2205 women with a singleton gestation of longer than 22 weeks were screened; 454 met eligibility criteria. Data from 306 women were analyzed, including 254 high-risk women with either a history of preterm delivery (between 20 and 36 weeks' gestation) or bleeding in the second trimester of the current pregnancy, and 52 low-risk women. Exclusion criteria included previous or scheduled use of an ambulatory contraction monitor, use of tocolytic therapy, scheduled cerclage, placenta previa, major fetal anomalies, or no home phone. The mean age of participants was 26.2 years, with a mean parity of 1.8. The majority of participants were black (60%), with at least 12 years of education (74%). Many participants smoked (26%).
STUDY DESIGN AND VALIDITY: The authors used an observational study to determine whether the frequency of contractions could predict spontaneous preterm delivery at less than 35 weeks. Contractions were monitored for at least 30 minutes, twice a day (daytime and nighttime) on 2 or more days per week until 28 weeks, then 4 times per week. Two trained nurses, masked to risk status, analyzed monitor recordings. Contractions were defined as deflections from a clear baseline, with a rounded peak lasting 40 seconds to 120 seconds. Cervical examinations were performed every 2 to 3 weeks, beginning at 22 weeks, up to 6 times, depending on length of gestation. Data collected included cervicovaginal fluid for fetal fibronectin analysis, cervical length by transvaginal ultrasound, and assessment of Bishop score. Assessment of contraction recordings was validated by repeat audits during which samples were re-analyzed. Interpretation discordance occurred in 14% to 28% of recordings, but discrepancies were not greater than 1 contraction per hour.
OUTCOMES MEASURED: The primary outcome was the ability of uterine contraction frequency (daytime and nighttime) to predict spontaneous preterm delivery. In addition, fetal fibronectin, cervical length, and a Bishop score higher than 4 were studied as possible predictors at these same gestational ages.
RESULTS: There was no difference in the frequency of contractions between the high-risk and low-risk groups, so all data were pooled for analysis. The maximal frequency of contractions was inconsistently related to preterm delivery, with the largest association found for nighttime contractions at 27 to 28 weeks (odds ratio [OR] = 1.2; 95% CI, 1.1-1.4). Logistic regression revealed a consistent relationship between ultrasound cervical length and preterm delivery across all gestational age groupings, with statistically significant ORs ranging from 4.0 at 27 to 28 weeks to 7.5 at 31 to 33 weeks. The sensitivity of maximal daytime and nighttime contraction frequency was low, ranging from less than 10% at 22 to 24 weeks to 28% at 27 to 28 weeks and 31 to 33 weeks. The corresponding positive predictive values (PPVs) were also low, with none higher than 25%. Although the sensitivities for fetal fibronectin, ultrasound cervical length assessment, and Bishop scoring were generally somewhat higher (ranging from a low of 19% for fetal fibronectin at 22 to 24 weeks to a high of 82% for cervical length at 31 to 33 weeks), the corresponding PPVs remained low (range = 15% to 37%).
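For readers less familiar with these test characteristics, the short Python sketch below shows how sensitivity, positive predictive value, and negative predictive value fall out of a 2 × 2 table. The counts are invented placeholders, not the study's data; they merely illustrate how a test can have low sensitivity and low PPV yet a reassuring negative predictive value when the outcome is uncommon.

```python
# Sensitivity, PPV, and NPV from a 2x2 table. Counts are invented placeholders,
# not the study's data; they illustrate low sensitivity/PPV with a high NPV.

def test_characteristics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, ppv, npv

sens, ppv, npv = test_characteristics(tp=8, fp=30, fn=22, tn=240)
print(f"sensitivity {sens:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")
# -> sensitivity ~27%, PPV ~21%, NPV ~92%
```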
Uterine activity monitoring in asymptomatic high- and low-risk women is inadequate for predicting preterm birth. A recent systematic review of preterm labor management found home uterine activity monitoring by itself ineffective in preventing preterm birth.1 In the current study, contraction frequency monitoring had very poor sensitivity and a low PPV for spontaneous preterm delivery before 35 weeks' gestation. Other commonly used screening tests, such as fetal fibronectin, cervical length assessment, and Bishop scoring, also generally have poor sensitivities and PPVs. The usefulness of any of these tests lies in the reassurance provided by a negative result, as nearly all of them have negative predictive values greater than 90%. Understanding, preventing, and treating known causes appears to offer the best current approach to reducing prematurity and its sequelae.
Can aspirin prevent cardiovascular events in patients without known cardiovascular disease?
ABSTRACT
BACKGROUND: In patients with known cardiovascular disease, aspirin has well-established benefits, including improved outcomes for ischemic coronary heart disease (CHD), stroke, and all-cause mortality. Because patients without preexisting disease are at lower baseline risk, it is less clear whether aspirin is beneficial for primary prevention of cardiovascular disease.
POPULATION STUDIED: This meta-analysis reviewed studies that evaluated the role of aspirin in patients with no previous history of cardiovascular disease, including myocardial infarction (MI), stroke, angina, transient ischemic attack, and peripheral vascular disease. The authors excluded trials in which more than 10% of participants had diagnosed vascular disease. Of the approximately 50,000 patients included, most were middle-aged men.
STUDY DESIGN AND VALIDITY: This study was a meta-analysis of randomized controlled trials (RCTs) used as evidence by the US Preventive Services Task Force (USPSTF) in developing recommendations for the use of aspirin in the primary prevention of cardiovascular disease. The authors conducted a MEDLINE search for RCTs comparing aspirin with placebo (or simply no aspirin) in patients with no previous history of cardiovascular disease; these studies measured the outcomes of MI, stroke, and mortality. The authors included case-control studies and systematic reviews or meta-analyses in addition to RCTs to assess the harms of aspirin use (eg, rates of hemorrhagic stroke or gastrointestinal bleeding).
OUTCOMES MEASURED: The authors combined data from the RCTs for the following outcomes: total CHD events (defined as nonfatal MI or death due to CHD), stroke, and all-cause mortality. For assessing adverse effects of aspirin, the investigators extracted rates of hemorrhagic stroke and major gastrointestinal bleeding events.
RESULTS: Patients taking aspirin had a lower risk of a CHD event (odds ratio [OR] = 0.72; 95% CI, 0.60-0.87), which equates to a number needed to treat (NNT) of 195 patients to prevent 1 nonfatal MI or death due to CHD. For comparison, treatment of severe hypertension benefits 1 in 15 patients, whereas treatment of mild hypertension benefits 1 in 700 treated patients. In their subgroup analysis, the authors found that the effect of aspirin in preventing CHD events was smaller in women than in men and not statistically significant; they concluded that it remains unclear whether gender influences the effects of aspirin. Aspirin provided no significant benefit for prevention of stroke or all-cause mortality.
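The link between the reported odds ratio and the NNT of 195 depends on the absolute baseline risk, which this abstract does not give. The minimal Python sketch below assumes an illustrative control-group CHD event risk of about 2% over the trial period (an assumption, not a reported figure) to show the arithmetic.

    # Sketch of converting an odds ratio to an absolute risk reduction (ARR) and NNT.
    # The baseline risk is an illustrative assumption, not a figure from the study.
    odds_ratio = 0.72       # reported OR for a CHD event with aspirin
    baseline_risk = 0.019   # assumed control-group CHD event risk over the trial period

    baseline_odds = baseline_risk / (1 - baseline_risk)
    treated_odds = odds_ratio * baseline_odds
    treated_risk = treated_odds / (1 + treated_odds)

    arr = baseline_risk - treated_risk
    nnt = 1 / arr
    print(f"ARR = {arr:.4f}, NNT = {nnt:.0f}")
    # With a baseline risk near 2%, the NNT lands in the neighborhood of the
    # reported 195; at lower baseline risk the NNT grows, which is why the
    # absolute benefit of aspirin is smaller in low-risk populations.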
Discuss the potential risks and benefits of aspirin with your patients, especially those at increased risk for cardiovascular disease. This meta-analysis of RCTs, which included mostly middle-aged men, showed that aspirin can prevent a first heart attack in patients without known cardiovascular disease. The Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure gives a grade A recommendation for discussing aspirin with men older than 40 years, postmenopausal women, and patients with risk factors for CHD, such as hypertension, diabetes, or smoking.
Does fecal occult blood screening reduce colorectal cancer mortality?
ABSTRACT
BACKGROUND: This is 1 of 3 randomized trials undertaken to determine whether annual or biennial screening with a fecal occult blood test (FOBT) reduces mortality from colorectal cancer (CRC). In this study, the authors report on their 13-year experience with biennial FOBT screening and its effect on mortality from CRC. They also evaluated the possible influence of compliance with screening on mortality from CRC.
POPULATION STUDIED: In August 1985, 140,000 people aged 45 to 75 years were living in Funen, Denmark. On the basis of information obtained from public registers, inhabitants with a known history of CRC, colorectal adenomas, or any type of malignancy with distant spread were excluded from randomization. A balanced randomization was carried out in groups of 14 (3 to the screening group, 3 to the control group, and 8 not enrolled). Married couples were allocated to the same group. Subjects in the screening group were mailed invitations requesting participation; only those attending previous screening rounds were invited back for repeat screening. Subjects in the control group were not informed of their participation in the study. In total, 61,933 men and women were studied: 30,967 subjects were assigned to biennial screening with Hemoccult II, and 30,966 in the control group received usual care. Subjects were followed up until death or August 1, 1998.
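As a rough illustration of the allocation scheme described above (a sketch only, assuming simple random shuffling within each block and ignoring the rule that married couples share a group), the following Python snippet assigns each block of 14 residents in a 3:3:8 ratio.

    # Illustrative sketch of balanced block randomization in groups of 14:
    # 3 to screening, 3 to control, 8 not enrolled. The couple-linked allocation
    # used in the actual trial is omitted here for simplicity.
    import random

    def randomize_block(block_of_14, rng=random):
        assert len(block_of_14) == 14
        labels = ["screening"] * 3 + ["control"] * 3 + ["not enrolled"] * 8
        rng.shuffle(labels)
        return dict(zip(block_of_14, labels))

    # Example with hypothetical identifiers:
    print(randomize_block([f"resident_{i}" for i in range(14)]))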
STUDY DESIGN AND VALIDITY: This was a population-based, randomized controlled trial. Randomization of subjects in this trial was performed in a single-blinded fashion. Hemoccult II was used without rehydration but with dietary restrictions (no red meat, fresh fruit, iron preparations, vitamin C, aspirin, or other nonsteroidal anti-inflammatory drugs). Subjects were asked to provide 2 fecal samples from each of 3 consecutive stools. Subjects with a positive FOBT result (1 or more blue slides) were offered colonoscopy. It is not known how many in this group may have received screening for CRC as part of their usual care. Events (CRC, adenoma, death) in both groups were tracked using public databases and registers. The authors were unaware of the subjects’ screening status during assessment of death certificates.
OUTCOMES MEASURED: The primary outcome measured was death from CRC.
RESULTS: The risk of death from CRC was significantly reduced in the screening group compared with the control group (relative risk [RR] = 0.82; 95% confidence interval [CI], 0.69-0.97), even after adjusting for age, sex, and complications from treatment (RR = 0.86; 95% CI, 0.73-1.0). There was no difference in the rate of all-cause mortality between groups. In the screening group, the cumulative risk of having a positive test result was 5% over 13 years and 7 rounds of screening. Of those who tested positive, 94% went on to have at least 1 colonoscopy. There were 55 fewer deaths due to CRC in the screening group over 13 years in a population of 30,762 patients invited for screening. That is, screening saved 1 life for every 559 patients screened every other year for 13 years. Subjects who refused any screening had a significantly increased risk of death from CRC compared with those who participated in all screening rounds (RR = 1.65; 95% CI, 1.30-2.08).
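The "1 life saved for every 559 patients" figure follows directly from the counts reported above; a minimal Python check:

    # Number needed to screen, from the figures reported in the results.
    deaths_prevented = 55            # fewer CRC deaths in the screening group over 13 years
    invited_for_screening = 30762    # patients invited for biennial screening

    number_needed_to_screen = invited_for_screening / deaths_prevented
    print(f"number needed to screen (13 years, biennial) = {number_needed_to_screen:.0f}")  # ~559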
Use of the FOBT every other year for 13 years to screen patients aged 45 to 75 years will save 1 life for every 559 patients screened. Screening with FOBT does not alter the risk of death from all causes, which some physicians consider a less biased end point than cancer-specific mortality.1 This study, and others, suggests that individuals who refuse screening with FOBT may be at increased risk of dying from CRC. Special efforts should be made to ensure their participation in screening programs.2
Does intra-articular hyaluronate decrease symptoms of osteoarthritis of the knee?
ABSTRACT
BACKGROUND: Current therapies for osteoarthritis (OA) include long-term nonsteroidal anti-inflammatory drugs (NSAIDs) and joint replacement surgery, both of which carry significant morbidity and mortality. Hyaluronic acid (HA) is a component of joint fluid that acts as a shock absorber and lubricant, and its concentration declines with advancing age. “Viscosupplementation” with injected hyaluronate is an intriguing alternative to exclusive treatment with NSAIDs. This study evaluated the effectiveness of hyaluronate injections in decreasing symptoms associated with OA and improving function.
POPULATION STUDIED: The investigators recruited 120 subjects from an outpatient referral center. Included patients had radiographic evidence of medial compartment unilateral knee OA, grades 1 to 3. Allocation concealment was not mentioned, meaning the investigators may have known which therapy a patient was about to receive when enrolling them.
STUDY DESIGN AND VALIDITY: This study was a randomized, controlled, double-blind comparison of (1) HA, (2) an NSAID, (3) both, or (4) neither. Physicians, patients, and analysis staff were all blinded. Each patient received both 3 weekly intra-articular knee injections of either placebo or hyaluronate sodium and 12 weeks of twice-daily placebo or diclofenac 75 mg plus misoprostol 200 μg. The follow-up period lasted 12 weeks, with a 99.2% follow-up rate and a 9.2% dropout rate. Pain, stiffness, and disability were evaluated at baseline and at weeks 4 and 12 using the Western Ontario and McMaster Universities (WOMAC) Osteoarthritis Index, a visual analog scale measure of pain and function. Analysis was by intention to treat.
OUTCOMES MEASURED: The primary outcomes were patient-reported measures of pain, stiffness, and disability at baseline and weeks 4 and 12. Other outcomes were pain at rest and following walking and stepping activities.
RESULTS: The authors declared HA effective on the basis of changes within each group from baseline to the end of therapy. However, the accompanying editorial performed a more appropriate statistical analysis that evaluated the effect across all 4 groups and found no evidence that hyaluronate sodium was more effective than placebo in this trial.1
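To make the statistical point concrete, here is a toy simulation in Python (simulated numbers only, not trial data, and assuming numpy and scipy are available): when every arm improves by the same amount, change-from-baseline tests look "significant" in each arm, while a comparison of improvement across arms shows no treatment effect.

    # Toy simulation: within-group change vs across-group comparison.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 30
    groups = {}
    for arm in ["HA", "NSAID", "both", "neither"]:
        baseline = rng.normal(60, 10, n)                 # hypothetical pain scores
        week12 = baseline - 15 + rng.normal(0, 10, n)    # every arm improves ~15 points
        groups[arm] = (baseline, week12)

    for arm, (b, w) in groups.items():
        p_within = stats.ttest_rel(b, w).pvalue          # change-from-baseline test
        print(f"{arm}: within-group P = {p_within:.4f}") # very small in every arm

    changes = [w - b for (b, w) in groups.values()]
    p_across = stats.f_oneway(*changes).pvalue           # compares improvement across arms
    print(f"across-arms P = {p_across:.2f}")             # typically not significant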
Contrary to the assertions of the authors, careful evaluation of the results of this study reveals that HA injection is no better than placebo in the treatment of OA of the knee. Do not be misled when shown this study: the analysis was not carried out across all 4 groups, and when such an analysis was performed, no benefit was found.1 Previous studies have also failed to find a benefit of HA versus placebo. This is another good idea that does not work. For now, stick with acetaminophen and NSAIDs.
Do intranasal corticosteroids aid treatment of acute sinusitis in patients with a history of recurrent sinus symptoms?
ABSTRACT
BACKGROUND: The combination of antibiotics and intranasal corticosteroids for the treatment of chronic persistent sinusitis is a common clinical practice. Theoretically, nasally inhaled steroids should decrease mucosal inflammation and hasten recovery from acute sinusitis. Previous small studies showed a trend toward improvement with this regimen. This study measured the benefit of adding fluticasone to cefuroxime in patients with confirmed acute sinusitis and a documented history of chronic or recurrent sinusitis.
POPULATION STUDIED: Patients presenting with acute sinonasal symptoms and a history of previously diagnosed recurrent or chronic sinusitis requiring antibiotic treatment were enrolled from 22 sites (12 primary care and 10 otolaryngology clinics). Patients were aged 30 to 55 years; 68% were female and 88% were Caucasian. All patients were required to have evidence of sinus infection on either plain films (Waters view) or nasal endoscopy. Subjects were screened for major sinus symptoms with an instrument developed by the American Academy of Otolaryngology-Head and Neck Surgery. Exclusion criteria included previous sinus surgery, nasal polyposis, intranasal corticosteroid use within the previous 2 weeks, and prior antibiotic use within 7 days of enrollment in the study.
STUDY DESIGN AND VALIDITY: Ninety-five patients were randomly assigned in a double-blind fashion (concealed allocation assignment) to receive 2 puffs (200 μg/day) of fluticasone propionate (Flonase) or identical placebo nasal spray in each nostril once daily for 21 days. All patients also received 250 mg cefuroxime (Ceftin) twice daily for 10 days and 2 puffs of xylometazoline hydrochloride in each nostril twice daily for 3 days. Follow-up was complete in 93% of patients at 10, 21, and 56 days via telephone interview. Interviewers were blind to treatment group assignment.
OUTCOMES MEASURED: The primary outcome was the proportion of patients in each treatment arm who experienced clinical success at 10, 21, or 56 days. Clinical success was defined as a patient report of “cured” or “much improved.” Secondary outcomes included differences over time in the scores for sinusitis and general health quality of life as measured by the Sinonasal Outcome Test-20 (SNOT-20) and Short Form-12 (SF-12). All measures were taken during telephone interviews at 10, 21, and 56 days post enrollment.
RESULTS: Using intention-to-treat analysis, a higher proportion of patients in the fluticasone group achieved clinical success (93.5% vs 73.9%; P = .009; number needed to treat [NNT] = 6). No significant differences in treatment success rates were found between patients enrolled from otolaryngology vs primary care sites (P = .21). Patients in the fluticasone group also improved more rapidly (median of 6.0 days vs 9.5 days, P = .01). Differences in symptom scores between treatment groups were not significant, however, as measured by SNOT-20 (day 10, P = .8; day 21, P = .88; day 56, P = .54) and SF-12 (PCS-12, P = .39; MCS-12, P = .21). Reports of adverse effects were not significantly different between the groups (P = .07).
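The reported NNT of 6 can be reproduced from the success rates above; a minimal Python check (using the conventional round-up of NNTs):

    # NNT from the clinical success rates reported above.
    import math

    success_fluticasone = 0.935   # 93.5% clinical success with fluticasone
    success_placebo = 0.739       # 73.9% clinical success with placebo spray

    arr = success_fluticasone - success_placebo   # absolute benefit of adding fluticasone
    nnt = math.ceil(1 / arr)                      # NNTs are conventionally rounded up
    print(f"ARR = {arr:.3f}, NNT = {nnt}")        # ARR = 0.196, NNT = 6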
Intranasal corticosteroids increase patient-reported clinical success when used in addition to antibiotics for the treatment of acute sinusitis in patients with a history of recurrent sinusitis (NNT = 6). Although the primary outcome of patient-reported clinical success was improved in the treatment group, the symptom scores also reported by the patients were not significantly different between groups. The current study did not adequately define “recurrent,” but a previous study found a similar benefit of intranasal steroids plus antibiotics for patients reporting at least 2 sinus infections requiring antibiotic treatment per year for at least the previous 2 years.1 There is no evidence that steroids provide additional benefit to the treatment of simple acute sinusitis. In addition, children who are given intranasal steroids for upper respiratory infections are more likely to develop ear infections.2
Can a patient information sheet reduce antibiotic use in adult outpatients with acute bronchitis?
ABSTRACT
BACKGROUND: Inappropriate use of antibiotics for acute bronchitis can contribute to the growing incidence of bacterial resistance in the community. Although the majority of acute bronchitis cases are viral, patient expectations that antibiotics are required to treat this illness result in frequent prescribing of these drugs. This study investigated whether written patient education about the role of antibiotics in acute bronchitis could decrease antibiotic use.
POPULATION STUDIED: The researchers recruited 259 patients aged 16 years and older with acute bronchitis from 3 general practices in Nottingham, England. Patients were required to have acute cough and at least 1 other respiratory tract symptom. Patients with asthma, chronic obstructive pulmonary disease, heart disease, or diabetes were excluded. The median age was 44 years; 26% of patients were smokers; and 80% had a clear chest examination.
STUDY DESIGN AND VALIDITY: The patients’ individual physicians used their clinical judgment to divide the patients into 2 groups: those who definitely needed antibiotics and those who did not definitely need antibiotics. Patients in the first group did not participate in the study. Patients in the second group were randomized to receive either a blank sheet of paper or a patient information sheet explaining the natural history of acute bronchitis and discouraging the use of antibiotics (available at http://bmj.com/cgi/content/full/324/7329/91/F1). The physician, who was blinded to randomization, distributed the study sheet in a sealed envelope at the office visit; patients were asked to open the envelope after the visit.
OUTCOMES MEASURED: The primary endpoint in this study was whether the patient took the prescribed antibiotic. The secondary endpoint was the number of patients requiring a second office visit within a month for the same illness. Other patient-oriented outcomes such as patient satisfaction, number of sick days, and severity of illness were not directly measured, although the authors state that the rate of patient follow-up is a surrogate measure for these outcomes.
RESULTS: Of the 259 eligible patients, 212 entered the randomized trial. Forty-nine (47%) patients who received the information sheet took their antibiotics compared with 63 (62%) control patients (relative risk, 0.7; 95% CI, 0.59-0.97; P = .04). One additional patient did not take the antibiotic for every 7 patients given the information sheet (number needed to treat = 7). Amoxicillin was the prescribed antibiotic in 96% of both study groups. The number of patients scheduling a follow-up visit within 1 month was similar in both groups (11 patients who received the sheet versus 14 who did not).
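The "number needed to treat = 7" follows from the proportions above; a minimal Python check:

    # NNT for the information sheet, from the reported proportions.
    import math

    took_antibiotics_control = 0.62      # 62% of control patients took the antibiotic
    took_antibiotics_info_sheet = 0.47   # 47% of patients given the information sheet

    arr = took_antibiotics_control - took_antibiotics_info_sheet
    nnt = math.ceil(1 / arr)
    print(f"ARR = {arr:.2f}, NNT = {nnt}")   # ARR = 0.15, NNT = 7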
In this study, a written patient information sheet along with verbal counseling from the physician stopped 1 additional patient of 7 from filling an antibiotic prescription of questionable necessity. There was no change in other patient outcomes. This intervention can decrease the cost of therapy and, theoretically, may contribute to slowing the spread of antibiotic resistance in the community.