Which oral antihyperglycemics are most efficacious in reducing hemoglobin A1C in diabetic patients?
ABSTRACT
BACKGROUND: Many new oral medications have been developed to treat diabetes, but uncertainty remains regarding which are best for initial treatment and whether effectiveness rates differ. This review compares the available oral antihyperglycemics.
POPULATION STUDIED: A total of 63 randomized controlled clinical trials of oral hypoglycemic drugs for type 2 diabetes were identified by a MEDLINE search and a review of the bibliographies of the articles found initially. Other inclusion criteria were a study duration of at least 3 months, at least 10 subjects at the study’s conclusion, and reported hemoglobin A1C levels. Other search details, such as the years searched and the key words used, were not given. More than 15,000 subjects were enrolled in the identified trials, but no information was given regarding important clinical characteristics such as age, ethnicity, body mass index, or medical conditions other than diabetes. Assessing the generalizability of the data to the typical patients of family physicians is therefore difficult.
STUDY DESIGN AND VALIDITY: The article lists available randomized clinical trials that evaluate sulfonylureas, metformin, α-glucosidase inhibitors (AGIs), thiazolidinediones (TZDs), and nonsulfonylurea secretagogues as monotherapy versus placebo, in head-to-head trials or in combination, and compares their outcomes in terms of hemoglobin A1C reduction. When multiple doses of a drug were tested, the results from the highest dose were used. There was no attempt to synthesize the data provided by the studies into a meta-analysis.
OUTCOMES MEASURED: The major outcome measured was percent hemoglobin A1C reduction. Side effects were mentioned but not quantified. Cost, patient satisfaction, and quality of life were not addressed.
RESULTS: Except for the UKPDS, all available studies of oral hypoglycemics are short term and are limited in focus to hemoglobin A1C. Each class of drugs achieved a similar initial reduction in hemoglobin A1C of 1% to 2% except for the AGIs and nateglinide, which were less effective. The results are remarkably consistent across studies. Head-to-head comparison of specific medications further supports this conclusion. When taken in combination, the effects on hemoglobin A1C are additive.
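To make the additive claim concrete, consider a rough worked example (the baseline value and mid-range reductions are illustrative assumptions, not figures from the review): a patient starting at a hemoglobin A1C of 9.5% who takes 2 agents that each lower A1C by the mid-range 1.5 percentage points would be expected to reach

$$9.5\% - 1.5\% - 1.5\% = 6.5\%$$

assuming, as the combination trials suggest, that the effects combine additively.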
Despite the claims of pharmaceutical marketing, there is little difference among sulfonylureas, metformin, and thiazolidinediones in reduction of hemoglobin A1C. Each class achieves an average reduction of 1% to 2%. Alpha-glucosidase inhibitors and nonsulfonylurea secretagogues are probably somewhat less efficacious; combinations of medications seem to be additive.
Clinicians should keep in mind that diet and exercise remain first-line treatment for type 2 diabetes. Initial drug therapy should be guided, however, by evidence about long-term outcomes, such as reduction in the risk of myocardial infarction, renal failure, and blindness; to date, only metformin and sulfonylureas have been shown to be beneficial in reducing microvascular complications. Only metformin has been shown to reduce macrovascular complications and all-cause mortality in obese patients with type 2 diabetes. Interestingly, this benefit of metformin appears to be independent of its effect on blood glucose control. Thus, metformin should be the pharmaceutical agent of first choice in the treatment of type 2 diabetes.
Which is most effective for osteoarthritis of the knee: rofecoxib, celecoxib, or acetaminophen?
ABSTRACT
BACKGROUND: Traditional nonsteroidal antiinflammatory drugs (NSAIDs) and the newer cyclooxygenase-2 enzyme (COX-2) selective inhibitors are recommended as second-line agents in patients with osteoarthritis (OA) who fail to respond to acetaminophen. This study compared the effectiveness of rofecoxib (Vioxx), celecoxib (Celebrex), and acetaminophen (Tylenol) in patients with OA of the knee.
POPULATION STUDIED: This study included 382 patients from 29 US clinical centers with symptomatic OA of the knee for 6 months or longer. All patients had been treated with NSAIDs or acetaminophen for at least 30 days before enrollment, were 40 years of age or older, and retained moderate functional mobility of the knee (American College of Rheumatology functional class I, II, or III). Baseline criteria for OA severity were determined using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Investigator Global Assessment of Disease Status scoring. Patients were excluded if they had concurrent medical or arthritic disease or abnormal laboratory results that would have confounded the effectiveness evaluation or increased the risk of complications.
STUDY DESIGN AND VALIDITY: This research was a randomized double-blind controlled study. Allocation to treatment group (using computer-generated assignment) was concealed from enrolling investigators. After a 3-day to 7-day washout period, patients were randomized to receive 12.5 mg rofecoxib once daily, 25 mg rofecoxib once daily, 200 mg celecoxib once daily, or 1000 mg acetaminophen 4 times daily for 6 weeks. Exact matching placebos were used to maintain double-blind conditions. Response was evaluated using intent-to-treat analyses. Early effectiveness was assessed within the first 6 days using the WOMAC Index and Patient’s Global Assessment of Response to Therapy (PGART) questionnaires. Later clinical effectiveness was evaluated with the WOMAC and PGART during office visits at weeks 2, 4, and 6.
OUTCOMES MEASURED: The primary outcomes measured were pain on walking, night pain, pain at rest, and morning stiffness (WOMAC Index) and global responses to therapy (PGART).
RESULTS: Seventy-nine percent of patients completed the 6-week follow-up. More patients treated with acetaminophen than with either rofecoxib or celecoxib discontinued early because of lack of effectiveness (17% vs 8% to 9%; composite number needed to treat for 1 withdrawal because of lack of efficacy = 8). Compared with celecoxib or acetaminophen, 25 mg rofecoxib once daily provided significantly greater reductions over 6 weeks in rest and night pain and significantly better scores on the composite pain and stiffness scales of the WOMAC. Physical function scale results were significantly better with 25 mg rofecoxib once daily than with acetaminophen but did not differ from those with celecoxib. PGART response at 6 weeks also showed the best response with 25 mg rofecoxib once daily, and early response results mirrored these later findings.
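As a rough check on these withdrawal figures, the number needed to treat is the reciprocal of the absolute risk reduction. Using the rounded rates above (17% with acetaminophen vs roughly 8.5% with the coxibs),

$$\text{NNT} = \frac{1}{\text{ARR}} \approx \frac{1}{0.17 - 0.085} \approx 12$$

The published composite figure of 8 was presumably derived from the exact patient counts rather than from these rounded percentages. Note also that 17% is roughly 1 in 6, consistent with the commentary below.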
In this study, 25 mg rofecoxib once daily was more effective than either celecoxib or acetaminophen in relieving persistent pain and stiffness from knee OA. However, only 1 of 6 patients taking acetaminophen, which is inexpensive and safe, discontinued treatment for lack of efficacy. Therefore, using acetaminophen as first-line therapy is reasonable. Less expensive traditional NSAIDs (eg, ibuprofen or naproxen) have been shown to have similar effectiveness as compared with either rofecoxib or celecoxib in OA. For patients at low risk for serious NSAID-associated gastrointestinal complications, traditional NSAIDs should be the next agents of choice. For patients at high risk, COX-2 selective inhibitors are reasonable second-line agents, since they pose a lower risk of NSAID-associated gastrointestinal complications with long-term use.
What is the relative cardiovascular benefit of lowering cholesterol, blood pressure, and glucose levels in patients with type 2 diabetes?
ABSTRACT
BACKGROUND: Type 2 diabetes is increasingly recognized as a powerful risk factor for coronary artery disease (CAD) events. In its recommendations for treating cholesterol levels, the Third Adult Treatment Panel of the National Cholesterol Education Program (NCEP) considers diabetes mellitus the equivalent of preexisting CAD.1 The United Kingdom Prospective Diabetes Study (UKPDS) showed that blood pressure control had a greater overall effect on diabetes-related morbidity and mortality than did intensive glucose control.2 The study under consideration examines data from the major trials of cardiovascular risk reduction to determine the relative benefit of controlling blood pressure and cholesterol and glucose levels in patients with type 2 diabetes.
POPULATION STUDIED: Adult patients with diabetes who participated in a variety of studies looking at reduction of risk factors for CAD.
STUDY DESIGN AND VALIDITY: This meta-analysis combined data from previous studies of intensive coronary risk factor reduction in patients with diabetes. The authors searched MEDLINE from 1966 to 2001 for articles published on the topic in English. Studies were included if they were randomized controlled trials of adults that included some patients with diabetes, compared intensive risk factor reduction with drug therapy versus either placebo or routine care, had at least 1 year of follow-up, and reported the requisite cardiovascular outcomes. The studies were independently reviewed by 2 authors for inclusion in the analysis based on these inclusion criteria; disagreement was resolved by consensus. There was no explicit validity assessment of the articles. Data were abstracted in a structured manner. The results were analyzed for heterogeneity and pooled appropriately.
OUTCOMES MEASURED: The outcomes measured included “aggregate cardiac events” (CAD death and nonfatal myocardial infarction [MI]), cardiovascular mortality, MI, and stroke. The results are presented as changes in event rates per person-year and as person-years needed to treat. This was done to account for the variable lengths of patient follow-up in these large trials; these findings can be interpreted similarly to standard event rates and numbers needed to treat (NNT). One caveat is that reporting an outcome for cholesterol lowering and blood pressure control across a span of only 1 person-year is artificial, given that most of the benefit of these therapies takes several years to manifest.
RESULTS: Cholesterol lowering (a total of 5 studies of both primary and secondary prevention) reduced aggregate cardiac events (30 vs 41 events per 1000 person-years, NNT for 1 year 106, 95% confidence interval [CI] 62-366). Cholesterol lowering as secondary prevention contributed most to this result (3 trials, 34 vs 44 events per 1000 person-years, NNT for 1 year 120, 95% CI, 61-4856); the results of primary prevention through cholesterol lowering did not reach statistical significance. Blood pressure reduction also reduced aggregate cardiac events (17 vs 23 per 1000 person-years, NNT for 1 year 157, 95% CI, 88-726). Two trials of blood glucose reduction as primary prevention failed to show a significant difference in aggregate cardiac events. The individual cardiac outcomes (cardiovascular mortality and MI each alone) showed results consistent with the aggregate outcomes.
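The person-year NNT figures can be roughly reproduced from the event rates above: with rates expressed per 1000 person-years, the NNT for 1 year is 1000 divided by the absolute rate reduction. For blood pressure reduction,

$$\text{NNT}_{1\,\text{yr}} = \frac{1000}{23 - 17} \approx 167$$

which is close to the published 157; the cholesterol figures give $1000/(41-30) \approx 91$ against a published 106. The published values were presumably computed from unrounded trial data.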
This study reinforces the conclusions of the UKPDS study and the recommendations of the NCEP guidelines that aggressive management of cholesterol and blood pressure in patients with diabetes is essential in preventing CAD. Intensive control of blood sugar levels does not seem to alter CAD events or mortality.
What is the best diet to prevent recurrent calcium oxalate stones in patients with idiopathic hypercalciuria?
ABSTRACT
BACKGROUND: About 10% of people in the United States develop at least 1 symptomatic kidney stone during their lives. The recurrence rate after 10 years is at least 50%. Many physicians recommend a low-calcium diet in patients with calcium oxalate stones to prevent recurrence. Recent studies suggest that a low-calcium diet may not be effective and that intake of animal protein and salt may influence renal calcium excretion. This study compares the traditional low-calcium diet with a diet that is low in animal protein and salt.
POPULATION STUDIED: This study enrolled 120 men with idiopathic hypercalciuria (urinary calcium excretion of more than 300 mg per day on an unrestricted diet) who had been referred to a nephrology clinic in Parma, Italy, and who had had at least 2 episodes of symptomatic renal stones. Reasons for exclusion included previous visits to any “stone disease center” and conditions associated with calcium stones, such as hyperparathyroidism or inflammatory bowel disease.
STUDY DESIGN AND VALIDITY: The investigators randomly assigned subjects, using concealed allocation, to 1 of 2 diets in this randomized controlled study. The low-calcium diet limited calcium intake to about 400 mg per day. The other diet, which included about 1200 mg per day of calcium, limited sodium chloride to about 3000 mg and animal protein to 93 g (15% of total calories). Both groups were advised to limit intake of high-oxalate foods and encouraged to drink 2 liters of water per day in cold weather and 3 liters in warm weather. Subjects were allowed moderate consumption of beer, wine, coffee, and sodas. (Detailed dietary instructions are available to New England Journal of Medicine subscribers in the supplement to the publication at www.nejm.org.) The study followed the patients for 5 years or until they developed clinical or radiologic evidence of a renal stone. Annual x-ray and ultrasound studies identified asymptomatic stone recurrences.
OUTCOMES MEASURED: The primary outcome was the time to development of the first recurrence of a renal stone, whether or not it was clinically evident. Other outcomes included changes in calcium and oxalate excretion and calcium oxalate saturation in the urine.
RESULTS: After 5 years, the low-protein, low-sodium diet led to fewer recurrences (20% compared with 38% in the low-calcium group, relative risk 0.49, number needed to treat with diet for 5 years = 5.5). The risk of recurrence in the low-calcium group was similar to the 35% to 40% expected in the absence of any intervention. The disease-oriented changes in urine characteristics were predictable: urinary calcium decreased in both groups, but oxalate secretion increased in the low-calcium group, causing greater calcium oxalate saturation.
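These effect sizes can be checked against the crude 5-year recurrence proportions (a back-of-the-envelope calculation, not the authors’ survival analysis):

$$\text{NNT} = \frac{1}{0.38 - 0.20} \approx 5.6, \qquad \text{RR} \approx \frac{0.20}{0.38} \approx 0.53$$

The NNT matches the published 5.5; the published relative risk of 0.49 is slightly lower, presumably because it derives from the time-to-event analysis rather than from these crude proportions.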
A low-protein, low-sodium, high-calcium diet reduces the risk of recurrent renal stones in men with idiopathic hypercalciuria. This diet seems fairly palatable; compliance in the study was generally good. The traditionally recommended low-calcium diet does not appear to prevent further renal stones.
Is splinting of distal radius torus fractures an acceptable alternative to casting?
ABSTRACT
BACKGROUND: Torus fractures of the distal radius are common; recommendations for management are diverse. The investigators conducted a survey of orthopedic surgeons to determine typical management of these fractures. The authors also conducted a randomized trial to compare treatment with either plaster casting or immobilization splinting.
POPULATION STUDIED: First, the investigators surveyed 104 pediatric orthopedic surgeons in Great Britain. Second, they conducted a randomized prospective study of 201 children aged 2 to 15 years with distal radius torus fractures. A total of 22 patients were lost to follow-up, 4 in the cast group and 18 in the splint group, leaving 179 in the study.
STUDY DESIGN AND VALIDITY: Three studies were included in this article: a postal survey, a randomized trial of casting versus splinting, and a cost analysis. The postal questionnaire was sent to 104 pediatric orthopedic surgeons and determined the incidence of torus fractures and each practitioner’s typical method of treatment. Clinic versus emergency department (ED) evaluation was considered, as was the prevalence of subsequent visits with and without additional radiologic studies. Only 65 (62.5%) of the questionnaires were returned and analyzed.
OUTCOMES MEASURED: The postal questionnaire measured incidence and treatment approach for torus fractures of the distal radius. The prospective randomized trial measured clinical and radiographic outcomes for plaster casting versus splinting treatment. Additionally, compliance with treatment assignment was assessed. Cost-benefit analysis compared the total costs of plaster casting versus splinting.
RESULTS: The questionnaire revealed that each orthopedist treated a mean of 5.1 (SD, 4.8) torus fractures each week. For treatment that occurred in the ED, 64 physicians used some form of casting and 1 used a splint. When treatment took place in the office, however, 60 (92.3%) physicians used some form of casting and 5 (7.7%) used wrist splints. The fractures were immobilized for a mean of 2.9 weeks (SD, 0.64; range, 1 to 4). Eleven (16.9%) consultants routinely x-rayed the site at the end of treatment.
This study showed no difference in clinical outcome between casting and splinting for torus fractures of the distal radius. Splinting also appears to produce some cost savings, since it obviates a follow-up visit for cast removal. After reading this study, we agree that Futura splinting of distal radial torus fractures for 3 weeks appears to be a reasonable alternative to casting. The absence of complications in both groups suggests that a follow-up visit and confirmatory radiologic imaging may not be necessary.
Are paroxetine, fluoxetine, and sertraline equally effective for depression?
ABSTRACT
BACKGROUND: Although selective serotonin reuptake inhibitors (SSRIs) are the most commonly prescribed antidepressants, data comparing the effectiveness of the members of this class of antidepressants are limited. This study compared the effectiveness of 3 SSRIs in a naturalistic study designed to mimic typical primary care prescribing.
POPULATION STUDIED: Adult outpatients from 2 primary care research networks were eligible for the study if their primary care doctor had diagnosed a depressive disorder requiring medication. Patients were excluded if they were cognitively impaired, terminally ill, or suicidal; lived in a nursing home; were currently taking a non-SSRI antidepressant; or had recently taken an SSRI antidepressant. Data were analyzed from 546 patients (79% of those invited to participate), who were randomized and completed at least 1 follow-up interview.
STUDY DESIGN AND VALIDITY: This was a randomized, controlled, unblinded trial designed to reflect actual primary care practice. After being diagnosed by their primary care physician (PCP) with clinical depression, with the PCP using his or her usual methods to make the diagnosis, patients were randomized through a concealed allocation procedure to receive daily doses of 20 mg paroxetine (Paxil), 20 mg fluoxetine (Prozac), or 50 mg sertraline (Zoloft). Both the patients and doctors were aware of the medication assignment. The PCP could adjust the dose to clinical response or change patients to a different medication. By the end of the study, less than half of the patients were taking the medication they had originally started.
OUTCOMES MEASURED: The primary outcome was change in the Mental Component Score (MCS) of the Medical Outcomes Study 36-Item Short Form Health Survey (SF-36). The scoring of the MCS incorporates elements of the 8 subscales of the SF-36 and ranges from 0 to 100, with higher scores representing better mental health. Several other measures of depression and social functioning provided secondary outcomes.
RESULTS: Forty-one percent to 50% of participants stopped their initially assigned medication during the 9-month follow-up period. About 20% of participants switched to another antidepressant. Roughly 25% stopped taking antidepressants altogether before completion of the follow-up period.
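These three figures are mutually consistent: patients who stopped their initially assigned medication either switched to another antidepressant or stopped antidepressants altogether, so the two subgroups should sum to roughly the overall discontinuation rate:

$$20\% + 25\% = 45\%$$

which falls within the reported 41% to 50% range.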
This well-designed study of SSRI treatment for clinical depression in primary care settings found that paroxetine (Paxil), fluoxetine (Prozac), and sertraline (Zoloft) were equally effective for the treatment of depression. Additionally, since the rates of adherence and of adverse effects were similar among the 3 study medications, physicians should feel equally confident prescribing any of these SSRIs. Using the lowest-cost SSRI (fluoxetine just became available generically) is an ethical and reasonable approach.
Which is more effective for as-needed treatment of seasonal allergy symptoms: intranasal corticosteroids or oral antihistamines?
ABSTRACT
BACKGROUND: Symptoms resulting from early response to allergen exposure are histamine mediated, last a few minutes, and often cue patients to take medication. Hours later, the late response begins and typically leads to symptoms of congestion. The late-phase response is not histamine mediated; other studies have shown intranasal corticosteroids to inhibit the response. The researchers tested the hypothesis that intranasal steroids may be as beneficial as or superior to antihistamines for as-needed use because of their effect on the late response to environmental allergens.
POPULATION STUDIED: The 88 subjects, aged 18 to 48 years, had fall seasonal rhinitis for at least 2 ragweed seasons before enrollment and had a positive puncture skin test to ragweed antigen extract. The population was 52% male and 60% white, and subjects were in generally good health. Patients were excluded for nasal polyps, deviated septum, perennial rhinitis, and signs or symptoms of renal, hepatic, or cardiovascular disease. Patients were also excluded if they had received immunotherapy within 2 years before enrollment or had taken topical or systemic steroids, antihistamines, decongestants, or cromolyn sodium within 2 weeks before enrollment.
STUDY DESIGN AND VALIDITY: This was a randomized, unblinded study. Patients were enrolled before or during the early part of the ragweed season and randomized to receive fluticasone propionate 100 μg per nostril daily or loratadine 10 mg once daily, each taken as needed for 4 weeks. Nasal lavage for eosinophil count and eosinophil cationic protein (ECP) and completion of the Rhinoconjunctivitis Quality of Life Questionnaire (RQLQ, a validated instrument) were performed initially, at 2 weeks, and at 4 weeks. Patients were instructed to record medication use and symptom severity in a diary twice daily. Itchy eyes and 3 symptoms for each nostril (rhinorrhea, nasal congestion, and sneezing) were rated on a scale from 0 (no symptoms) to 3 (severe symptoms).
OUTCOMES MEASURED: The RQLQ score was the primary outcome. The symptom diary scores were evaluated by symptom; a total symptom score was calculated. Other outcomes included nasal lavage eosinophil count and ECP levels.
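The diary's total symptom score therefore ranges from 0 to 21: 7 rated items (itchy eyes plus rhinorrhea, congestion, and sneezing for each nostril), each scored 0 to 3. A minimal sketch of that arithmetic; the function name and ratings below are illustrative, not taken from the study:

```python
def total_symptom_score(itchy_eyes, left_nostril, right_nostril):
    """Sum the 7 diary items, each rated 0 (none) to 3 (severe).

    left_nostril and right_nostril are (rhinorrhea, congestion, sneezing)
    triples, so the total ranges from 0 to 21.
    """
    items = [itchy_eyes, *left_nostril, *right_nostril]
    assert len(items) == 7 and all(0 <= s <= 3 for s in items)
    return sum(items)

# A hypothetical moderately symptomatic day:
print(total_symptom_score(1, (2, 1, 1), (1, 1, 0)))  # 7, matching the loratadine group's median
```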
RESULTS: Patients used medication an average of 17 of 28 days in the fluticasone group, similar to the average of 18 of 28 days in the loratadine group. The RQLQ scores were similar in the 2 groups initially. Significant improvement in the fluticasone group over the loratadine group was seen at the second and third visits in the overall score and activity, sleep, practical, and nasal domains of the RQLQ (P < .05). Symptom diaries showed a median score of 7.0 out of 21 for the loratadine-treated group and 4.0 out of 21 for the steroid-treated group (P = .005). Eosinophil count and ECP showed significant decreases in the steroid group.
This study shows that for as-needed treatment of allergic rhinitis, fluticasone propionate appears to be superior to loratadine in both subjective and objective measurements. A double-blind design would have strengthened our confidence in these results. Regular use of intranasal steroids has also been demonstrated to provide better symptom control than antihistamines do. The clinician may consider prescribing as-needed antihistamines or intranasal steroids for first-line treatment of allergic rhinitis.
In children hospitalized for asthma exacerbations, does adding ipratropium bromide to albuterol and corticosteroids improve outcome?
ABSTRACT
BACKGROUND: Adding 2 to 3 doses of ipratropium bromide (Atrovent) to conventional therapy with inhaled β-agonists and systemic corticosteroids improves lung function and decreases hospital admissions when given in the emergency department (ED). This study evaluated whether ipratropium bromide administration improves outcomes in children who require subsequent hospitalization.
POPULATION STUDIED: The authors enrolled 80 children aged 1 to 18 years with a history of asthma admitted to the pediatric inpatient unit of a tertiary-care urban hospital. Children had to have moderate to severe symptoms upon admission, defined as requiring inhaled β2-agonists at least every 2 hours, having a forced expiratory volume in 1 second (FEV1) of 25% to 80% of predicted, or having a clinical asthma score of 3 to 9 out of a possible 10. The clinical asthma score is the sum of 5 items (respiratory rate, wheezing, inspiratory-expiratory ratio, retractions, and observed dyspnea), each scored on a 3-point scale. Excluded patients had coexisting cardiac, neurologic, immunosuppressive, or other chronic pulmonary disease; hypersensitivity to the study drugs; or known ocular abnormalities. Children were also excluded if their asthma score was 10, if they needed airway intervention, or if more than 12 hours had elapsed between the first nebulizer treatment and admission.
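Because the 5 items sum to a maximum of 10, each item on the 3-point scale must run from 0 to 2. A minimal sketch of the scoring, with illustrative item ratings:

```python
ITEMS = ("respiratory rate", "wheezing", "inspiratory-expiratory ratio",
         "retractions", "observed dyspnea")

def clinical_asthma_score(ratings):
    """Sum 5 items, each rated 0 to 2, for a total of 0 to 10."""
    assert len(ratings) == len(ITEMS) and all(0 <= r <= 2 for r in ratings)
    return sum(ratings)

print(clinical_asthma_score([2, 1, 1, 1, 1]))  # 6: within the 3-9 inclusion range
```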
STUDY DESIGN AND VALIDITY: This was a double-blind randomized controlled trial. Study patients received frequent nebulized albuterol at 0.15 mg/kg as well as either IV hydrocortisone at 4 to 6 mg/kg every 6 hours or oral prednisone 1 mg/kg once daily. Attending physicians determined the frequency of nebulizer treatments, which ranged from every 30 minutes to every 4 hours. Subjects were randomized to receive either ipratropium bromide or normal saline, matched to the albuterol dosing interval. Participants were stratified by age (less than 5 years vs 5 years or more) and by the number of ipratropium bromide doses they had received in the ED (3 or fewer vs more than 3). Investigators used an intention-to-treat analysis, and allocation was concealed.
OUTCOMES MEASURED: The primary outcome was the clinical asthma score, measured at baseline and every 6 hours until discharge. The clinical score is reproducible, valid, and predictive. Secondary outcomes included oxygen saturation, FEV1, length of stay, time to a 4-hour albuterol dosing interval, and readmission to the hospital or ED within 72 hours of discharge.
RESULTS: Of the 212 patients assessed for the trial, only 99 were eligible. Of these, the parents of 84 consented to enrollment (4 children were later determined not to meet inclusion criteria and were excluded). The ipratropium and placebo groups were similar at baseline. There was no difference in the asthma score between treatment and control groups in 3 of the 4 subgroups. In one subgroup, children who had received 3 or fewer doses of ipratropium bromide in the ED, ipratropium provided a slight benefit: the difference in change in scores was 0.5 on the clinical asthma score, a statistically significant but not clinically important change. There were no differences in the secondary outcomes. The average heart rate was 6 to 10 beats per minute higher in the ipratropium group. The authors observed no transient anisocoria, a potential adverse effect of ipratropium bromide in children.
When given with β-agonists and corticosteroids in the ED, ipratropium bromide reduces admissions and asthma symptoms in children with moderate to severe asthma exacerbations. It provides no further benefit, however, for children who require hospitalization after receiving the drug in the ED; adding ipratropium bromide to standard in-hospital care is therefore not beneficial.
Are SSRIs and TCAs equally effective for the treatment of panic disorder?
ABSTRACT
BACKGROUND: Selective serotonin reuptake inhibitors (SSRIs) are commonly used as first-line treatment for panic disorder, yet trials comparing them head to head with older antidepressants, specifically the tricyclic antidepressants (TCAs), are lacking. The authors used data gathered from published efficacy trials to compare the efficacy, safety, and tolerability of SSRIs and TCAs in the treatment of panic disorder.
POPULATION STUDIED: This meta-analysis included double-blind, placebo-controlled efficacy trials of SSRIs for panic disorder in patients with or without agoraphobia. The trials that met these established criteria were published from 1990 to 1998 and included 1741 patients (mean sample size: 145 patients). A comparison between the study populations could not be made since the trials did not contain patient demographic information for the SSRI and non-SSRI groups. This analysis excluded uncontrolled trials, case reports, and long-term or follow-up trials.
STUDY DESIGN AND VALIDITY: The authors used MEDLINE, PsycLIT, discussions with colleagues, and the reference sections of related articles to identify double-blind, placebo-controlled efficacy trials of SSRIs for panic disorder. They conducted an effect-size analysis of the 12 trials identified and compared the findings with the results of a recently published meta-analysis of non-SSRI treatments for panic disorder. In the fixed-dose trials, only the effective doses of SSRIs were used in the calculation of effect sizes.
OUTCOMES MEASURED: The main outcome was the effect size for the SSRI and TCA groups. The authors calculated the effect size by subtracting the mean posttreatment score of the comparison group from that of the active treatment group and then dividing by the standard deviation of the posttreatment comparison group. Tolerability was assessed using the dropout rate for each study group.
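The computation described here is essentially Glass's delta. A minimal sketch, with hypothetical posttreatment means on a scale where higher is better, so a positive value favors active treatment:

```python
def effect_size(mean_active, mean_comparison, sd_comparison):
    """Glass's delta: active-group mean minus comparison-group mean,
    divided by the comparison group's standard deviation."""
    return (mean_active - mean_comparison) / sd_comparison

# Hypothetical posttreatment scores (higher = better):
print(effect_size(18.0, 12.0, 11.0))  # about 0.55, the magnitude reported below
```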
RESULTS: The mean effect size for acute treatment outcome in the SSRI group compared with placebo was 0.55, not significantly different from that of the non-SSRI group (0.55) or, more specifically, the imipramine group (0.48). The older but smaller SSRI trials were associated with larger treatment effect sizes, whereas the larger, more recently published SSRI trials showed a smaller benefit. In addition, a funnel plot analysis showed that smaller studies with a lower effect size were missing (publication bias against "negative" studies). The dropout rate among groups treated with SSRIs (24.6%), weighted to give larger trials a greater contribution, was not significantly different from that of the other antidepressants (25.4%), specifically imipramine (22.4%). Using dropout rates as the only measure of tolerability may not be optimal: not every patient who experienced adverse effects dropped out of the study, and patients may have dropped out for reasons other than poor tolerability.
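The "weighted" rates above are sample-size-weighted means, which give larger trials more influence on the pooled estimate. A minimal sketch, with invented per-trial dropout rates and enrollments:

```python
def pooled_rate(rates, sizes):
    """Sample-size-weighted mean of per-trial rates."""
    return sum(r * n for r, n in zip(rates, sizes)) / sum(sizes)

# Invented trial data for illustration only:
print(pooled_rate([0.30, 0.22, 0.25], [60, 200, 145]))  # ~0.24; the 200-patient trial dominates
```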
This study fails to support the hypothesis that SSRIs are more efficacious and better tolerated than older antidepressants in the treatment of panic disorder. The results also contradict the popular belief that SSRIs are generally better tolerated than TCAs. TCAs can provide patients with an effective, well-tolerated, less costly treatment for panic disorder. A similar conclusion was reached in a comparison of TCAs and SSRIs in the treatment of depression.1
Should antioxidants be added to simvastatin and niacin for patients with coronary disease?
ABSTRACT
BACKGROUND: Antioxidant vitamins are commonly used in patients with coronary disease, but benefits have not been demonstrated. This randomized controlled trial studied whether addition of antioxidants to a simvastatin–niacin regimen improved outcomes.
POPULATION STUDIED: The investigators enrolled 160 patients with known coronary disease from the Seattle area and Canada. Subjects were included if they had clinical coronary disease (previous myocardial infarction [MI], coronary interventions, or confirmed angina); 3 or more coronary arteries with more than 30% stenosis or 1 stenosis more than 50%; high-density lipoprotein (HDL) cholesterol levels less than 35 mg/dL in men or 40 mg/dL in women; low-density lipoprotein (LDL) cholesterol levels less than 145 mg/dL; and triglyceride levels less than 400 mg/dL.
STUDY DESIGN AND VALIDITY: This was a double-blind, placebo-controlled trial. Patients were randomly assigned to 1 of 4 regimens: simvastatin–niacin, antioxidant vitamins, simvastatin–niacin plus antioxidants, or placebo. Patients receiving simvastatin had the dose titrated to a goal LDL level of 40 to 90 mg/dL (mean final dose 13 mg/day). In patients receiving niacin, the dose was titrated over 1 month to at least 1000 mg twice daily (mean final dose 2.4 g/day). Niacin 50 mg twice daily served as the placebo to produce a flushing effect and thus keep patients blinded. Antioxidants were given twice daily, for a total daily dose of 800 IU vitamin E, 1000 mg vitamin C, 25 mg natural beta carotene, and 100 μg selenium. Coronary angiography was performed at baseline and at study completion, and the comparison of films was blinded. Patients were followed for 3 years. Analysis was by intention to treat, with Cox proportional hazards models used to control for confounding.
OUTCOMES MEASURED: The primary clinical endpoint was the occurrence of a cardiovascular event: revascularization, nonfatal MI, or death from coronary causes. The angiographic primary endpoint was the change in stenosis of the most severe lesion in the 9 proximal coronary segments. Cost, quality of life, and patient satisfaction were not addressed.
RESULTS: The groups were similar at baseline, with the exception that diabetic patients were more prevalent in the group receiving simvastatin–niacin plus antioxidants and less prevalent in the simvastatin–niacin alone group (P = .04). Patients receiving simvastatin–niacin had significantly fewer cardiovascular events than those given placebo (2.6% vs 21%; P = .003; number needed to treat = 4.7). Addition of antioxidants actually blunted this effect: when antioxidant therapy was added to lipid-lowering therapy, the rate of clinical events increased to that observed with placebo. There was also no difference between patients receiving antioxidants alone and those receiving placebo. These clinical results were mirrored by the angiographic data: patients receiving simvastatin and niacin experienced a reduction in average coronary stenosis (P < .001), whereas all other groups showed an increase in stenosis (P < .005).
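The number needed to treat is the reciprocal of the absolute risk reduction. A quick check using the rounded event rates reported above (the paper's 4.7 was presumably computed from unrounded rates):

```python
def number_needed_to_treat(control_rate, treatment_rate):
    """NNT = 1 / absolute risk reduction."""
    return 1 / (control_rate - treatment_rate)

print(number_needed_to_treat(0.21, 0.026))  # ~5.4 with these rounded rates, vs the reported 4.7
```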
This well-designed study provides strong evidence that antioxidants should not be used in patients with preexisting coronary disease, either alone or in addition to simvastatin and niacin. The combination of a statin and niacin reduced adverse cardiac events dramatically in this population with low LDL cholesterol levels. Clinicians should keep in mind that these results may not be generalizable directly to women, people of color, or patients without coronary disease.