Acarbose delays onset of type 2 diabetes mellitus
ABSTRACT
BACKGROUND: Patients who develop type 2 diabetes initially pass through a state of impaired glucose tolerance. Therapies that reduce resistance to insulin or protect β cells could prevent or delay the progression to diabetes.
POPULATION STUDIED: This multinational study was conducted in Canada, Israel, and Western Europe. Investigators recruited high-risk patients through newspaper advertising. They screened 14,742 individuals with a body mass index (BMI) between 25 and 40 kg/m² (mean 31.0 kg/m²) with a 2-hour glucose tolerance test. Eligible subjects had impaired glucose tolerance, defined as a 2-hour plasma glucose concentration of at least 140 mg/dL (7.8 mmol/L) and less than 200 mg/dL (11.1 mmol/L). Investigators excluded subjects who had a serum creatinine concentration greater than 1.5 mg/dL or who had taken thiazide diuretics, β-blockers, or nicotinic acid within the past 3 months.1 Ninety-seven percent of the 1429 randomized patients were white and 48% were men. The average age was 54.3 years.
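For reference, the glucose thresholds convert between units by dividing mg/dL by approximately 18 (glucose has a molar mass of about 180 g/mol, and there are 10 dL in 1 L): 140 / 18 ≈ 7.8 mmol/L and 200 / 18 ≈ 11.1 mmol/L.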
STUDY DESIGN AND VALIDITY: This was a randomized, double-blind, placebo-controlled trial. Randomization was done at each center in a sequential manner in blocks of 4 and 6 patients, using a centrally generated random allocation sequence and numbered drug containers. Allocation was appropriately concealed. Treatment groups were comparable at baseline. To minimize gastrointestinal side effects, patients randomized to acarbose were started at 50 mg/day and gradually increased to a maximum of 100 mg 3 times a day with meals or to the maximum tolerated dose. The mean daily dose was 197 mg. All patients met with a dietitian before randomization and then yearly, were instructed in a weight reduction or maintenance program, and were encouraged to exercise. Patients saw a nurse every 3 months for a pill count and fasting plasma glucose measurement. Patients with abnormal fasting plasma glucose levels had a 2-hour oral glucose tolerance test, and all patients had a yearly glucose tolerance test. Patients were followed for a mean of 3.3 years. Ninety-six percent of patients were accounted for at the end of the trial. At the end of the trial, all patients who had not been diagnosed with diabetes were switched to placebo and followed for an additional 3 months. An intention-to-treat analysis was performed using appropriate statistical methods.
OUTCOMES MEASURED: The primary outcome measured was time to development of type 2 diabetes, defined by a plasma glucose concentration of >200 mg/dL (11.1 mmol/L) after a 2-hour glucose tolerance test.
RESULTS: Patients treated with acarbose were less likely to develop type 2 diabetes after 3.3 years (17% vs 26%, number needed to treat = 11, P = .0003). The effectiveness of acarbose became apparent at 1 year. More patients taking acarbose dropped out of the trial because of gastrointestinal side effects (31% vs 18%, number needed to harm = 8, P < .0001). When acarbose was stopped at the end of the study period, more patients who had been treated with acarbose developed diabetes in the next 3 months than did patients who had been treated with placebo (15% vs 11%).
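These figures follow directly from the reported absolute risk differences: the number needed to treat is the reciprocal of the absolute risk reduction, 1 / (0.26 – 0.17) = 1 / 0.09 ≈ 11, and the number needed to harm is the reciprocal of the absolute increase in dropouts from gastrointestinal side effects, 1 / (0.31 – 0.18) = 1 / 0.13 ≈ 8.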
Treating patients with impaired glucose tolerance with acarbose will delay the onset of type 2 diabetes for at least 3.3 years. It is unclear whether acarbose actually prevents diabetes or merely delays its onset, and whether it reduces the morbidity or mortality associated with diabetes. One third of patients who take acarbose will not tolerate the medication, which probably must be continued indefinitely to remain effective. Lifestyle modification, including dietary changes and regular moderate physical activity, should be the first-line therapy to prevent diabetes in patients with impaired glucose tolerance.2 Acarbose can be used for patients who are not willing or able to change behavior.
Azithromycin no more effective than vitamin C for acute bronchitis
ABSTRACT
BACKGROUND: The results of studies evaluating the effectiveness of antibiotic treatment for acute bronchitis are conflicting, some with uncertain reliability and validity. Although most studies of antibiotics have focused on cure of disease or reduction in symptoms, this study tested whether patients with acute bronchitis who were treated with azithromycin experienced greater improvements in health-related quality of life than those treated with vitamin C. The authors chose to compare azithromycin with vitamin C instead of traditional placebo because they believed potential patients might refuse to participate in the study if there was a chance they would receive a placebo. Evidence has shown that vitamin C at the doses used in this study is ineffective in the treatment of acute bronchitis or other respiratory illnesses, making the vitamin a reasonable placebo for this study.1
POPULATION STUDIED: The authors studied 220 adults with cough lasting 2–14 days who were diagnosed with acute bronchitis after presenting to an ambulatory screening clinic in Chicago, Illinois. Patients were excluded if they had any underlying lung disorder, clinical characteristics of pneumonia, antibiotic treatment within the previous 2 weeks, pregnancy, steroid treatment, or had been started on an angiotensin-converting enzyme inhibitor within the previous 4 weeks.
STUDY DESIGN AND VALIDITY: This study was a randomized, double-blinded, controlled trial with concealed allocation. Patients were randomized to receive a total of 1.5 g of either azithromycin or vitamin C over 5 days (500 mg on the first day, then 250 mg/day for 4 more days). All patients also received symptomatic care with dextromethorphan and an albuterol inhaler with a spacer. Trained research assistants interviewed patients on enrollment in the study to assess their baseline health-related quality of life. The interview, consisting of 22 questions adapted from similar instruments developed at McMaster University, was repeated on days 3 and 7. For each of the questions, patients were asked to rate how troubled they had been during the previous few days as a result of their bronchitis symptoms on a 7-point scale. Follow-up was for 7 days from the beginning of the study and was 85.9% complete. Analysis was by intention to treat.
OUTCOMES MEASURED: The primary outcome measured was health-related quality of life on day 7 of follow-up. Secondary end points were return to usual daily activities at follow-up and adverse effects.
RESULTS: The adjusted difference in health-related quality of life between the patients taking azithromycin and those taking vitamin C was not significant on day 7 of the study (difference = 0.03; 95% confidence interval [CI], –0.20 to 0.26). Overall, 89% of patients in both groups returned to work by day 7 (difference = 0.5%; 95% CI, –10% to 9%). No difference was noted in the frequency of adverse effects between the 2 groups.
Azithromycin is no more effective than vitamin C in treating acute bronchitis in healthy adults. Given the evidence that treatment with vitamin C is not effective in respiratory illnesses, azithromycin appears equally ineffective. With increasing health care costs and rising concerns about antibiotic resistance, azithromycin, and probably other antibiotics, should not be used to treat acute bronchitis in otherwise healthy adults.
Inhaled fluticasone superior to montelukast in persistent asthma
ABSTRACT
BACKGROUND: Asthma management guidelines recommend that patients with persistent asthma use asthma controller therapy in addition to as-needed short-acting beta-agonist therapy to improve symptom control, maintain pulmonary function, and decrease exacerbations. This study compared 2 asthma controllers, inhaled fluticasone and oral montelukast, with respect to clinical efficacy, patient preference, asthma-specific quality of life, and safety.
POPULATION STUDIED: The patients in this study were men and women aged 15 years and older with asthma, recruited from multiple centers across the United States. Nonsmoking patients were included if they had a forced expiratory volume in 1 second (FEV1) of 50% to 80% of predicted that reversed by at least 15% with bronchodilator use. Patients were then eligible for randomization if, after an 8- to 14-day run-in period, their FEV1 remained within 15% of initial values, they had used albuterol on at least 6 of the last 7 days, and they had asthma symptom scores of at least 2 (on a 0 to 5 scale) for at least 4 of the last 7 days.
STUDY DESIGN AND VALIDITY: This study was a double-blinded, randomized trial sponsored by the makers of fluticasone. Patients meeting initial inclusion criteria underwent an 8- to 14-day run-in period in which only short-acting beta-agonist use was allowed. Patients were then randomized to 1 of 2 treatment groups if they met the secondary inclusion criteria. Personal communication with the lead author confirmed that allocation assignment was concealed. Patients received either fluticasone 88 μg twice daily via metered dose inhaler (MDI) and montelukast placebo, or montelukast 10 mg daily with a placebo MDI. Patients kept daily records and had clinical evaluations at regular intervals for 24 weeks. Seventy-six percent of the patients completed the study.
OUTCOMES MEASURED: The primary outcome was percent change in FEV1. Other outcomes included peak flow rate, symptom-free days, daily albuterol use, asthma symptom scores, asthma quality-of-life scores, and patient-rated satisfaction with treatment. Safety was also assessed by reports of clinical adverse events and number of asthma exacerbations.
RESULTS: Using an intent-to-treat analysis, the fluticasone group had a significantly greater sustained change in FEV1 (22% vs 14%; P < .001). Significant differences were noted after just 2 weeks of treatment. Significant differences favoring fluticasone were also found in all secondary outcomes, including the patient-oriented outcomes of change in asthma symptom scores (–0.91 vs –0.57; P < .001), asthma quality-of-life scores (1.3 vs 1.0; P = .004), and patient-rated satisfaction with treatment (83% of fluticasone patients satisfied vs 66% of montelukast patients satisfied; P < .001). No differences were noted in overall incidence of adverse events between treatment groups, but significantly more fluticasone-treated patients reported hoarseness (9 vs 0; P = .002) and oropharyngeal candidiasis (8 vs 0; P = .008). The incidence of asthma exacerbations was similar (19 fluticasone-treated patients vs 21 montelukast-treated patients).
This study confirms earlier studies indicating that inhaled steroids should be first-line treatment for moderate-to-severe persistent asthma. When compared with montelukast, inhaled fluticasone showed greater improvements in clinical measures of asthma, as well as in patient-oriented measures such as symptom scores, quality-of-life scores, and patient-rated satisfaction. However, moderate-to-severe persistent asthma appears to require more therapeutic measures than low-dose fluticasone alone. Despite treatment, patients still used albuterol on more than half of the days, only one third of days were symptom-free, and symptom scores improved by less than 1 point on a 6-point scale.
Carbamazepine effective for alcohol withdrawal
ABSTRACT
BACKGROUND: Outpatient management of symptoms from acute alcohol withdrawal usually includes a tapering regimen of a benzodiazepine such as lorazepam (Ativan). Benzodiazepine use is usually limited, however, by the potential for medication abuse and side effects such as central nervous system impairment. Because studies have demonstrated that carbamazepine can be effective for the treatment of alcohol withdrawal symptoms, this study compared the effectiveness of carbamazepine with that of lorazepam.
POPULATION STUDIED: The 136 patients were self-referred and fulfilled Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV) criteria for alcohol dependence and alcohol withdrawal. Patients lived within 50 miles of the study site, had an admission blood alcohol level < 0.1 g/dL, had a Mini Mental State Examination score of at least 26, and had an admission score of at least 10 (out of a possible 20) on the Clinical Institute Withdrawal Assessment-Alcohol, revised (CIWA-Ar). Patients were excluded if they had a substance abuse syndrome other than alcohol dependence, nicotine dependence, or cannabis abuse; a major Axis I psychiatric disorder; a history of head injury; neurologic illness; or grossly abnormal laboratory values, or if they had used benzodiazepines, beta-blockers, calcium channel blockers, or antipsychotic agents within the past 30 days.
STUDY DESIGN AND VALIDITY: This was a randomized, double-blind trial comparing 2 different treatments for alcohol withdrawal. Allocation to treatment group was concealed from enrolling physicians. Patients received a 5-day taper of either lorazepam (6–8 mg decreasing to 2 mg) or carbamazepine (600–800 mg decreasing to 200 mg). Withdrawal symptoms were measured using the validated CIWA-Ar tool. Patients also completed a daily drinking log to assess alcohol use before, during, and 7 days after study completion. The study evaluated 89 patients after the treatment period for number of drinks taken per day.
OUTCOMES MEASURED: The primary outcomes were alcohol withdrawal symptoms, measured with the CIWA-Ar scale, and posttreatment alcohol use. Side effects were reported as a secondary outcome.
RESULTS: Both drugs were equally effective in reducing alcohol withdrawal symptoms. Over time, alcohol withdrawal symptoms were more likely to occur with lorazepam treatment (P = .007). After treatment, relapsing patients receiving carbamazepine had fewer drinks per day than those receiving lorazepam (1 vs 3; P = .003). Effectiveness varied based on whether patients had attempted alcohol detoxification in the past. Of the patients who reported multiple prior detoxifications, those receiving carbamazepine drank less than 1 drink per day, compared with 5 drinks per day in the lorazepam-treated group (P = .004). The overall frequency of side effects was the same for both groups; however, clinicians recorded dizziness and incoordination in more patients taking lorazepam than carbamazepine (22.7% vs 6.9%; P = .02). Pruritus occurred more often in the carbamazepine group than in the lorazepam group (18.9% vs 1.3%; P = .004).
Carbamazepine is an effective alternative to benzodiazepines for the outpatient treatment of alcohol withdrawal symptoms. Carbamazepine appears to be particularly effective for patients in whom detoxification failed in the past.
Homeopathy ineffective for asthma
ABSTRACT
BACKGROUND: Many individuals with asthma are allergic to house dust mites. The incidence and severity of asthma are increasing. More people are seeking complementary medical care, including homeopathy. Homeopathy attempts to mitigate disease by diluting the treatment without diluting the effect.
POPULATION STUDIED: The investigators recruited 1000 asthmatic outpatients from 38 general practices in Hampshire and Dorset, England. Of these, 327 tested positive for house dust mite allergy. Eighty-five patients were excluded for asthma that was either too mild or too well-controlled. Thus 242 subjects between 18 and 55 years old were randomized into the study. This group included both sexes; no note was made of race.
STUDY DESIGN AND VALIDITY: A double-blind, randomized, controlled design was used. A French manufacturer of homeopathic products prepared the active agent by making 30 sequential 1:100 dilutions of a house dust mite allergen (this “ultramolecular” preparation is an extremely dilute solution of the allergen). After a 4-week period to assess baseline symptoms, subjects were randomized to receive either an oral homeopathic immunotherapy preparation or a similarly prepared placebo in 3 doses over 24 hours. They were then followed for 16 weeks with 3 clinic visits and every-other-week symptom diaries.
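As a rough illustration of why such a preparation is described as “ultramolecular”: 30 sequential 1:100 dilutions give an overall dilution factor of (1/100)^30 = 10^-60. Because a mole contains only about 6 × 10^23 molecules, even a starting solution containing a full mole of allergen would be expected to retain about 6 × 10^-37 allergen molecules after dilution, which is effectively zero.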
OUTCOMES MEASURED: Primary outcomes were change in lung function as measured by forced expiratory volume in 1 second (FEV1) and quality of life as measured by the proportion of symptom-free days in each 7-day diary period. Other outcomes included peak expiratory flow, scores on an asthma visual analogue scale, and average mood scores.
RESULTS: This homeopathic therapy showed no significant improvement over placebo with regard to FEV1 (0.136 L/sec for the active agent vs 0.414 L/sec for placebo; 95% confidence interval [CI], 0.136–0.693) or mean improvement in quality of life (0.090 for the active agent vs 0.117 for placebo; 95% CI, –0.096 to 0.150). Nor was there any significant difference in any of the secondary outcomes. These results were independent of the subjects’ belief in complementary medicine. Interestingly, at different times during the study, improvement was noted in both the active therapy and placebo groups in FEV1, quality of life, and mood.
This oral homeopathic immunotherapy neither decreased symptoms nor improved lung function compared with placebo in the treatment of house dust mite allergy in asthmatic individuals. Based on this well-done trial, this therapy cannot be recommended for such patients. Because this was a placebo-controlled trial that showed no benefit, homeopathic immunotherapy should not be substituted for other efficacious pharmacological agents in the treatment of asthma.
Hemoccult tests are insensitive for upper gastrointestinal cancer
ABSTRACT
BACKGROUND: Fecal occult blood (Hemoccult) screening followed by colonoscopy has been shown to reduce colon cancer mortality, but uncertainty remains about the utility of upper endoscopy in further evaluation of patients with positive Hemoccult testing. This study addressed the risk of upper gastrointestinal cancer in patients whose Hemoccult test results are positive.
POPULATION STUDIED: The researchers used a cohort of 61,933 people aged 45 to 75 years in a defined region of Denmark who were followed from 1985 through 2000. They excluded patients with known colorectal neoplasia and distant metastases. The results from this population are likely to apply to the usual US family practice, although the researchers did not provide information about age distribution, dietary habits, alcohol or tobacco history, cancer history, or ethnicity, factors that may influence the development of upper gastrointestinal cancers.
STUDY DESIGN AND VALIDITY: Subjects were drawn from the screening arm of a population-based randomized trial of colon cancer screening. A total of 30,967 patients were offered the screening. After education about diet and medications, subjects were given nonrehydrated fecal occult blood tests biennially. Patients with positive Hemoccult tests were interviewed and examined, and underwent colonoscopy or double-contrast enema; those with carcinoma and/or adenoma were enrolled in a surveillance program. Upper endoscopy and other studies were performed only if warranted by symptoms. The county databases, supplemented by death certificates, the Danish National Register of Patients, and the National Cancer Register, were used to obtain information about malignant disease. Upper gastrointestinal cancers were defined as cancer of the esophagus, stomach, small intestine, and biliary and pancreatic systems. The sensitivity and positive predictive values of Hemoccult testing were calculated using all upper gastrointestinal cancers developing within 2 years.
OUTCOMES MEASURED: The primary outcomes were the sensitivity and positive predictive value of the Hemoccult test with respect to upper gastrointestinal cancer. Cost, patient and physician satisfaction, and impact on quality of life were not addressed.
RESULTS: From 1985 through 2000, 473 patients were diagnosed with upper gastrointestinal cancer in the overall study population, 199 of whom had upper gastrointestinal cancer diagnosed within 2 years of a negative fecal occult blood test. The sensitivity and positive predictive value of fecal occult blood testing for upper gastrointestinal cancers diagnosed within 2 years of a positive test were 4.8% and 0.57%, respectively. The presence of symptoms or anemia did not improve the performance of fecal occult blood testing as a screening test for upper gastrointestinal cancers.
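For clarity, sensitivity here is the proportion of all upper gastrointestinal cancers diagnosed within 2 years of testing that followed a positive test, that is, true positives / (true positives + false negatives), and positive predictive value is true positives / (true positives + false positives). Taking the 199 cancers diagnosed after a negative test as the false negatives, the reported 4.8% sensitivity implies on the order of 10 true positives, since 10 / (10 + 199) ≈ 4.8%.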
This study provides good evidence that patients with positive fecal occult blood testing have a low risk of upper gastrointestinal cancer. Clinicians should not routinely perform upper endoscopy to screen for cancer in patients whose Hemoccult test is positive. The presence of symptoms or anemia does not improve the performance of fecal occult blood as a diagnostic test, but clinicians should continue to evaluate symptoms carefully and order additional studies accordingly.
Caution necessary when interpreting results of outpatient endometrial sampling
ABSTRACT
BACKGROUND: Outpatient endometrial sampling in symptomatic women with abnormal vaginal bleeding is a common practice in primary care. Results from existing studies evaluating various outpatient office-based endometrial sampling techniques are inconsistent.
POPULATION STUDIED: The goal of this systematic quantitative review of the published literature was to determine the accuracy of outpatient endometrial biopsy in detecting endometrial cancer. The authors searched general bibliographic databases (MEDLINE and EMBASE) without language restrictions from 1980 through 1999 for articles comparing outpatient endometrial biopsy results with a reference (gold) standard (most commonly dilation and curettage, hysterectomy, or guided biopsy). Of 1369 studies initially screened, only 11 prospective observational or comparative cross-sectional studies met the inclusion criteria. These 11 studies enrolled a total of 1013 pre- and postmenopausal women with abnormal uterine bleeding; postmenopausal women represented nearly 80% of the study subjects. No additional patient information was reported. The prevalence of endometrial cancer in the study population was 6.3%.
STUDY DESIGN AND VALIDITY: The small number and poor quality of the existing studies significantly limited this analysis. Two authors independently reviewed the studies for inclusion, and disagreement was resolved by consensus or arbitration by a third reviewer. Prospective and consecutive recruitment of eligible women was considered adequate for inclusion, whereas convenience sampling was considered inadequate. Blinding was considered adequate if the pathologists providing gold standard histological diagnoses were unaware of the results of the outpatient biopsy and inadequate if they were aware of the results. A decision to perform a reference test only in response to the results of an outpatient biopsy was considered inadequate.
OUTCOMES MEASURED: The primary outcome measure was the accuracy with which endometrial cancer was diagnosed by the various sampling techniques. Secondary outcomes were device failures and rates of inadequate specimens.
RESULTS: The pooled likelihood ratios for endometrial cancer using the Pipelle outpatient device with adequate endometrial sampling were 64.6 (95% confidence interval [CI], 22.3–187.1) for positive results and 0.1 (95% CI, 0.04–0.28) for negative results. The posttest probability given the initial prevalence of 6.3% with a positive outpatient test was 81.3% (95% CI, 52.4–94.4) and decreased to 0.7% (95% CI, 0.2–2.4) for a negative test. Inadequate samples were considered as negative results, which increased the accuracy. The overall failure rate (inability to perform the procedure for one reason or another) for outpatient biopsy was 7%.
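The posttest probabilities quoted above follow directly from the 6.3% pretest prevalence and the pooled likelihood ratios via the odds form of Bayes' rule. The short sketch below reproduces that arithmetic; it uses only the figures reported in the review.

```python
# Posttest probability from pretest probability and a likelihood ratio
# (odds form of Bayes' rule), using the figures reported in the review.

def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

prevalence = 0.063                                  # pretest probability of endometrial cancer
print(posttest_probability(prevalence, 64.6))       # positive Pipelle result -> ~0.81
print(posttest_probability(prevalence, 0.1))        # negative Pipelle result -> ~0.007
```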
Caution is necessary when using office-based endometrial sampling techniques, including the Pipelle, to evaluate women with abnormal uterine bleeding. An abnormal histological finding is highly accurate and likely to represent true disease. Negative results, including inadequate samples, must be interpreted with caution, because the false-negative rate for excluding endometrial cancer reported in this analysis was 4/1000 women sampled. Many clinicians and their patients may find this false-negative rate clinically unacceptable, while others may find reassurance in a “low-risk” assessment. In cases of abnormal uterine bleeding in which symptoms persist despite a negative biopsy, further evaluation and input from individual patients are recommended.
Losartan more effective than atenolol in hypertension with left ventricular hypertrophy
ABSTRACT
BACKGROUND: Left ventricular hypertrophy may be responsible for the higher risk of cardiovascular events that hypertensive patients suffer even after blood pressure reduction. Because angiotensin II is associated with the development of left ventricular hypertrophy, selective blockade of angiotensin II may reverse the hypertrophy and lead to decreased cardiovascular morbidity beyond just lowering blood pressure.
POPULATION STUDIED: A total of 9193 adults, aged 55 to 80 years, with hypertension (previously treated or untreated) and electrocardiographic (ECG) evidence of left ventricular hypertrophy were enrolled in the trial. Study participants were from Northern Europe and the United States; 54% were female and 92% were white. Patients with secondary hypertension, heart failure or left ventricular ejection fraction of 40% or less, history of myocardial infarction (MI) or stroke within the last 6 months, or angina pectoris requiring beta-blockers or calcium channel blockers were excluded. Also excluded were patients with disorders that required treatment with losartan or other angiotensin II type 1-receptor blockers, atenolol or other beta-blockers, hydrochlorothiazide, or angiotensin-converting enzyme (ACE) inhibitors.
STUDY DESIGN AND VALIDITY: After a run-in period with placebo, 9222 patients were randomized in a double-blind fashion to receive either losartan (50 mg daily) or atenolol (50 mg daily). Of these, 29 patients were excluded prior to group assignment and the remaining 9193 were included in an intention-to-treat analysis. The authors did not specifically state whether the treatment allocation process was concealed. In addition to either losartan or atenolol, patients were treated with hydrochlorothiazide and other antihypertensive medications as needed to obtain a blood pressure goal of less than 140/90 mm Hg. An independent clinical committee blinded to treatment group assignment determined the validity of all cardiovascular end points.
OUTCOMES MEASURED: The primary end point was cardiovascular morbidity and death, a composite end point consisting of stroke, MI, or cardiovascular death. The authors also measured individual cardiovascular events (stroke, MI, death) separately. Extensive data on blood pressure, use of additional medications, changes in ECG evidence of left ventricular hypertrophy, and adverse events were also compared.
RESULTS: Treatment groups had similar demographics, including baseline vital signs, ECG findings, cardiovascular risk scores, and mean arterial blood pressure on treatment. Patients in the losartan group had a significantly lower relative risk (RR) of the composite end point (stroke, MI, or cardiovascular death; RR = 0.87; 95% confidence interval [CI], 0.77–0.98; numbers needed to treat [NNT] = 244 patients per year). On individual outcomes, patients in the losartan group had a reduced risk of stroke (RR = 0.75; 95% CI, 0.63–0.89), but no statistically significant reduction in cardiovascular mortality (RR = 0.89; 95% CI, 0.73–1.07), MI (RR = 1.07; 95% CI, 0.88–1.31), or all-cause mortality (RR = 0.90; 95% CI, 0.78–1.03).
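The NNT of 244 patients per year implies an absolute risk reduction of roughly 0.4% per year. The sketch below back-calculates the implied composite event rates from the published RR and NNT; the rates themselves are inferred for illustration and were not taken from the trial report.

```python
# Back-of-the-envelope check: NNT (per patient-year) = 1 / absolute risk reduction.
# RR = 0.87 and NNT = 244 per year are the published figures; the event rates
# below are inferred from them, not reported values.

rr = 0.87
nnt_per_year = 244

arr_per_year = 1 / nnt_per_year            # ~0.0041, i.e. ~0.41% absolute risk reduction per year
atenolol_rate = arr_per_year / (1 - rr)    # implied composite event rate on atenolol, ~3.2% per year
losartan_rate = atenolol_rate * rr         # implied rate on losartan, ~2.7% per year

print(round(atenolol_rate * 100, 1), round(losartan_rate * 100, 1))
```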
Losartan may reduce cardiovascular morbidity and related deaths in hypertensive patients with documented left ventricular hypertrophy beyond that expected from only lowering blood pressure, especially through a reduction in stroke risk. However, this benefit was small in a select group of patients and no additional reduction was demonstrated in all-cause mortality compared with less expensive atenolol. The benefit of losartan over atenolol was more pronounced in a separate trial of hypertensive diabetic patients with left ventricular hypertrophy (NNT = 122 patients per year).1 Losartan was previously shown to be inferior to an ACE inhibitor agent (captopril) in the treatment of heart failure.2 Thus, there is no reason to believe that the benefit of losartan shown in this study is superior to (and may actually be less than) that of less expensive ACE inhibitors.
Utility of Factor V Leiden testing for idiopathic venous thromboembolism is unclear
ABSTRACT
BACKGROUND: Factor V Leiden deficiency is associated with an increased risk of initial venous thromboembolism. The prevalence of Factor V Leiden deficiency varies with ethnicity and age of onset of initial venous thromboembolism. If Factor V Leiden deficiency predicts recurrent venous thromboembolism, putting affected patients on extended anticoagulation therapy may be beneficial. This study evaluated the cost effectiveness of testing patients for Factor V Leiden deficiency after initial venous thromboembolism, taking into account costs and complications associated with recurrent venous thromboembolism, compared with ongoing anticoagulation therapy.
POPULATION STUDIED: This study was a decision analysis that assumed a base case of a 35-year-old woman with initial venous thromboembolism. Subpopulations for sensitivity analyses were based on ethnicity, prevalence of Factor V Leiden deficiency, age at onset, precipitating factors for venous thromboembolism, length of therapeutic intervention, and morbidities attributed to anticoagulation. The risk for recurrent venous thromboembolism in patients homozygous for Factor V Leiden deficiency is high; this study focused on heterozygotes, a population whose recurrence risk is less well defined.
STUDY DESIGN AND VALIDITY: This decision analysis used sound methods and the sensitivity analyses were appropriate. It is unclear whether a systematic literature review was performed. A pivotal factor was the assumption of an increased risk of recurrence in patients with Factor V Leiden deficiency as compared with patients without the deficiency. The authors based this assumption on an 8-year study that showed an increased risk of recurrence of 2.4 (95% confidence interval [CI], 1.3–4.5, n = 41). All of the recurrences were detected in the first 3 years. However, other studies, of somewhat shorter duration, demonstrated risk ratios of 1.1 (95% CI, 0.7–1.6, n = 112) within 4 years1 and 0.5 (95% CI, 0.1–1.8, n = 37) within 2 years.2
OUTCOMES MEASURED: The primary outcomes reported were the risk of recurrence of deep venous thrombosis, morbidity associated with therapeutic intervention, and the cost effectiveness of 3 different treatment strategies.
RESULTS: All Factor V Leiden-deficient patients were assumed to have a 7.4% per-year risk of recurrence. Various models were constructed based on the duration of that increased risk. The base case assumed a 0% risk of recurrent deep venous thrombosis after 3 years; the modified-base case strategy assumed that patients returned to the population average of 2.3% per year after 3 years; and the constant-rate model assumed a continued 7.4% per-year risk of recurrent deep venous thrombosis. In all models, testing and treating for life was the most expensive strategy. In the base and modified-base models, testing and treating for 3 years was the most cost effective. Testing is not indicated if a patient population has a risk of major hemorrhage of more than 1.9% per year, a low prevalence of Factor V Leiden deficiency, a clear precipitant for venous thromboembolism, or a relative risk of recurrent venous thromboembolism attributable to Factor V Leiden deficiency of less than 1.9.
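To make the modeled annual rates more concrete, the compounding calculation below converts them into cumulative 3-year recurrence risks under the base-case assumptions; these cumulative figures are derived here for illustration and were not reported by the authors.

```python
# Cumulative recurrence risk over n years given a constant annual risk,
# illustrating the decision-analysis assumptions (7.4% per year for Factor V
# Leiden heterozygotes vs the 2.3% per year population average). Derived for
# illustration; not figures reported by the authors.

def cumulative_risk(annual_risk, years):
    return 1 - (1 - annual_risk) ** years

print(round(cumulative_risk(0.074, 3), 2))   # ~0.21 over 3 years with Factor V Leiden
print(round(cumulative_risk(0.023, 3), 2))   # ~0.07 over 3 years at the population average
```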
If the assumptions made in this study are true, then patients at low risk for long-term anticoagulation should be tested for Factor V Leiden deficiency, and if positive, treated for 3 years, pending longer-term studies. However, studies have not clearly defined an increased risk for recurrent venous thromboembolism in patients with Factor V Leiden deficiency. Until the true relative risk is ascertained, routine screening of patients with initial idiopathic venous thromboembolism for Factor V Leiden deficiency should not be used to determine length of anticoagulation.
Albuterol via metered-dose inhaler and nebulizer equivalent in adults
ABSTRACT
BACKGROUND: Historically, nebulizers have been preferred over metered-dose inhalers (MDIs) for the treatment of asthma exacerbations, although numerous studies have shown their equivalence. A systematic review of 21 randomized trials supported the equivalence of an MDI with spacer and a nebulizer; the method of albuterol delivery did not affect hospital admission rates, length of stay in the emergency department, or measures of pulmonary function.1 Advantages of MDIs may include lower costs, less excess drug exposure, and easier use for patients and physicians.
POPULATION STUDIED: The study population consisted of all patients older than 18 years who presented to an emergency department over a 2.5-year period with an asthma exacerbation (2342 visits, 1429 patients). Most patients were African American (75.4%). Most were women (58.6%), and the mean age was 35.5 ± 13.5 years.
STUDY DESIGN AND VALIDITY: The study was a large, prospective, unblinded, and nonrandomized trial consisting of 2 phases. For the first 12 months, physicians, using standard National Institutes of Health guidelines, began treatment with a nebulizer (913 visits). For the next 18 months, physicians began treatment with albuterol delivered via MDI and spacer (1429 visits). The dose was 5 puffs, then 3 to 5 puffs every 20 minutes as needed. At the time of discharge from the emergency department during the MDI phase of the study, patients received a peak flow meter, an MDI and spacer, an inhaled corticosteroid, written materials, and counseling by emergency department nurses.
OUTCOMES MEASURED: The outcomes measured were peak expiratory flow rate (PEFR), oxygen saturation (SaO2), heart and respiratory rates, total albuterol dose, and the more patient-oriented outcomes of hospital admission rate, relapse rate, time in the emergency department, and costs.
RESULTS: In the MDI phase, post-albuterol PEFR was 11.0% higher (342 L/min vs 308 L/min; P = .001) and the change in PEFR was 13.3% greater (127 L/min vs 112 L/min; P = .002). The difference in change in SaO2 was also statistically significant (P = .043), and the total albuterol dose was significantly lower in the MDI group (1125 μg vs 6700 μg; P = .001). However, these differences did not result in significantly lower hospital admission rates. Relapse rates were significantly lower at both 14 and 21 days in the MDI phase (6.6% and 10.7% vs 9.6% and 13.5%; P < .01 and P < .05, respectively). Patients treated with MDIs spent 6.5% less time in the emergency department (163.6 min vs 175.0 min; P = .007). The difference in visit charges was not significant.
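The percentage differences quoted above are simple relative differences between the MDI and nebulizer phases; the quick check below reproduces them from the reported values (small rounding differences aside).

```python
# Quick check of the relative differences reported between study phases.
post_pefr_mdi, post_pefr_neb = 342, 308        # L/min, post-albuterol PEFR
delta_pefr_mdi, delta_pefr_neb = 127, 112      # L/min, change in PEFR

print(round((post_pefr_mdi - post_pefr_neb) / post_pefr_neb * 100, 1))     # 11.0%
print(round((delta_pefr_mdi - delta_pefr_neb) / delta_pefr_neb * 100, 1))  # ~13.4% (reported as 13.3%)
```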
This study is yet another to show that delivery of albuterol by MDI and spacer is as effective as delivery by nebulizer in adults with asthma presenting to the emergency department. Patients treated with an MDI and spacer had greater improvement in peak flow, and hospital admission rates did not differ. This trial was not well designed, but its results echo the many other studies, using tighter methods, that show equivalence.1 Although there may be some patients and practice situations for which the nebulizer is preferred, the MDI and spacer can safely be a first-line treatment much of the time. Incorporating MDI use in the treatment of acute asthma exacerbations may help dispel the misconception of many patients that the nebulizer is a more “powerful” way of treating asthma.