What is the optimal protocol for diagnosis of ectopic pregnancy?
BACKGROUND: Ectopic pregnancy is a major cause of morbidity and mortality in women of reproductive age, but uncertainty remains about the best strategy for early diagnosis.
POPULATION STUDIED: This study used a hypothetical cohort of 10,000 women with first trimester pregnancies (positive pregnancy test result) who presented to an inner-city emergency department with abdominal pain or bleeding. Patients with any evidence of intra-abdominal hemorrhage were excluded. Their assumed prevalences of ectopic, intrauterine, and nonviable pregnancies were 9.4%, 61.1%, and 29.9%, respectively, based on a weighted average of 3 studies from inner-city teaching hospitals. The availability of 24-hour endovaginal ultrasound and human chorionic gonadotropin (HCG) testing was assumed. The results derived from this population are likely to be similar to those seen by family physicians, but caution should be exercised in generalizing this information to settings such as office practices or rural emergency departments.
STUDY DESIGN AND VALIDITY: The decision analysis defined 6 diagnostic algorithms: (1) transvaginal ultrasound (US) followed by quantitative HCG; (2) quantitative HCG followed by US; (3) progesterone followed by US and quantitative HCG; (4) progesterone followed by quantitative HCG and US; (5) US followed by repeat US; and (6) clinical examination. In practice, ultrasound and HCG are often done simultaneously, but otherwise the descriptions are appropriate. Strategies involving transabdominal US, serial HCGs, and methotrexate were not included. It was assumed that dilatation and curettage (D&C) and laparoscopy were 100% diagnostic and that US does not mistake intrauterine pregnancy for ectopic pregnancy. One- and two-way sensitivity analyses of test characteristics were performed using values obtained from the literature; prevalence of disease and performance of clinical examination were not included in the analysis. The methodologic strength of this study was fair. This clinical question is well suited to decision analysis; the different options are described well; and the analysis addresses appropriate outcomes, including unnecessarily interrupted pregnancies. The major weakness is a lack of information about the literature search and sensitivity analysis, which makes it difficult to assess whether the authors' underlying assumptions are valid. In addition, the authors assume that the highest priority is to avoid missed ectopic pregnancies. Although this is reasonable, no effort was made to query the literature, other professionals, or patients about their preferred priorities, especially given that most of the strategies detect the large majority of ectopic pregnancies.
OUTCOMES MEASURED: The primary outcome was the number of missed ectopic pregnancies per 10,000 women. Secondary outcomes included the number of interrupted intrauterine pregnancies; the number of D&Cs, laparoscopies, USs, blood collections, and admissions; the days until diagnosis; and the hospital charges.
RESULTS: No ectopic pregnancies were missed with the strategies that involved only US and HCG. Of those 2 strategies, US as the first step led to the fewest interrupted intrauterine pregnancies (70 vs 122). The progesterone algorithms missed more women with ectopic pregnancies (24) and required more surgeries but had the fewest interrupted pregnancies (25 and 39). US followed by repeat US missed no ectopic pregnancies and had the shortest time until diagnosis but the highest hospital charges. Clinical examination alone missed all ectopic pregnancies (940). Sensitivity analysis confirmed that the strategy of HCG followed by US was the most favorable, provided the sensitivity of US for diagnosing intrauterine pregnancy was above 93%.
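The cohort arithmetic behind these counts is straightforward; the sketch below (illustrative only, reproducing the assumed case mix rather than the authors' full decision tree) shows where the 940 ectopic pregnancies come from.

```python
# Illustrative arithmetic only: applies the assumed prevalences to the
# hypothetical cohort; it does not reproduce the authors' decision model.
cohort = 10_000
prevalence = {"ectopic": 0.094, "intrauterine": 0.611, "nonviable": 0.299}

expected = {dx: round(cohort * p) for dx, p in prevalence.items()}
print(expected)  # {'ectopic': 940, 'intrauterine': 6110, 'nonviable': 2990}

# Clinical examination alone detects none of the ectopic pregnancies in the
# model, so all 940 are "missed"; the US/HCG strategies miss none.
```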
This decision analysis provides fair evidence that transvaginal US followed by quantitative HCG is the optimal strategy for diagnosing ectopic pregnancy. Obtaining HCG before ultrasound also performs very well, but strategies starting with progesterone miss a number of ectopic pregnancies. Clinicians should be cautious, however, about generalizing these results to patient populations with a different prevalence of ectopic pregnancy or with different values regarding the interruption of intrauterine pregnancies, and should look for information that incorporates the diagnostic and therapeutic options available in their settings.
Should patients with nonulcer dyspepsia and Helicobacter pylori be treated with antibiotics?
BACKGROUND: Dyspepsia (defined as pain or discomfort centered in the upper abdomen) is a common primary care problem, but optimal management is unclear. Nonulcer dyspepsia is the most common subtype, and controversy exists regarding the role of H pylori in this subtype. This systematic review assesses the efficacy of treatment for H pylori on symptoms of dyspepsia.
POPULATION STUDIED: The authors combined the results of 10 treatment studies that included 1747 patients with nonulcer dyspepsia and documented H pylori infection. The investigators included only randomized controlled trials lasting at least 1 month that used antibiotic or bismuth therapy effective against H pylori. No information was given on age, sex, body mass index, smoking, duration of symptoms, referral pattern, diagnostic workup, or comorbidity, making generalizability to the usual family practice setting difficult.
STUDY DESIGN AND VALIDITY: The literature search included MEDLINE (1984-1999, all languages, through 2 databases), gastroenterology meeting abstracts, reference lists from review and other articles, and consultation with manufacturers of H pylori medications, a working consensus panel, and other experts. Information on the study sample, intervention, study design, study duration, quality, and outcomes was obtained by 2 investigators independently; agreement was excellent (κ=0.92) for methodologic quality, and differences were resolved by consensus. This systematic review was well done. Its strengths included the thorough search and the quality of the review process. A major weakness is the use of a symptom-based diagnostic category (nonulcer dyspepsia), which recent studies have suggested does not discriminate efficiently between specific diseases. When combined with the absence of information about the subjects, it is difficult to know what clinical population is being treated. Other weaknesses include the lack of information on studies for which only abstracts were found and the lack of assessment of the impact of age, obesity, duration of symptoms before treatment, and other factors that might affect the outcomes measured.
OUTCOMES MEASURED: The primary outcome was improvement of symptoms. A secondary analysis addressed eradication of H pylori and its relationship to symptomatic improvement. The article did not address other outcomes important in primary care of patients with dyspepsia, such as medication cost, medication side effects, or impact on function or quality of life.
RESULTS: Ten studies (7 published studies and 3 abstracts) were found, but only 7 provided information on treatment success at 1 month. The available data did not permit calculation of an effect size for symptoms. Formal testing for heterogeneity of effect was statistically significant (P=.04). The authors did not provide an explanation for this heterogeneity, calling into question the validity of aggregating the data. The absolute difference between treatment and control groups ranged from 5% in favor of the control group to 20% in favor of the treatment group, and only one trial achieved a statistically significant difference. A pooled odds ratio found no significant differences between treatment and control groups; sensitivity analysis addressing methodologic quality, precision of definition of dyspepsia, and duration of follow-up did not change this finding.
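For readers unfamiliar with the mechanics, the sketch below shows how a fixed-effect pooled odds ratio and a Cochran Q heterogeneity statistic are computed from 2x2 trial counts. The counts are hypothetical; the review's individual trial data are not reproduced here.

```python
import math

# Hypothetical (treated events, treated n, control events, control n) for
# three trials -- NOT the review's data, just the pooling machinery.
trials = [(30, 100, 25, 100), (18, 80, 22, 80), (40, 120, 28, 120)]

log_ors, weights = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    log_ors.append(math.log((a * d) / (b * c)))
    weights.append(1 / (1/a + 1/b + 1/c + 1/d))  # inverse Woolf variance

pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
print(f"pooled OR = {math.exp(pooled):.2f}; Cochran Q = {q:.2f} on "
      f"{len(trials) - 1} df")  # a significant Q flags heterogeneity
```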
This systematic review provides good evidence that specific treatment of H pylori in patients with nonulcer dyspepsia does not improve symptoms. Acid suppression medications (whether over the counter or prescription) are reasonable alternatives and may be less expensive and have fewer side effects. More broadly, these results call into question the value of routine testing for H pylori in patients who do not have ulcers. Clinicians should keep in mind that the classification and diagnostic workup of subtypes of dyspepsia and gastroesophageal reflux disease on the basis of symptoms remains in flux. Alarm signs (weight loss, anemia, age older than 50 years, early satiety, heme-positive stool) suggest the need for endoscopy.
Is once- or twice-a-day enoxaparin as effective as unfractionated heparin for the treatment of venous thromboembolism (VTE)?
BACKGROUND: Several studies have shown that initial treatment with subcutaneous low-molecular-weight heparin is as effective as intravenous unfractionated heparin in preventing the sequelae of VTE. This randomized controlled trial addressed this issue with a large population that included patients with pulmonary embolus (PE).
POPULATION STUDIED: A total of 900 patients from 74 hospitals in 16 countries were enrolled. Subjects were adults with venogram- or ultrasound-proven lower extremity deep venous thrombosis, symptomatic PE with high-probability ventilation-perfusion (V/Q) scan, or a positive angiogram with radiologic evidence of a lower extremity VTE. Exclusion criteria included previous administration of heparin or warfarin for more than 24 hours, risk for hemorrhage, and allergies to the medications or to pork products. The average age was 61 years; 55% were men. Twenty-four percent had had a previous VTE; 16% had cancer; and 32% had a concurrent PE. The study population seems similar to that seen by family physicians, although more information about referral pattern and local diagnostic and therapeutic protocols would be valuable.
STUDY DESIGN AND VALIDITY: This was a randomized unblinded trial. Subjects received 1 of 3 antithrombotic regimens: heparin bolus and continuous infusion to achieve a target activated partial thromboplastin time of 55 to 80 seconds, enoxaparin 1 mg per kg twice daily, or enoxaparin 1.5 mg per kg once daily. These regimens were continued for at least 5 days. Warfarin was started within the first 3 days and was continued for at least 3 months, adjusted to maintain an international normalized ratio between 2.0 and 3.0. All subjects received baseline V/Q scans or angiograms. Although the clinicians were not masked to treatment status, all imaging studies and outcomes were reviewed by committees masked to treatment status. The maker of enoxaparin sponsored and performed the study.
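As a concrete illustration of the two enoxaparin regimens, the per-dose arithmetic for a hypothetical 80-kg patient follows (the weight is assumed for illustration; dosing in the trial followed the protocol above).

```python
# Hypothetical 80-kg patient; per-dose amounts under the two trial regimens.
weight_kg = 80
print(f"enoxaparin 1 mg/kg twice daily:  {1.0 * weight_kg:.0f} mg per dose")  # 80 mg
print(f"enoxaparin 1.5 mg/kg once daily: {1.5 * weight_kg:.0f} mg per dose")  # 120 mg
# Unfractionated heparin, by contrast, is titrated to an aPTT of 55-80 s.
```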
OUTCOMES MEASURED: The major outcomes were recurrence or worsening of VTE or PE; major hemorrhage (ie, a hemoglobin drop >2.0 g/dL, transfusion of 2 or more units, bleeding requiring intervention, or death); thrombocytopenia; and other side effects. Cost and patient satisfaction were not addressed.
RESULTS: The groups were similar at baseline. There was no significant difference in recurrence or worsening of VTE among patients in the unfractionated heparin group (4.1%), the once-daily enoxaparin group (4.4%), and the twice-daily enoxaparin group (2.9%). New PE was rare in all 3 groups (4/900). Regardless of treatment, patients with cancer or symptomatic PE at baseline were at higher risk of subsequent VTE (odds ratio [OR]=3.7; 95% confidence interval [CI], 1.3-11 and OR=3.4; 95% CI, 1.55-7.3, respectively). These findings were unchanged when analysis was restricted to subjects for whom all data were available. Similarly, there was no significant difference among the groups in the incidence of major hemorrhage (2.1%, 1.7%, and 1.3%, respectively), death (3.1%, 3.7%, and 2.2%, respectively), or thrombocytopenia. Adherence in both enoxaparin groups was better than in the unfractionated heparin group (83.6% and 85.3% vs 75.9%, respectively).
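The odds ratios above are reported with log-method confidence intervals; the sketch below shows the standard Woolf calculation from a 2x2 table. The cell counts are hypothetical, chosen only so the output lands near the reported OR of 3.7 (95% CI, 1.3-11).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI (Woolf/log method) from 2x2 counts:
    a/b = events/non-events with the risk factor, c/d = without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Hypothetical counts; the trial's raw cells are not given in this summary.
print(odds_ratio_ci(6, 50, 10, 308))  # ~ (3.7, 1.3, 10.6)
```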
This study provides additional evidence that low-molecular-weight heparin is a safe and effective alternative to intravenous unfractionated heparin for initial treatment of acute DVT. For patients for whom outpatient treatment is feasible, enoxaparin is less expensive and is easier to administer than intravenous heparin. These results do not imply that treatment of PE in the outpatient setting is appropriate; likewise, the finding of good results with once-daily dosing is promising but requires confirmation with more rigorous methods. Clinicians should look for studies that address the value of initial overnight hospitalization for VTEs, the impact of the out-of-pocket expense of enoxaparin, and the effectiveness of this intervention in nonclinical trial settings.
Which patients with ulcer- or reflux-like dyspepsia will respond favorably to omeprazole?
BACKGROUND: Dyspepsia is a common primary care problem, but optimal management remains unclear. In the past decade, attempts have been made to distinguish subtypes of ulcer-like (nighttime epigastric pain), reflux-like (heartburn, regurgitation), and dysmotility-like (bloating, distention, flatulence, nausea) dyspepsia, but there is overlap among the categories. Also, 50% to 60% of dyspeptic patients fit no single pattern. The authors of this study attempted to define characteristics of patients with dyspepsia who responded to omeprazole.
POPULATION STUDIED: A total of 471 Danish primary care patients aged 18 to 65 years with ulcer- or gastroesophageal reflux (GERD)-like dyspepsia were enrolled. The protocol excluded patients with dysmotility-like or untypeable dyspepsia and those with a history of peptic ulcer disease, reflux esophagitis, alarm symptoms (weight loss, dysphagia, bloody or black stools, anemia, jaundice), or current nonsteroidal anti-inflammatory drug use. The average age was 42 years, and the patients were evenly split by sex and smoking status. Approximately half (45%) had experienced symptoms for less than 1 month. On average the patients were slightly overweight, with a mean body mass index of 24.6. Thirty-nine percent had used H2-blockers or antacids in the past month, and none had undergone endoscopy or laboratory investigations. Thus, although the clinical characteristics seem similar to those of patients presenting to family physicians in the United States, generalization to all patients with dyspepsia should be cautious.
STUDY DESIGN AND VALIDITY: This study is a reanalysis of data from a randomized controlled trial of omeprazole for dyspepsia, undertaken to derive a clinical prediction rule for which patients with dyspepsia will respond to omeprazole. Patients received omeprazole 20 mg or placebo daily for 2 weeks and were randomly divided into a sample used to develop the decision rule (n=236) and a sample used to test it (n=235). Logistic regression was used to identify the relationship between specific symptoms and complete resolution of symptoms with omeprazole. The authors then developed a rule for predicting which dyspeptic patients respond to omeprazole and validated it in the test sample.
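The derivation/validation design can be mimicked in a few lines; the sketch below uses synthetic data (the trial's patient-level data are not available here, and the predictor names and effect sizes are invented) to show the split-sample approach of fitting a logistic model on one half and checking it on the other.

```python
# Split-sample derivation and validation of a prediction rule on synthetic
# data; predictors and coefficients are placeholders for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 471                                   # same overall n as the trial
X = rng.normal(size=(n, 4))               # stand-ins for BMI, night pain, nausea, antacid use
logit = 0.8*X[:, 0] + 0.6*X[:, 1] - 0.7*X[:, 2] + 0.5*X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated complete response

# Mirror the derivation (n=236) / test (n=235) split with a 50/50 division.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
rule = LogisticRegression().fit(X_dev, y_dev)
print(f"test-half accuracy: {rule.score(X_test, y_test):.2f}")
```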
OUTCOMES MEASURED: The major outcome was a list of symptoms associated with therapeutic response; these symptoms were used to develop a clinical decision rule to predict response to omeprazole. Patient satisfaction, adverse effects, and cost were not addressed.
RESULTS: High body mass index, nighttime pain, and recent antacid or H2-blocker use predicted a good response to omeprazole, while nausea predicted a poor response. The prediction rule incorporating these variables identified patients likely to respond to omeprazole (number needed to treat=2.6). Validity testing confirmed these findings.
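A number needed to treat of 2.6 corresponds to an absolute difference in response rates of roughly 38 percentage points; the one-line relationship is shown below with hypothetical response rates chosen to match.

```python
# NNT is the reciprocal of the absolute risk reduction (ARR). The response
# rates here are hypothetical, picked only to reproduce the reported NNT.
response_omeprazole, response_placebo = 0.65, 0.27
arr = response_omeprazole - response_placebo
print(f"ARR = {arr:.2f}, NNT = {1/arr:.1f}")  # ARR = 0.38, NNT = 2.6
```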
This study provides good evidence that for patients with ulcer- or reflux-like dyspepsia, those with high body mass index, nighttime pain, absence of nausea, and recent antacid or H2-blocker use are more likely to respond to omeprazole 20 mg daily. Clinicians should keep in mind that the patients described in this study represent uninvestigated patients with ulcer- or GERD-like dyspepsia with new onset pain and that they may not be representative of all patients in primary care with dyspepsia. Despite this limitation, this study represents an important step forward in identifying characteristics of primary care patients with dyspepsia that predict response to therapy.
Is oral dexamethasone as effective as intramuscular dexamethasone for outpatient management of moderate croup?
BACKGROUND: Recent meta-analyses have concluded that steroids ameliorate croup, but questions remain about the effectiveness of oral dosing.
POPULATION STUDIED: A total of 277 children with moderate croup were enrolled from the pediatric emergency department of an academic medical center. Moderate croup was defined as hoarseness and barking cough associated with retractions or stridor at rest. Children with mild disease (barky cough without retractions) or severe croup (cyanosis, severe retractions, or altered mental status) were excluded. Other exclusions were reactive airway exacerbation, epiglottitis, pneumonia, upper airway anomalies, immunosuppression, recent steroid use, or symptoms present for more than 48 hours. The mean age was 2 years; 69% were boys. Eighty-five percent had been ill for more than 24 hours, and 66% had a fever. Thus, the patients seem similar to those seen in family practice offices, but more information about referral pattern, socioeconomic status, diagnostic workup, and clinical status would be valuable in assessing the generalizability of this trial to nonacademic emergency department settings.
STUDY DESIGN AND VALIDITY: This was a single-blinded randomized controlled trial. Patients were randomized to a single dose of dexamethasone (0.6 mg/kg, maximum dose 8 mg) administered either orally or intramuscularly (IM). The oral medication was administered as a crushed tablet mixed with flavored syrup or jelly. Nurses and parents knew the treatment status; physicians assessing the child after treatment were unaware of the mode of administration. No routine follow-up appointment was given after discharge, but an investigator masked to treatment assignment telephoned caretakers 48 to 72 hours after treatment to determine unscheduled returns for care and the child's clinical status. The sample size was calculated to provide a power of 0.8 to detect a 10% difference in return visits. Student t tests and chi-square tests were used to analyze the data.
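The power calculation corresponds to the standard two-proportion sample-size formula; a sketch follows, with hypothetical baseline and treatment return rates (the paper's assumed rates are not stated in this summary).

```python
# Sample size per arm for comparing two proportions (normal approximation).
# The 35% vs 25% rates are assumptions for illustration only.
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.35, 0.25))  # ~329 per arm under these assumed rates
```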
OUTCOMES MEASURED: The primary outcome was parental report of return for further care after discharge. Unscheduled returns were defined as the subsequent need for additional steroids, racemic epinephrine, and/or hospitalization. A secondary outcome was the caregiver assessment of symptom improvement at 48 to 72 hours. Outcomes important for primary care providers were not measured, such as caretaker satisfaction with treatment; missed school, daycare, or work; or costs for parents or for the hospital.
RESULTS: The groups were similar at the outset. There were no statistically significant differences between patients receiving IM versus oral dexamethasone in unscheduled returns (32% vs 25%, respectively) or in unscheduled returns meeting failure criteria (8% vs 9%, respectively), and there was no difference in caretaker reports of symptomatic improvement. Only 1 of 138 children in the oral group had emesis. Patients receiving racemic epinephrine at the first visit were more likely to return, regardless of the route of dexamethasone administration.
This study provides evidence that a single dose of dexamethasone (0.6 mg/kg, maximum dose 8 mg) given orally is as effective as injectable administration for the outpatient treatment of moderate croup. Oral dexamethasone given in a syrup or jelly is well tolerated. Clinicians should feel comfortable using either oral or IM dexamethasone to treat patients with moderate croup.
Is budesonide or nedocromil superior in the long-term management of mild to moderate asthma in children?
BACKGROUND: It is well accepted that inhaled steroids help control childhood asthma. Questions remain, however, concerning long-term use of these medicines. This study evaluated the long-term outcomes of inhaled budesonide and nedocromil.
POPULATION STUDIED: A total of 1041 children were enrolled at 8 centers. The mean age was 8.9 years. Minorities represented 30% to 35% of the participants. The children had mild to moderate asthma, defined as the presence of symptoms, use of an inhaled bronchodilator two or more times weekly, or daily medication for asthma. At baseline, the patients had been hospitalized 30 times per 100 person-years, averaged 10 episode-free days per month, and had pre-bronchodilator forced expiratory volume in 1 second (FEV1) values of 93% of predicted. In general, this population seems similar to that of a typical family practice, although more information would be helpful about family income, education, tobacco exposure, and the type of clinical centers involved.
STUDY DESIGN AND VALIDITY: The participants were randomized to receive 200 μg of inhaled budesonide twice daily (n=311), 8 mg of inhaled nedocromil sodium twice daily (n=312), or placebo (n=418), in a single-blind fashion. Inhaled albuterol, oral prednisone, or inhaled beclomethasone was added as needed. Follow-up visits occurred 2 and 4 months after randomization and then at 4-month intervals.
OUTCOMES MEASURED: The primary outcome was the change in FEV1 after a bronchodilator. Secondary outcomes included health services utilization such as hospitalization, symptom severity, airway responsiveness, physical growth, incidence of cataracts, and psychological development. Cost of treatment, side effects, and patient/parent satisfaction were not directly assessed.
RESULTS: The groups were similar at baseline. Children were followed for a mean of 4.3 years. Neither budesonide nor nedocromil improved lung function significantly more than placebo. Hospitalization rates decreased in all groups, but compared with placebo, patients receiving budesonide had significantly fewer hospitalizations (2.5 vs 4.4/100 person-years, or 1.9 hospitalizations prevented for 20 children treated for 5 years; P=.004), visits for urgent care (12 vs 22/100 person-years, or 10 urgent visits prevented for 20 patients treated for 5 years; P <.001), and courses of prednisone (70 vs 122 courses/100 person-years, or 52 courses prevented for 20 patients treated over 5 years; P <.001). Compared with placebo, the nedocromil group had no significant difference in hospitalization rates but did have fewer urgent care visits (16 vs 22/100 person-years, or 5 urgent visits prevented for 20 patients treated over 5 years; P <.02) and fewer prednisone courses (102 vs 122/100 person-years, or 20 courses prevented for 20 children treated for 5 years; P <.01). Children taking budesonide, but not nedocromil, recorded significantly fewer symptoms, less frequent use of albuterol, and more episode-free days than those receiving placebo. The increase in height was significantly less in the budesonide group (22.7 cm vs 23.8 cm; P=.005), although there was no significant difference in overall growth velocity, Tanner stage, or projected final height among the 3 groups at the end of the treatment period.
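The "per 20 children treated for 5 years" framing follows directly from the rate differences, because 20 children followed for 5 years contribute 100 person-years. A quick check against the reported rates:

```python
def events_prevented(rate_control, rate_treated, n_children=20, years=5):
    """Events prevented among n_children treated for `years`,
    given event rates per 100 person-years."""
    person_years = n_children * years  # 20 x 5 = 100 person-years
    return (rate_control - rate_treated) * person_years / 100

print(events_prevented(4.4, 2.5))  # 1.9 hospitalizations prevented (budesonide vs placebo)
print(events_prevented(22, 12))    # 10 urgent visits prevented
print(events_prevented(122, 70))   # 52 prednisone courses prevented
```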
This study provides good evidence that inhaled budesonide or nedocromil may be given to all children with mild to moderate asthma to improve long-term control with little fear of long-term adverse effects. Parents should be counseled that their child's growth may be slightly blunted, although the data offer some reassurance that final height will be normal. This study does not address the length of therapy, the management of patients with severe asthma, or the role of combination therapy.
Are there differences by gender in response to pharmacotherapy for depression?
BACKGROUND: Depression is common in primary care, but little is known about gender differences in response to medication. This randomized trial compared men and women in response to selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs).
POPULATION STUDIED: A total of 235 men and 400 women aged 21 to 65 years who met Diagnostic and Statistical Manual of Mental Disorders, third edition, revised, criteria for major depression with or without dysthymia were enrolled at 12 centers. The average age was 43 years for men and 40 years for women. Of these, 91% were white; 64% had recurrent depression; 54% were also dysthymic; and 57% had previously been treated with pharmacotherapy. Depressive symptoms were severe, averaging 25 on the Hamilton Depression (HAM-D) scale. The study population was probably similar to many patients seen by family physicians but with more severe depressive symptoms and more dysthymia. Caution should be exercised in extrapolating the results to patients with mild depression, patients of color, or anyone older than 65 years.
STUDY DESIGN AND VALIDITY: This was a randomized controlled trial (allocation assignment concealed). Subjects were given imipramine or sertraline starting at 50 mg per day. Doses were increased until a therapeutic response or intolerable side effects occurred, to an average of approximately 140 mg per day for sertraline and 200 mg per day for imipramine. Response to medication was measured by the HAM-D, Beck, and Clinical Global Severity and Improvement (CGI) scales. Analyses of baseline characteristics and end points were performed using the Mantel-Haenszel chi-square and analysis of variance, adjusted for site, baseline severity, and type of depression. Methodologic strengths include the randomized design, efforts to conceal allocation, and the use of both intention-to-treat and efficacy-only analyses. Minor weaknesses included the lack of information about the treatment groups at baseline, the absence of a true placebo, and the lack of control for potentially important confounding factors, such as concurrent psychotherapy, other treatments, and social support.
OUTCOMES MEASURED: Response was defined as a 50% decrease in HAM-D score, a HAM-D score of less than 15, a CGI severity score of 3 or less, or a CGI improvement score of 1 or 2. Compliance, side effects, and withdrawals were also tracked. Outcomes important in primary care that were not addressed include other clinical outcomes (suicide attempts, hospitalization), the use of other modalities (additional medication, electroconvulsive therapy, psychotherapy), functional status, patient satisfaction, and cost of care.
RESULTS: Men and women were somewhat different at baseline, although the authors did not report the characteristics of each treatment subgroup. Women were significantly more likely to have been treated for depression and to have a first-degree relative with depression, while men were more likely to be married and to have coexistent dysthymia. Follow-up was complete. Women responded better to sertraline than imipramine (57% improved vs 46%; P=.03; number needed to treat [NNT]=8), and men responded better to imipramine than to sertraline (62% vs 45%; P=.03; NNT=6). Women taking imipramine were more likely to withdraw from the study than those taking sertraline (26% vs 14%; P=.02; number needed to harm [NNH]=10), but women completing a course of imipramine still had a significantly poorer response than those taking sertraline. Men and women taking either drug showed no significant sex differences in the overall frequency of treatment-emergent adverse events; however, more than 90% of all patients reported adverse effects, and adverse effects were the most common reason for withdrawal from the study. For men, imipramine caused significantly more dry mouth, dizziness, and constipation than sertraline, and a similarly high rate of sexual dysfunction and somnolence. For women, sertraline caused significantly more insomnia and diarrhea and less constipation and dry mouth than imipramine.
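As a check on the reported NNTs, the number needed to treat is the reciprocal of the absolute difference in response rates. Using the rounded percentages above reproduces the men's figure exactly; the women's figure comes out at 9 rather than the published 8, presumably because the published percentages are themselves rounded:

```python
def nnt(p_better, p_worse):
    """Number needed to treat = 1 / absolute difference in response rates."""
    return 1 / (p_better - p_worse)

print(round(nnt(0.62, 0.45)))  # men, imipramine vs sertraline: 6, as reported
print(round(nnt(0.57, 0.46)))  # women, sertraline vs imipramine: 9 (published NNT=8)
```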
This study provides good evidence of gender differences in response to sertraline and imipramine for depression. Clinicians should consider using imipramine as first-line pharmacotherapy for men with depression. Like all tricyclics, imipramine carries a risk of cardiac toxicity in overdose, but clinicians should keep in mind that the risk of suicide is similar for all antidepressants. Sertraline remains a good first choice for women. Further research should address the generalizability of these results to other agents in each class, treatment in the elderly, and the role of cognitive behavioral therapy for each sex.
Does delayed pushing reduce difficult deliveries for nulliparous women with epidural analgesia?
BACKGROUND: Epidural analgesia, though effective, can prolong the second stage of labor and increase midpelvic delivery and maternal and neonatal morbidity. Studies indicate that delayed pushing may decrease the need for forceps delivery. The authors of this randomized trial assessed the outcomes of a delayed pushing strategy of labor management.
POPULATION STUDIED: A total of 1862 nulliparous women were enrolled at 12 sites in Canada, Switzerland, and the United States. Enrollment criteria included more than 37 weeks’ gestation, vertex singleton presentation, normal fetal heart status, and effective continuous epidural analgesia. The average age of participants was 28 years, and more than 94% were white or Asian; other risk factors were not described. The high reported episiotomy rate (41%) suggests that the settings encouraged obstetric intervention, and the lack of information about intrapartum routines and other obstetric risk factors makes assessment of comparability difficult.
STUDY DESIGN AND VALIDITY: At complete dilatation the women were randomized (allocation concealment uncertain) to a pushing or delayed pushing group. Pushing in the latter group was discouraged for 2 hours unless there was an irresistible urge to push, the fetal head was at the perineum, or there was a medical indication to hasten delivery. Oxytocin use was standardized. Analysis was by intention to treat with control for potential confounding by the Mantel-Haenszel method.
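The Mantel-Haenszel method pools stratum-specific 2x2 tables into a single adjusted estimate by weighting each stratum by its size. A minimal sketch of the pooled risk ratio follows; the strata and counts are invented for illustration and are not the trial's data:

```python
def mh_risk_ratio(strata):
    """Mantel-Haenszel pooled risk ratio.
    Each stratum: (events_exposed, n_exposed, events_unexposed, n_unexposed)."""
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# Hypothetical strata (eg, split by oxytocin use):
strata = [(40, 300, 55, 310),  # stratum 1: difficult deliveries / group size
          (35, 320, 42, 305)]  # stratum 2
print(round(mh_risk_ratio(strata), 2))  # pooled RR across the strata (~0.77 here)
```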
OUTCOMES MEASURED: The primary outcome was difficult delivery, defined as cesarean section in the second stage; midpelvic forceps or vacuum delivery; low-pelvic forceps delivery with rotation of the fetal head more than 45 degrees; or any operative vaginal delivery preceded by manual rotation of the head of more than 45 degrees. Secondary maternal outcomes included lacerations, blood loss, peripartum fever, and blood transfusions, as well as a postpartum survey of the mother's sense of control during her labor and delivery. Pediatric outcomes included cord pH, Apgar scores, neonatal intensive care unit admission, and a neonatal morbidity index. Patient satisfaction and cost of care were not addressed.
RESULTS: Difficult deliveries were reduced in the delayed pushing group (relative risk=0.79; 95% confidence interval, 0.66-0.95; number needed to treat [NNT]=21). The most pronounced difference was in reduced midpelvic procedures; stratification by oxytocin use or other variables yielded no difference in difficult deliveries. The protective effect of delayed pushing on difficult delivery was greatest for women who had a fetus in a transverse or posterior position (NNT=8) or with a fetal station above +2 (NNT=17). Mothers in the delayed pushing group had a higher rate of intrapartum fever but no significant differences in antibiotic use, postpartum fevers, or any other outcome. The groups were similar in the mother’s reported sense of control. Infants in the delayed pushing group had a higher rate of cord pH <7.10, but there was no significant difference in the neonatal morbidity index scores or in specific adverse outcomes.
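The reported NNT of 21 is consistent with the relative risk if the control group's rate of difficult delivery was roughly 23%, since NNT = 1/(p0 * (1 - RR)). The baseline rate below is back-calculated for illustration, not taken from the paper:

```python
def nnt_from_rr(p_control, rr):
    """NNT from the control event rate and the relative risk in the treated group."""
    arr = p_control * (1 - rr)  # absolute risk reduction
    return 1 / arr

print(round(nnt_from_rr(0.23, 0.79)))  # 21, matching the reported NNT
```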
Delayed pushing for up to 2 hours after full cervical dilatation in nulliparous women receiving epidural analgesia is safe and may lower the risk of difficult delivery. This may be especially true in settings with a high rate of routine obstetric intervention. Future studies should include more power for specific adverse pediatric outcomes and should address the generalizability of delayed pushing to patients of color and to a less interventional obstetric milieu. In the meantime, clinicians should allow patients to delay pushing if close fetal monitoring is in place.
Does coffee protect against the development of Parkinson disease (PD)?
BACKGROUND: The suffering of patients with PD is substantial, and understanding risk factors for its development could make prevention or amelioration of its course possible. This prospective cohort study evaluated the effect of coffee and caffeine on the development of PD.
POPULATION STUDIED: A total of 8004 Japanese American men aged 45 to 68 years were enrolled in the Honolulu Heart Program between 1965 and 1968. The subjects were identified through World War II Selective Service files; the median age was 53 years at enrollment. Women, other races, and younger men were not represented in this study, so family physicians should be cautious about generalizing the results to those patients.
STUDY DESIGN AND VALIDITY: This prospective cohort study was initiated more than 30 years ago. Caffeine and coffee intake was assessed at enrollment by 24-hour dietary recall and 6 years later by food frequency questionnaire. Incident cases of PD were identified by review of hospitalization records, death certificates, and local neurologists' records using well-established published case definitions and, after 1991, by direct examination of the entire cohort by a study neurologist. Proportional hazards modeling was used to adjust for age and cigarette smoking and to assess potential confounding by alcohol intake, dietary fat, physical activity, cholesterol, hypertension, and diabetes. This study was well done. Its strengths include the prospective design, rigorous assessment of dietary intake, excellent follow-up, careful case ascertainment, and assessment of a variety of confounding variables. Minor weaknesses include the lack of reviewers blinded to exposure and the possibility of other confounding variables, such as medical treatments or dietary trace elements.
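Proportional hazards modeling expresses each subject's hazard as a baseline hazard multiplied by the exponential of a weighted sum of covariates, so adjusted relative risks are reported on the exp(coefficient) scale. The coefficient below is back-calculated from the published estimate (ln 5.1 is about 1.63) purely to show the relationship:

```python
from math import exp

# Cox model: h(t | x) = h0(t) * exp(b1*nondrinker + b2*age + b3*smoking + ...)
# The baseline hazard h0(t) is left unspecified; exp(b) is the hazard ratio
# for a one-unit change in the covariate, adjusted for the others.
b_nondrinker = 1.63                 # hypothetical log-hazard coefficient
print(round(exp(b_nondrinker), 1))  # 5.1, the scale on which the adjusted RR is reported
```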
OUTCOMES MEASURED: The primary outcome was the relative risk of PD for different levels of coffee and caffeine consumption. The authors did not measure the clinical outcomes of PD, such as functional status or quality of life, which might be valuable for primary care physicians who are considering advising substantial lifestyle change to prevent PD.
RESULTS: A total of 8004 subjects (99.9% of the original cohort) were followed for an average of 27 years; 102 cases of PD were identified. The adjusted relative risk of developing PD was 5.1 (95% confidence interval, 1.8-14.4; number needed to treat=125) for noncoffee drinkers compared with those who drank 28 ounces or more of coffee per day. A dose-response relationship was observed; higher amounts of daily coffee intake were associated with lower relative risks of PD. This relationship was also seen with caffeine and caffeine derived from noncoffee sources. Other nutrients found in coffee (ie, niacin) or used in coffee (ie, milk or sugar) were analyzed and found to have no impact on the relationship observed between coffee and PD.
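To put the case count in context, the crude incidence implied by the reported follow-up is easy to compute; this back-of-the-envelope figure is ours, not the authors':

```python
cases = 102
subjects = 8004
mean_follow_up_years = 27

person_years = subjects * mean_follow_up_years            # ~216,000 person-years
incidence = cases / person_years * 100_000
print(f"{incidence:.0f} cases per 100,000 person-years")  # roughly 47
```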
This well-designed prospective study provides good evidence that an inverse relationship exists between coffee intake and the development of PD and suggests that caffeine may be an important mediator of this effect. However, a single study does not prove that coffee is protective. The mechanism by which coffee may protect against PD is not understood, and confounding variables not measured in this study could be responsible for the results obtained. Consistent findings with multiple well-designed studies in a variety of populations are needed before a causal relationship can be established.
Should family physicians advise patients to drink coffee? The prevalence of PD in family practice is significant but still relatively small. The reduction in the rate of PD for patients drinking more usual amounts of coffee each day (4-28 oz/day) is modest. There may also be health risks of drinking large amounts of coffee per day, although few have been identified in the literature. Clinicians should not counsel patients to drink coffee to protect against PD until more data become available.
How frequently should patients with type 2 diabetes mellitus be screened for retinopathy?
BACKGROUND: Strong evidence supports the efficacy of laser treatment for diabetic retinopathy; however, compliance with screening has been disappointing, and the optimum frequency of screening remains controversial. This cost-utility analysis compares cost, duration of blindness, and quality-adjusted life years (QALYs) for several intervals of screening.
POPULATION STUDIED: A hypothetical cohort of people 40 years and older with diabetes was modeled on the Third National Health and Nutrition Examination Survey. Estimates for retinopathy development and progression were taken from the United Kingdom Prospective Diabetes Study. Thus, the study population is probably similar to that seen by the typical family physician, though caution should be exercised in generalizing results to practices with higher proportions of Hispanics, African Americans, or other ethnic populations for whom the prevalence and rate of progression of retinopathy may be different from the reference populations.
STUDY DESIGN AND VALIDITY: This cost-utility study used a Markov model to compare the impact of annual versus less frequent screening on the progression of diabetic retinopathy and macular edema. The perspective was that of a payer; costs included an ophthalmology visit, laser photocoagulation, and fluorescein angiography. Age and hemoglobin (Hb) A1C levels were used to define patient groups at high, medium, and low risk. Predictions of life expectancy were adjusted for time spent blind on the basis of a utility of 0.69. A 3% discount was applied to all costs and years of life. The methodology of this study was strong. The authors defined the alternatives explicitly and calculated an incremental analysis of cost. Their model addressed the accuracy of diagnosis, and they performed thorough univariate and multivariate sensitivity analyses. Weaknesses included the lack of evidence for the effectiveness of population screening for diabetic retinopathy, the lack of attention to rates of poor follow-up, and the failure to recognize the impact of visual impairment short of blindness.
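A Markov cohort model of this kind advances a distribution over health states through annual transition probabilities, accumulating discounted utilities (and, in the full model, costs). The sketch below is far simpler than the published model, and its states and transition probabilities are invented; only the 0.69 utility for blindness and the 3% discount come from the description above:

```python
# States: no retinopathy (NR), retinopathy (R), blind (B).
# Hypothetical annual transition probabilities, not the paper's inputs.
P = {"NR": {"NR": 0.95, "R": 0.05, "B": 0.00},
     "R":  {"NR": 0.00, "R": 0.92, "B": 0.08},
     "B":  {"NR": 0.00, "R": 0.00, "B": 1.00}}
utility = {"NR": 1.00, "R": 0.95, "B": 0.69}  # 0.69 is the study's utility for blindness

def run_cohort(years=10, discount=0.03):
    """Discounted QALYs per person for a cohort that starts disease-free."""
    dist = {"NR": 1.0, "R": 0.0, "B": 0.0}
    qalys = 0.0
    for t in range(years):
        qalys += sum(dist[s] * utility[s] for s in dist) / (1 + discount) ** t
        dist = {s2: sum(dist[s1] * P[s1][s2] for s1 in P) for s2 in P["NR"]}
    return qalys

print(round(run_cohort(), 2))  # discounted QALYs over 10 years under these assumptions
```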
OUTCOMES MEASURED: The primary outcomes measured were costs, days of blindness, and QALYs. Costs directly due to blindness, the impact of labeling, and patient satisfaction were not addressed.
RESULTS: The effectiveness of screening is dramatically greater for patients at a higher risk of retinopathy; increasing screening frequency modestly decreases time spent blind. For example, for a low-risk patient (eg, 75 years old with a Hb A1C of 7%) screening every third year prevents 1 day of blindness during the patient’s lifetime, and annual screening prevents an additional 2 days of blindness (compared with no screening). For a high-risk patient (eg, 45 years old with a Hb A1C of 11%), screening every third year prevents 188 days of blindness, and annual screening saves an additional 21 days of blindness. Within any given risk group, the cost of screening increases dramatically with increasing frequency. For example, even for the high-risk patient for whom screening is most cost-effective, the marginal cost-effectiveness of increasing screening from every other year to every year was $181,850 per QALY gained. Extrapolating the results to the whole population and using $50,000 per QALY as a criterion of cost-effectiveness, screening every second year is cost-effective ($49,760/QALY), while annual screening is not ($107,510/QALY). Sensitivity analysis did not change the results substantially, except that lowering the utility score for blindness increased the cost-effectiveness of all screening.
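The dollars-per-QALY figures above are marginal (incremental) cost-effectiveness ratios: the extra cost of the more intensive strategy divided by the extra QALYs it buys. The per-patient inputs below are placeholders chosen only to make the arithmetic visible:

```python
def icer(cost_more, cost_less, qaly_more, qaly_less):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_more - cost_less) / (qaly_more - qaly_less)

# Hypothetical lifetime figures for annual vs biennial screening:
print(f"${icer(2400, 1800, 14.203, 14.197):,.0f} per QALY")  # $100,000 per QALY
```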
This study provides good evidence that regular screening for retinopathy in high-risk patients with diabetes is cost-effective, but it suggests that insurance companies and the developers of the Health Plan Employer Data and Information Set should reconsider their insistence on annual screening for all patients as a measure of the quality of care. Clinicians should be cautious about extending this analysis to individual patients, but these findings underscore the clinical importance of the risk of retinopathy and give clinicians permission to tailor recommendations to individuals on the basis of their risk. The findings are substantially limited by the lack of evidence that population screening for diabetic retinopathy is effective in usual practice.
BACKGROUND: Strong evidence supports the efficacy of laser treatment for diabetic retinopathy; however, compliance with screening has been disappointing, and the optimum frequency of screening remains controversial. This cost-utility analysis compares cost, duration of blindness, and quality-adjusted life years (QALYs) for several intervals of screening.
POPULATION STUDIED: A hypothetical cohort of people 40 years and older with diabetes was modeled on the Third National Health and Nutrition Examination Survey. Estimates for retinopathy development and progression were taken from the United Kingdom Prospective Diabetes Study. Thus, the study population is probably similar to that seen by the typical family physician, though caution should be exercised in generalizing results to practices with higher proportions of Hispanics, African Americans, or other ethnic populations for whom the prevalence and rate of progression of retinopathy may be different from the reference populations.
STUDY DESIGN AND VALIDITY: This cost-utility study used a Markhov model to compare the impact of annual versus less frequent screening on the progression of diabetic retinopathy and macular edema. The perspective was that of a payer; costs included an ophthalmology visit, laser photocoagulation, and fluorescein angiogram. Age and hemoglobin (Hb) A1C levels were used to define patient groups at high, medium, and low risk. Predictions for life expectancy were adjusted for time spent blind on the basis of a utility of 0.69. A 3% discount was applied to all costs and years of life. The methodology of this study was strong. The authors defined alternatives explicitly and calculated incremental analysis of cost. Their model addressed accuracy of diagnosis, and they performed thorough univariate and multivariate sensitivity analyses. Weaknesses included the lacks of evidence for effectiveness of population screening for diabetic retinopathy, attention to the rate of poor follow-up, and recognition of the impact of visual impairment without blindness.
OUTCOMES MEASURED: The primary outcomes measured were costs, days of blindness, and QALYs. Costs directly due to blindness, the impact of labeling, and patient satisfaction were not addressed.
RESULTS: The effectiveness of screening is dramatically greater for patients at a higher risk of retinopathy; increasing screening frequency modestly decreases time spent blind. For example, for a low-risk patient (eg, 75 years old with a Hb A1C of 7%) screening every third year prevents 1 day of blindness during the patient’s lifetime, and annual screening prevents an additional 2 days of blindness (compared with no screening). For a high-risk patient (eg, 45 years old with a Hb A1C of 11%), screening every third year prevents 188 days of blindness, and annual screening saves an additional 21 days of blindness. Within any given risk group, the cost of screening increases dramatically with increasing frequency. For example, even for the high-risk patient for whom screening is most cost-effective, the marginal cost-effectiveness of increasing screening from every other year to every year was $181,850 per QALY gained. Extrapolating the results to the whole population and using $50,000 per QALY as a criterion of cost-effectiveness, screening every second year is cost-effective ($49,760/QALY), while annual screening is not ($107,510/QALY). Sensitivity analysis did not change the results substantially, except that lowering the utility score for blindness increased the cost-effectiveness of all screening.
RECOMMENDATIONS FOR CLINICAL PRACTICE: This study provides good evidence that regular screening for retinopathy in high-risk patients with diabetes is cost-effective, but it suggests that insurance companies and the developers of the Health Plan Employer Data and Information Set should reconsider their insistence on annual screening for all patients as a measure of quality of care. Clinicians should be cautious about extending this analysis to individual patients, but the findings underscore the clinical importance of the risk of retinopathy and support tailoring screening recommendations to individual patients on the basis of that risk. The findings are substantially limited by the lack of evidence that population screening for diabetic retinopathy is effective in usual practice.