Confronting UTI Antimicrobial Resistance: What’s Up Our Sleeve?
One of the greatest challenges of modern medicine is antimicrobial resistance. Most clinicians exercise discretion when writing antibiotic prescriptions for patients in front of us. However, many of our practices have disease-specific intervention protocols that decrease clinician burden and facilitate patient care, but that put antibiotic prescribing on autopilot. One common protocolized intervention addresses urinary tract infections. Ciprofloxacin has been the UTI workhorse for many of us because it is effective and well-tolerated. However, increasing resistance and calls to put ciprofloxacin “on reserve” have prompted changes in our UTI protocols.
Last week, the Journal of the American Medical Association published a comparative evaluation of cefpodoxime and ciprofloxacin to see if we could expand our UTI treatment armamentarium (JAMA. 2012;307:583-9). Cefpodoxime has broad-spectrum antimicrobial activity and would provide a useful alternative to fluoroquinolones for cystitis treatment if demonstrated to be similar in efficacy. In this study, 300 women with acute cystitis were randomized to cefpodoxime or ciprofloxacin given for 3 days. The primary outcome was clinical cure at 30 days. E. coli caused most cases of cystitis (75%). Overall, 4% of isolates (4% of E. coli and 8% of non–E. coli) were nonsusceptible to ciprofloxacin, and 8% (4% of E. coli and 36% of non–E. coli) were nonsusceptible to cefpodoxime. The overall cure rate at 30 days, using an intent-to-treat approach in which patients lost to follow-up were considered to have had a clinical cure, was 93% (139/150) for ciprofloxacin compared with 82% (123/150) for cefpodoxime (difference of 11%; 95% CI, 3%-18%). The investigators had predetermined that a cure rate more than 10% lower for cefpodoxime would constitute clinical inferiority, so they concluded that cefpodoxime did not meet criteria for noninferiority in achieving clinical cure. The authors speculated that cephalosporins probably fail because fewer women achieved eradication of vaginal E. coli colonization with cefpodoxime than with ciprofloxacin.
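For the curious, the noninferiority arithmetic can be checked by hand. Below is a minimal sketch (not the authors' analysis code) that computes the risk difference in cure rates and a simple Wald 95% confidence interval from the published counts:

```python
from math import sqrt

def risk_difference_ci(cured_a, n_a, cured_b, n_b, z=1.96):
    """Risk difference (A minus B) with a Wald 95% confidence interval."""
    p_a, p_b = cured_a / n_a, cured_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# Ciprofloxacin cured 139/150; cefpodoxime cured 123/150.
diff, lo, hi = risk_difference_ci(139, 150, 123, 150)
# diff is about 0.11 with a CI of roughly 0.03 to 0.18, matching the
# published 11% (3%-18%). Both the point estimate and the upper bound
# exceed the prespecified 10% margin, so noninferiority fails.
```

Note that the decision hinges on the margin: had the entire confidence interval fallen below 10%, cefpodoxime would have been declared noninferior.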
The Infectious Diseases Society of America has suggested that fluoroquinolones be reserved for important uses other than acute cystitis, and thus be considered alternative, rather than first-line, agents for acute cystitis. Nitrofurantoin could be used, but insurance companies currently are mailing “information letters” about the dangers of using this drug in older patients because of concerns about drug accumulation in people with impaired renal function. Fosfomycin could be used, but we have had some trouble finding it in our local pharmacies.
The take-home message is that current studies do not support the use of cefpodoxime as a first-line fluoroquinolone-sparing antimicrobial for acute uncomplicated cystitis. More research needs to be conducted on the efficacy of narrow-spectrum cephalosporins to increase our options. Most importantly, we need to be continually reviewing our local resistance patterns, guideline panel recommendations, and the emerging evidence to intelligently update our disease-specific treatment protocols.
Dr. Ebbert reported having no relevant conflicts of interest.
C. difficile Colitis: A PPI Wake-Up Call
Proton pump inhibitors (PPIs) are among the most widely prescribed drugs on the planet. PPIs have alleviated the long since forgotten clinical challenge of antacid-resistant and H2-receptor antagonist–resistant GERD or dyspepsia. Few of us may continue to “ramp up” GERD or dyspepsia treatment, settling instead on the PPI quick fix so we can move on to other clinical issues. Based upon claims database data, patients who take prescription PPIs stay on therapy for an average of about 6 months. That’s all? Many clinicians have patients who have been on these medications for years. With several PPIs now available in generic form, it has become easier for us to maintain these medications. But what are the harms of this approach?
Last week, the Food and Drug Administration released a report that drew conclusions about data that have been accumulating for some time: PPIs are associated with an increased risk for Clostridium difficile-associated diarrhea.
The FDA reviewed data from the Adverse Event Reporting System (AERS) and the medical literature. The lion’s share of the cases involved older patients with chronic medical conditions or patients who were taking antibiotics. More compelling, perhaps, were the 23 studies demonstrating a higher risk with PPI use compared with no PPI exposure. Notably, this information adds to the labeling information for prescription PPIs, which also lists an increased risk for osteoporosis-related bone fracture and for hypomagnesemia.
So what does this mean for our practice? I think the onus is on us to assess patient need for PPIs in order to prevent unintended consequences from their long-term use in patients who may not need them. Clinicians should individualize the “need” assessment. This could be done on a trial basis with PPI discontinuation or use of a “step-down” medication such as an H2RA or antacid.
Offer Treatment to All Smokers Regardless of Desire to Quit
No preventable cause of death and disability kills more of our patients than cigarette smoking. Smoking will kill one billion people worldwide this century. Despite the high stakes of this addiction, we as practitioners frequently feel de-energized by the prospect of reviewing treatment options for smokers interested in quitting, knowing that relapse is the most likely outcome. Virtually no time is spent on smokers who do not express a desire to quit. When we hear a negative response to the question, “Do you want to quit?” we quickly move on to the next task at hand.
In a recently published study, investigators conducted a systematic review to examine the effect of single, minimal (less than 10 minutes long) interventions delivered by physicians to patients who were not selected for their motivation to quit (Addiction 2011 Dec 16. PubMed PMID: 22175545). The findings suggest that offering advice to stop smoking, together with assistance, increases the number of quit attempts and abstinence rates. Offering support for quitting motivated about 50% more people to try to quit, regardless of their willingness to quit at baseline. Offering nicotine replacement therapy increased long-term abstinence rates in this population. The authors conclude that giving advice to stop smoking would lead one additional person to stop smoking for every 21 people advised; offering assistance would do so for every 7 people offered assistance. Once again, this is regardless of motivation to quit.
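The “one in 21” and “one in 7” figures are numbers needed to treat (NNT), the reciprocal of the absolute increase in quitting. A minimal illustrative sketch (the implied risk differences below are back-calculated from the NNTs, not values reported in the review):

```python
def nnt(absolute_risk_reduction):
    # Number needed to treat: the reciprocal of the absolute
    # benefit, rounded to the nearest whole patient.
    return round(1 / absolute_risk_reduction)

# The review's NNTs imply absolute increases in quitting of
# roughly 1/21 (~4.8 percentage points) for brief advice and
# 1/7 (~14 percentage points) for advice plus assistance.
print(nnt(1 / 21), nnt(1 / 7))  # prints: 21 7
```

Framed this way, a few minutes spent with every seventh smoker offered assistance yields one additional quitter, which is a remarkable return for a brief office intervention.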
So what does this mean for our practice? This evidence suggests that we can change the conversation, rather than starting out with “Are you interested in quitting?” Regardless of a smoker’s interest in quitting, we should not ask but tell them they need to quit to improve their health, and offer them nicotine replacement therapy and behavioral counseling (e.g., referral to 1-800-QUIT-NOW). Instead of feeling hopeless in the face of a tough addiction, we should feel empowered by the finding that our advice and assistance will increase quit attempts and abstinence rates, regardless of whether patients initially want to quit.
Dr. Ebbert is professor of medicine at the Mayo Clinic in Rochester, Minn. He has received grants from the National Institutes of Health and Pfizer to conduct studies of interventions for tobacco use and has served as a consultant to GlaxoSmithKline, manufacturer of nicotine replacement products.
Bone Mineral Density: Methinks Thou Dost Test Too Much
Screening for osteoporosis by assessing bone mineral density (BMD) has become an integral part of the care we provide to our older patients. Medicare pays for dual-energy x-ray absorptiometry (DXA) screening every 2 years, or more frequently if the procedure is determined to be medically necessary. Based on my discussions with colleagues, however, striking heterogeneity clearly exists in how frequently we obtain BMD measurements. How frequently do we really need to screen?
In a recent report in the New England Journal of Medicine, the Study of Osteoporotic Fractures Research Group provides clinicians with refreshingly helpful data to inform our practices (N Engl J Med. 2012;366:225-33). In a cohort of 4,957 women, two transitions were evaluated: 1) from normal BMD to osteoporosis, and 2) from osteopenia to osteoporosis. The BMD “testing interval” was defined as the estimated time for 10% of women to make the transition to osteoporosis before having a hip or clinical vertebral fracture, with adjustments for estrogen use and clinical factors. Participants were stratified into normal BMD (T score, −1.00 or higher), mild osteopenia (T score −1.01 to −1.49), moderate osteopenia (T score −1.50 to −1.99), and advanced osteopenia (T score −2.00 to −2.49). The estimated BMD testing interval was 16.8 years (95% confidence interval [CI], 11.5 to 24.6) for women with normal BMD, 17.3 years (95% CI, 13.9 to 21.5) for women with mild osteopenia, 4.7 years (95% CI, 4.2 to 5.2) for women with moderate osteopenia, and 1.1 years (95% CI, 1.0 to 1.3) for women with advanced osteopenia. Within a given T-score stratum, the transition from osteopenia to osteoporosis took longer in younger women and in those with higher BMI or estrogen use. Table 3 provides an excellent reference tool for tailoring screening intervals for our patients. When the testing interval was redefined as the time for 20% of women to make the transition from osteopenia to osteoporosis, the time estimates were 80% longer.
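For quick reference, the stratum-specific point estimates can be encoded as a simple lookup keyed to the baseline T score. This is an illustrative sketch of the published point estimates only, not a clinical decision tool, and it omits the study's adjustments for age, BMI, and estrogen use:

```python
def bmd_retest_interval_years(t_score):
    """Estimated years until 10% of women in the baseline T-score
    stratum transition to osteoporosis (study point estimates)."""
    if t_score >= -1.00:
        return 16.8   # normal BMD
    elif t_score >= -1.49:
        return 17.3   # mild osteopenia
    elif t_score >= -1.99:
        return 4.7    # moderate osteopenia
    elif t_score >= -2.49:
        return 1.1    # advanced osteopenia
    # Below -2.49 is the osteoporosis range: the question becomes
    # treatment, not rescreening.
    raise ValueError("T score in osteoporosis range")
```

As the study itself cautions, younger age, higher BMI, and estrogen use lengthened the time to transition within each stratum, so an interval like this is a starting point to tailor, not a fixed schedule.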
These findings provide rational, evidence-based guidance upon which we can base our testing intervals. This will help us do our part to avoid breaking the bank in the interest of avoiding breaking a bone.
Dr. Ebbert reported having no relevant conflicts of interest.
Preventing Gout With Anti-Hypertensives: Selecting the Right Agent
Most of us are familiar with the relationship between diuretics and increased uric acid levels, which can precipitate attacks of gout. Perhaps fewer recall that beta blockers also increase uric acid levels. In contrast, calcium channel blockers (CCBs) and losartan have been observed to decrease serum uric acid levels. This raises the question: Do CCBs and losartan reduce the incidence of gout attacks? Unfortunately, no randomized trials have been conducted to evaluate this.
However, a very large case-control study was published recently (BMJ. 2012;344:d8190) exploring the benefit of these drug classes for preventing gout attacks. Using an impressive computerized medical record database of 4 million patients entered by U.K. general practitioners, investigators conducted a case-control study of adults, aged 20 to 89 years, to determine the relationship between antihypertensives and gout. Incident cases of gout (n = 24,768) were ascertained in individuals having a gout attack after an initial diagnosis and were matched with 50,000 controls. The antihypertensives evaluated included diuretics, beta blockers, CCBs, ACE inhibitors, losartan, and non-losartan angiotensin receptor blockers (ARBs). Results were adjusted for covariates. Hypertension itself was associated with a higher relative risk for incident gout (RR, 1.75; 95% CI, 1.69-1.82). CCBs were associated with a lower risk of incident gout (RR, 0.87; 95% CI, 0.82-0.93), and increasing duration and dose of medication further reduced the risk. Losartan also was associated with a reduced risk of developing gout (RR, 0.81; 95% CI, 0.70-0.94). In contrast, current use of diuretics, beta blockers, ACE inhibitors, or non-losartan ARBs among those with hypertension was associated with an increased risk of incident gout. Findings were similar among patients without hypertension and among those on anti-gout medications.
Since the majority of patients with gout have hypertension, most of these patients will eventually be taking an antihypertensive. Data from this study suggest that the choice of antihypertensive agent is an important consideration for avoiding future gout attacks. Just as we might select an ACE inhibitor or ARB for the treatment of hypertension in patients with diabetes for renoprotection, we should consider selecting losartan or a CCB in patients with gout to prevent future attacks.
Dr. Ebbert reported having no relevant conflicts of interest.
Most of us are familiar with the relationship between diuretics and increased uric acid levels potentially precipitating attacks of gout. Perhaps fewer recall that beta blockers also increase uric acid levels. In contrast, calcium channel blockers (CCBs) and losartan have been observed to decrease serum uric acid levels. This begs the question: Do CCBs and losartan reduce the incidence of gout attacks? Unfortunately, no randomized trials have been conducted to evaluate this.
However, a very large case-control study was published recently (BMJ. 2012;344:d8190) exploring whether these drug classes prevent gout attacks. Using an impressive computerized medical record database of 4 million patients entered by U.K. general practitioners, investigators conducted a case-control study of adults aged 20 to 89 years to determine the relationship between antihypertensives and gout. Incident cases of gout (n = 24,768) were ascertained in individuals having a gout attack after an initial diagnosis and were matched with 50,000 controls. The antihypertensives evaluated included diuretics, beta blockers, CCBs, ACE inhibitors, losartan, and non-losartan angiotensin receptor blockers (ARBs). Results were adjusted for covariates. Hypertension itself was associated with a higher relative risk of incident gout (RR 1.75; 95% CI: 1.69-1.82). CCBs were associated with a lower risk of incident gout (RR 0.87; 95% CI: 0.82-0.93), with longer duration and higher dose associated with further risk reduction. Losartan also was associated with a reduced risk of developing gout (RR 0.81; 95% CI: 0.70-0.94). In contrast, among those with hypertension, current use of diuretics, beta blockers, ACE inhibitors, and non-losartan ARBs was associated with an increased risk of incident gout. Findings were similar among patients without hypertension and among those on anti-gout medications.
Since the majority of patients with gout have hypertension, most of these patients will eventually be taking an antihypertensive. Data from this study suggest that the choice of antihypertensive is an important consideration for avoiding future gout attacks. Just as we might select an ACE inhibitor or ARB for renoprotection when treating hypertension in patients with diabetes, we should consider selecting losartan or a CCB in patients with gout to help prevent future attacks.
Dr. Ebbert reported having no relevant conflicts of interest.
Aspirin for Primary Prevention: What Should We Tell Patients?
There’s little controversy surrounding the use of aspirin for the prevention of cardiovascular events in patients with established disease. By contrast, there’s far more smoke than light regarding aspirin for primary prevention. We all commonly face the proverbial “handle-on-the-doorknob” question, “Should I be taking aspirin?” from patients without established cardiovascular disease. Such interactions tax us to recall our most recently attended CME meeting and try to remember the general mood of clinicians on this issue.
A recent large meta-analysis (Arch Intern Med. 2012 Jan 9 [doi:10.1001/archinternmed.2011.626]) will inform these conversations in the year ahead. The investigators included studies with at least 1 year of follow-up and at least 1,000 participants; nine trials involving a total of more than 100,000 patients met these criteria. Aspirin was associated with a reduction in cardiovascular disease (CVD) events, attributable primarily to a reduction in nonfatal MI. No effect of aspirin was observed on fatal MI, stroke, CVD death, or cancer mortality. A 70% increase in bleeding events was observed, along with a 30% increase in the risk of clinically significant bleeding events, defined as fatal bleeding from any site, cerebrovascular or retinal bleeding, bleeding from a hollow viscus, bleeding requiring hospitalization and/or transfusion, or study-defined major bleeding regardless of source. The number needed to treat to prevent one nonfatal MI was 162, compared with a number needed to harm of 73 for a nontrivial bleed.
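As a back-of-the-envelope check on that trade-off, the reported NNT and NNH can be converted into bleeds caused per nonfatal MI prevented. This is a hypothetical illustration only; the cohort size and variable names are mine, not the study's:

```python
# NNT = 162: treat ~162 patients to prevent one nonfatal MI.
# NNH = 73: treat ~73 patients to cause one nontrivial bleed.
nnt_nonfatal_mi = 162
nnh_nontrivial_bleed = 73

# In a hypothetical cohort of 10,000 aspirin-treated patients:
cohort = 10_000
mis_prevented = cohort / nnt_nonfatal_mi       # about 62 MIs prevented
bleeds_caused = cohort / nnh_nontrivial_bleed  # about 137 bleeds caused

# Bleeds per MI prevented = NNT / NNH
ratio = nnt_nonfatal_mi / nnh_nontrivial_bleed
print(round(ratio, 1))  # ≈ 2.2
```

The ratio of roughly 2 bleeds per MI prevented is what drives the conclusion that follows.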
The data suggest that for every nonfatal MI we prevent by recommending aspirin, we will cause roughly two clinically significant bleeding events. Perhaps the authors are correct in suggesting that aspirin may add little extra value to CVD risk reduction efforts that aggressively target lipid levels, blood pressure, and tobacco use. But we can’t ignore the aspirin question, because patients want to know what to do and, commonly, why. In patients at low risk for CVD events, aspirin prophylaxis may simply make them bleed. But let us not throw the baby out with the bathwater: among higher-risk patients, the data are strong that aspirin may prevent a heart attack, and some patients may place high value on this prevention. For those in the middle, perhaps creative aspirin dosing, such as every other day, could be explored.
Vitamin D Supplementation: Good for Everybody?
Vitamin supplementation for health benefits is widespread among patients in our clinical practices. Data emerging from randomized trials have indicated that vitamin supplementation may be ineffective in some cases (e.g., vitamin E for stroke prevention) and harmful in others (e.g., beta-carotene and vitamin A for lung cancer prevention in smokers). Vitamin D is the most recent darling of vitamin fever, and the evidence has reliably pointed to a positive effect on health. The salutary effects of vitamin D on cardiovascular disease are likely related to its ability to decrease inflammation. But should all patients be taking vitamin D?
Evidence published recently in the American Journal of Cardiology (Am J Cardiol. 2011 Oct 12. Epub ahead of print) suggests that vitamin D may not turn out to be a panacea for everybody. In this study, investigators evaluated data from the National Health and Nutrition Examination Survey (NHANES), an ongoing sample assessing the nutritional status of U.S.-based adults. The NHANES collects surveys and serum samples, and the current study evaluated the relationship between vitamin D [25-hydroxyvitamin D = 25(OH)D] and C-reactive protein (an inflammatory marker) blood concentrations from 15,167 individuals. The median serum concentration of vitamin D was 21 ng/mL. Below this median serum level, an inverse relationship was observed between vitamin D and CRP levels. Above the median serum vitamin D level, a direct relationship was observed between vitamin D and CRP levels. The authors concluded that vitamin D levels above the population median might be pro-inflammatory.
More research needs to be conducted to quantify the risks incurred by supplementing vitamin D in patients with normal concentrations of vitamin D. At the very least, this study may warrant discussions with our patients about taking what we would consider high doses of vitamin D. Obtaining serum concentrations of vitamin D is a common clinical practice and might be a good place to start these discussions. Among patients who, for example, have osteoporosis and cardiovascular disease, we may need to make tradeoffs between the risks and benefits of vitamin D supplementation. For now, we should balance these risks by encouraging our patients to practice “everything in moderation.”
Low HDL: Should We Be Chasing it?
Editor’s Note
Welcome to “What Matters,” a blog about the clinical research that is most likely to affect your practice, patient outcomes, and your bottom line. We know you have too much to read; there are simply too many studies and often too much buzz generated about them to really make sense of it all in a practical, systematic way. Dr. Jon O. Ebbert, professor of medicine at the Mayo Clinic in Rochester, Minn., offers an authoritative view on recent key clinical developments and what they may mean for you. Follow him every week to stay informed and on top of your world.
Elevated LDL cholesterol is an independent risk factor for cardiovascular disease (CVD) and the evidence supporting treatment to lower LDL is clear. Low HDL also is an independent risk factor for CVD. However, guidance is mixed and practice heterogeneous on how clinicians should address low HDL when LDL goals are achieved with statin therapy.
The AIM-HIGH trial recently published evidence on the effect of extended-release niacin among patients with established CVD and low baseline HDL (less than 40 mg/dL) (N. Engl. J. Med. 2011;365:2255-67). Subjects in the experimental group received niacin 1,500-2,000 mg per day plus simvastatin, while subjects in the placebo group received a matching subtherapeutic control (50 mg immediate-release niacin) plus simvastatin to maintain blinding. Simvastatin doses were adjusted to achieve a target LDL of 40-80 mg/dL, and subjects could receive ezetimibe as needed to reach that target. In all, 3,414 subjects were randomized. At 2 years, the mean HDL level had increased by 25% to 42 mg/dL in the niacin group and by about 10% to 38 mg/dL in the placebo group. Triglycerides also improved more in the niacin group. Study drug was discontinued in 25% of patients in the niacin group and 20% in the placebo group. No differences were observed in the first occurrence of the composite of death from CVD, nonfatal myocardial infarction, ischemic stroke, hospitalization for acute coronary syndrome, or symptom-driven coronary or cerebral revascularization.
Many clinicians treat HDL as a “secondary target” for reduction of CVD events when LDL goals have already been achieved. Data from this study would suggest that exposing patients to additional risk and cost in an attempt to raise HDL in this setting is not warranted. In light of present data, we are once again reminded to “treat the patient, not a number.”
GAS Isn't Always Strep Throat : ID Consult
“For your viewing pleasure …” as Rod Serling once said, I invite you to peruse four alternative group A streptococcus presentations that might not be so obvious at first and for which the approach may be controversial:
▸ Urticaria. Hives may be due to group A streptococcus (GAS), developing even while the patient is on effective anti-GAS treatment. This appears to be an atypical host-specific response and often occurs in children who develop hives in response to other stimuli as well. Unfortunately, the literature on this is mostly anecdotal.
We recently saw a 15-year-old who developed hives 3 days into amoxicillin therapy for GAS pharyngitis. Amoxicillin was changed to azithromycin, but the urticaria intensified. After switching her to two other classes of antibiotics, we deduced that the urticaria wasn't a drug reaction, but a reaction to GAS itself.
Another scenario is the child with recurrent urticaria. Elevated antistreptolysin-O (ASO) or anti-DNase B titers or evidence of GAS in the pharynx via a rapid antigen test or throat culture are indications to try empiric GAS therapy. If the urticaria goes away and stays away, you've solved the problem.
Look for GAS if hives occur more than three times in 6 months without another known trigger, even if the child has no signs of pharyngitis. If it looks like GAS is involved, consider 3 months of prophylactic penicillin in these select patients, particularly during the winter months, when reexposure is most likely.
▸ Movement disorders. It's not very common, but if a child suddenly develops tics or obsessive-compulsive behaviors, check for GAS.
Pediatric autoimmune neuropsychiatric disorders associated with streptococcal infection (PANDAS) was first described in the 1990s by Susan Swedo, M.D., and her colleagues at the National Institute of Mental Health based on five criteria: presence of obsessive-compulsive disorder (OCD) and/or a tic disorder; prepubertal symptom onset; episodic symptom severity; GAS association; and associated neurologic abnormalities (Am. J. Psychiatry 1998;155:264-71).
We recently saw a child with a sudden onset of tics after a febrile illness. Rheumatic fever was considered and the anti-DNase B was elevated. He did not meet the familiar modified Jones criteria. His repetitive hand movements were not really chorea and he had facial tics as well. After 10 days of penicillin, his tics went away, but some unusual facial movements remained for another month.
Six weeks after stopping penicillin, he developed OCD symptoms, which in turn disappeared after 6 weeks of amoxicillin prophylaxis. He continues symptom-free on amoxicillin.
One wonders if he might have been reexposed to streptococcus after the initial penicillin; and, while he didn't subsequently develop clinical pharyngitis, GAS reacquisition may have triggered an antibody response that cross-reacted with neural tissues.
The theory that if you can prevent GAS stimulus, you may prevent neuropsychiatric symptoms is supported by a prospective study published in 2002 by Marie Lynd Murphy, M.D., and Michael Pichichero, M.D. (Arch. Pediatr. Adolesc. Med. 2002;156:356-61).
Both the diagnosis and empiric treatment of PANDAS are still controversial. It seems to me that an antibiotic trial could be justified in the face of symptoms that are quite lifestyle altering for the child and family—even if only some of the small subset with evidence of GAS improve.
However, I'm not yet ready to give intravenous immunoglobulin or order plasmapheresis without a defined investigational protocol.
▸ Fever and petechiae. We immediately think of meningococcemia in a child with fever and petechiae (and so we should), even though GAS is actually more likely. Ray Baker, M.D., and his colleagues found Neisseria meningitidis in 13 (6.8%) of 190 children with fever and petechial rash (8 of the 13 had meningitis), compared with GAS in 10%. No pathogen was found in 72% (Pediatrics 1989;84:1051-5).
Using these data can be tricky. I think we should consider GAS in relatively well-looking febrile children with only a few scattered petechiae and tonsillitis or pharyngitis. If a throat culture or rapid antigen test is positive, immediate hospitalization may not be necessary. Of course, hospitalization and full work-up are necessary if the child looks sick, has more than scattered petechiae or any purpura, or if meningococcus has been in the community lately.
The main clinical use of these data may be to obtain a throat culture before starting antibiotics for presumed meningococcus in the fever/petechiae case.
▸ Joint pain and fever. It seems that we are seeing more children who have fever, arthralgias, and elevated sedimentation rates and C-reactive protein values, but who don't meet the Jones criteria for rheumatic fever. That doesn't mean they don't have poststreptococcal disease.
The Jones criteria for rheumatic fever, first established in 1944 and revised most recently in 1992 (JAMA 1992;268:2069-73), require evidence of antecedent GAS infection along with either two or more major criteria (carditis, polyarthritis, chorea, erythema marginatum, subcutaneous nodules), or one major criterion plus at least two minor criteria (fever, arthralgia, previous rheumatic fever or rheumatic heart disease, elevated acute phase reactants, prolonged PR interval).
This definition leaves us with a conundrum: what to do with the child who has two or more of the minor criteria but none of the major ones, particularly if the child has a single joint arthritis. These may be post-GAS syndromes. Or could the child have some other arthritis that coincidentally occurred following GAS?
Further, do these children need more than 10 days of penicillin, perhaps prophylaxis for up to a year? Without prophylaxis, some who initially had an autoimmune joint flare-up without classic carditis or polyarthritis may convert to full-blown rheumatic fever the next time they're exposed to GAS.
It seems reasonable to put such children on prophylaxis for 12 months, especially during the winter GAS season. If the joint symptoms recur on adequate GAS prophylaxis, you can be more confident that it's not due to GAS and therefore should be referred to a rheumatologist. If the child develops some evidence of valvular abnormality over the year of prophylaxis, then it's an atypical case of rheumatic fever.
“For your viewing pleasure …” as Rod Serling once said, I invite you to peruse four alternative group A streptococcus presentations that might not be so obvious at first and for which the approach may be controversial:
▸ Urticaria. Hives may be due to group A streptococcus (GAS), developing even while the patient is on effective anti-GAS treatment. This appears to be an atypical host-specific response and often occurs in children who develop hives in response to other stimuli as well. Unfortunately, the literature on this is mostly anecdotal.
We recently saw a 15-year-old who developed hives 3 days into amoxicillin therapy for GAS pharyngitis. Amoxicillin was changed to azithromycin, but the urticaria intensified. After switching her to two other classes of antibiotics, we deduced that the urticaria wasn't a drug reaction, but a reaction to GAS itself.
Another scenario is the child with recurrent urticaria. Elevated antistreptolysin-O (ASO) or anti-DNase B titers or evidence of GAS in the pharynx via a rapid antigen test or throat culture are indications to try empiric GAS therapy. If the urticaria goes away and stays away, you've solved the problem.
Look for GAS if hives occur more than three times in 6 months without another known trigger, even if the child has no signs of pharyngitis. If it looks like GAS is involved, consider 3 months of prophylactic penicillin in these select patients, particularly during the winter months, when reexposure is most likely.
▸ Movement disorders. It's not very common, but if a child suddenly develops tics or obsessive-compulsive behaviors, check for GAS.
Pediatric autoimmune neuropsychiatric disorders associated with streptococcal infection (PANDAS) was first described in the 1990s by Susan Swedo, M.D., and her colleagues at the National Institute of Mental Health based on five criteria: presence of obsessive-compulsive disorder (OCD) and/or a tic disorder; prepubertal symptom onset; episodic symptom severity; GAS association; and associated neurologic abnormalities (Am. J. Psychiatry 1998;155:264-71).
We recently saw a child with a sudden onset of tics after a febrile illness. Rheumatic fever was considered and the anti-DNase B was elevated. He did not meet the familiar modified Jones criteria. His repetitive hand movements were not really chorea and he had facial tics as well. After 10 days of penicillin, his tics went away, but some unusual facial movements remained for another month.
Six weeks after stopping penicillin, he developed OCD symptoms, which in turn disappeared after 6 weeks of amoxicillin prophylaxis. He remains symptom-free on amoxicillin.
One wonders whether he was reexposed to streptococcus after the initial penicillin course; although he didn't subsequently develop clinical pharyngitis, GAS reacquisition may have triggered an antibody response that cross-reacted with neural tissue.
The theory that preventing the GAS stimulus may prevent neuropsychiatric symptoms is supported by a prospective study published in 2002 by Marie Lynd Murphy, M.D., and Michael Pichichero, M.D. (Arch. Pediatr. Adolesc. Med. 2002;156:356-61).
Both the diagnosis and empiric treatment of PANDAS remain controversial. It seems to me that an antibiotic trial could be justified in the face of symptoms that are quite lifestyle-altering for the child and family, even if only some of the small subset with evidence of GAS improve.
However, I'm not yet ready to give intravenous immunoglobulin or order plasmapheresis without a defined investigational protocol.
▸ Fever and petechiae. We immediately think of meningococcemia in a child with fever and petechiae (and so we should), even though GAS is actually more likely. Ray Baker, M.D., and his colleagues found Neisseria meningitidis in 13 (6.8%) of 190 children with fever and petechial rash (8/13 had meningitis), compared with GAS in 10%. No pathogen was found in 72% (Pediatrics 1989;84:1051-5).
Using these data can be tricky. I think we should consider GAS in relatively well-looking febrile children with only a few scattered petechiae and tonsillitis or pharyngitis. If a throat culture or rapid antigen test is positive, immediate hospitalization may not be necessary. Of course, hospitalization and full work-up are necessary if the child looks sick, has more than scattered petechiae or any purpura, or if meningococcus has been in the community lately.
The main clinical use of these data may be to obtain a throat culture before starting antibiotics for presumed meningococcus in the fever/petechiae case.
▸ Joint pain and fever. It seems that we are seeing more children who have fever, arthralgias, and elevated sedimentation rates and C-reactive protein values, but who don't meet the Jones criteria for rheumatic fever. That doesn't mean they don't have poststreptococcal disease.
The Jones criteria for rheumatic fever, first established in 1944 and revised most recently in 1992 (JAMA 1992;268:2069-73), require evidence of antecedent GAS infection along with either two or more major criteria (carditis, polyarthritis, chorea, erythema marginatum, subcutaneous nodules), or one major criterion plus at least two minor criteria (fever, arthralgia, previous rheumatic fever or rheumatic heart disease, elevated acute phase reactants, prolonged PR interval).
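For readers who find the rule easier to see laid out explicitly, here is a minimal, purely illustrative sketch of that 1992 decision rule as summarized above. The function and criterion names are my own shorthand, not from any clinical software, and this is not a diagnostic tool:

```python
# Illustrative encoding of the 1992 revised Jones criteria as summarized
# in the text. Assumes evidence of antecedent GAS infection is a yes/no
# input; all names here are the author's shorthand, not a clinical API.

MAJOR = {"carditis", "polyarthritis", "chorea",
         "erythema marginatum", "subcutaneous nodules"}
MINOR = {"fever", "arthralgia", "previous rheumatic fever",
         "elevated acute phase reactants", "prolonged PR interval"}

def meets_jones(findings: set, antecedent_gas: bool) -> bool:
    """True if findings satisfy the 1992 Jones criteria:
    antecedent GAS plus (>=2 major, or 1 major + >=2 minor)."""
    if not antecedent_gas:
        return False
    majors = len(findings & MAJOR)   # count of major criteria present
    minors = len(findings & MINOR)   # count of minor criteria present
    return majors >= 2 or (majors >= 1 and minors >= 2)

# The conundrum case below: two minor criteria but no major ones.
print(meets_jones({"fever", "arthralgia"}, antecedent_gas=True))   # False
print(meets_jones({"carditis", "fever", "arthralgia"}, True))      # True
```

As the first call shows, a child with fever and arthralgia alone falls short of the rule no matter how convincing the antecedent GAS evidence is, which is exactly the gap discussed next.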
This definition leaves us with a conundrum: what to do with the child who has two or more of the minor criteria but none of the major ones, particularly if the child has a single joint arthritis. These may be post-GAS syndromes. Or could the child have some other arthritis that coincidentally occurred following GAS?
Further, do these children need more than 10 days of penicillin (up to a year)? Without prophylaxis, some who initially had an autoimmune joint flare-up without classic carditis or polyarthritis may convert to full-blown rheumatic fever the next time they're exposed to GAS.
It seems reasonable to put such children on prophylaxis for 12 months, especially during the winter GAS season. If the joint symptoms recur on adequate GAS prophylaxis, you can be more confident that they are not due to GAS, and the child should be referred to a rheumatologist. If the child develops some evidence of valvular abnormality over the year of prophylaxis, then it's an atypical case of rheumatic fever.