NSAIDs for bronchitis
’Tis the season to be coughing.
The most common condition we are seeing and will be seeing in the coming months is bronchitis. Bronchitis is a self-limited inflammation of the bronchi due to upper airway infection (i.e., cough without pneumonia), which is most commonly viral in etiology. Antibiotics are not recommended for treatment.
Many of our patients will be making appointments to see us when they hit 10-14 days without improvement. But remember that the cough from bronchitis can last 4 weeks or more. Reports indicate that 60%-90% of patients with acute bronchitis who seek care receive antibiotics. Furthermore, 75% of all antibiotic prescriptions are written for upper respiratory infections – yet most patients, if not all, do not need them.
Many of our patients will say that they have tried the usual over-the-counter remedies, which can ruin the best-laid plans for conservative management. But have they tried ibuprofen? (Assuming there is no contraindication, of course.)
Dr. Carl Llor and his colleagues recently published a randomized, blinded clinical trial evaluating the comparative efficacy of an anti-inflammatory, antibiotic, or placebo in the resolution of cough in patients with bronchitis (BMJ 2013 Oct. 4;347:f5762).
Adults aged 18-70 years were eligible to be randomized if they were presenting with a respiratory tract infection less than 1 week in duration and had cough, discolored sputum, and at least one of three symptoms: dyspnea, wheezing, or chest discomfort or chest pain. Subjects were randomized to ibuprofen 600 mg three times a day, amoxicillin-clavulanic acid 500 mg/125 mg three times a day, or placebo three times a day. Treatment was given for 10 days.
The median number of days with frequent cough was numerically lower, but not statistically significantly lower, in the ibuprofen group (9 days; 95% CI: 8-10 days), compared with participants receiving antibiotics (11 days; 95% CI: 10-12 days) or placebo (11 days; 95% CI: 8-14 days). Adverse events were more common in the antibiotic arm (12%), compared with ibuprofen or placebo (5% and 3%, respectively, P = .008).
Other nonantibiotic remedies have been evaluated in patients presenting with cough. Inhaled fluticasone may be effective, but the cost might be prohibitive for many patients.
For ibuprofen, the price is right – and it may buy us some time before we feel compelled to prescribe antibiotics.
Dr. Ebbert is a professor of medicine, a general internist, and a diplomate of the American Board of Addiction Medicine who works at the Mayo Clinic in Rochester, Minn. The opinions expressed are those of the author.
Green tea to control blood sugars
The patient in your office is a 44-year-old male on no medications with no medical problems. He is extremely anxious about the test results you just reported to him. The last thing you need at the end of an exhausting week is one of the "worried well." But maybe you can help.
The patient’s fasting blood sugar is 119 mg/dL. He exercises daily, has a "walking workstation" in his office, and has a BMI of 23.5. You are diplomatic and compassionate as you give your speech about impaired fasting glucose. He tells you he does not want to take any medications and wonders what he can do to reduce his blood glucose level.
Liu and colleagues recently published a systematic review evaluating the effect of green tea on glucose control and insulin sensitivity. Included studies were randomized controlled trials evaluating the effects of green tea and green tea extract on glucose control and insulin sensitivity. The investigators identified 17 trials comprising a total of 1,133 subjects. Green tea consumption significantly reduced fasting glucose and hemoglobin A1c concentrations, by 1.62 mg/dL (P less than .01) and 0.30% (P less than .01), respectively.
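For readers who think in SI units, that fasting glucose reduction is easier to interpret after a unit conversion. Here is a minimal sketch of the arithmetic, assuming (as labeled above) that the pooled reduction was reported in mg/dL; the conversion factor follows from the molar mass of glucose (about 180 g/mol):

```python
# Convert a change in glucose concentration between mg/dL and mmol/L.
# 1 mmol/L of glucose = 18.016 mg/dL (molar mass ~180.16 g/mol).
MG_DL_PER_MMOL_L = 18.016

def mgdl_to_mmoll(mg_dl: float) -> float:
    """Convert a glucose value or delta from mg/dL to mmol/L."""
    return mg_dl / MG_DL_PER_MMOL_L

# Pooled reduction in fasting glucose from the meta-analysis (assumed mg/dL):
delta = -1.62
print(f"{delta} mg/dL is about {mgdl_to_mmoll(delta):.2f} mmol/L")  # ~ -0.09 mmol/L
```

In other words, the average effect, while statistically significant, is modest – roughly a tenth of a millimole per liter.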
Green tea, or Camellia sinensis, is a rich source of flavanols, and catechins are the predominant form of these flavanols. The most abundant and most studied catechin in green tea is epigallocatechin gallate (EGCG), the most pharmacologically active compound in green tea and the one with the largest beneficial health effects. EGCG has been observed to decrease fat mass, decrease endothelial cell dysfunction, improve insulin sensitivity, decrease cholesterol absorption, and improve nonalcoholic fatty liver disease.
So how much green tea should somebody consume to reduce blood sugars? Another meta-analysis of 324,141 participants and 11,400 incident cases of type 2 diabetes suggested that individuals who drank about 4 cups of tea per day had a 20% lower risk of type 2 diabetes compared with those who drank less or none.
Green tea is an acquired taste. But you feel confident that this patient will quickly acquire it. As you dismiss the patient, you walk toward the microwave to heat up another cup of green tea.
Dr. Ebbert is professor of medicine, a general internist, and a diplomate of the American Board of Addiction Medicine who works at the Mayo Clinic in Rochester, Minn. The opinions expressed are those of the author.
Nothing magical about cannabis
Perhaps some of us have had an experience similar to this: A patient with complex medical problems on multiple medications tells you that of all the "medications" he or she takes, cannabis (which you are not prescribing) works the best ... for everything.
I live in a state that does not allow for the use of medical or recreational cannabis, and this happens to me. I can only imagine how frequently this happens to practitioners in states that allow it.
Cannabis (or marijuana) contains almost 90 cannabinoids. The psychoactive cannabinoid (tetrahydrocannabinol, or THC) is associated with fewer therapeutic possibilities than other constituents, such as cannabidiol (CBD). But we don’t know very much about these compounds. Part of the problem is that cannabis is a schedule I drug, which means that we cannot easily study it.
According to recently released data from the 2012 National Survey on Drug Use and Health (NSDUH), cannabis use continues to increase among U.S. individuals aged 12 years and older. An estimated 31.5 million residents reported using cannabis in the past year, compared to approximately 25 million each year from 2002 to 2008.
The legalization of cannabis should be a sociopolitical debate, not a medico-scientific one. The science is clear. Cannabis is a drug ... and not a clean one. And it’s associated with significant central nervous system effects, especially in young adults.
Unfortunately, controversy about the long-term adverse effects of cannabis persists despite outstanding data such as those published by Meier and colleagues. Data were collected from a prospective cohort of 1,037 individuals followed from birth (in the years 1972 and 1973) to age 38 years. Thorough neuropsychological testing was done twice: once at 13 years of age, before the start of cannabis use, and again at 38 years of age, by which time persistent patterns of cannabis use had been established (PNAS 2012;109:E2657-E2664).
The investigators found that persistent use of cannabis was associated with significant declines in neuropsychological function, and the greatest impairments were in the domains of executive functioning and processing speed. Interestingly, deficits could be perceived by social contacts who were asked to report on distractibility and memory problems. As expected, deficits were greatest for persistent users. Of greatest concern, stopping cannabis use did not clearly restore cognitive skills.
Neurophysiological research tells us that the brain continues to develop until about 25 years of age. This study supports a neurotoxic effect of cannabis during this critical period of development.
My patient was older than the cohort in this study, and this study does not inform us about the impact of cannabis on older brains. While addressing my patient’s possible drug dependency, I will also work on my other treatments so she may be less likely to resort to an illegal drug to alleviate her symptoms.
Dr. Ebbert is professor of medicine, a general internist, and a diplomate of the American Board of Addiction Medicine who works at the Mayo Clinic in Rochester, Minn. The opinions expressed are those of the author.
What should physicians say about electronic cigarettes?
If you haven’t already been asked about them, you will be. So, you need to have an answer. Maybe this will help.
Electronic cigarettes (or "e-cigarettes") were reportedly invented by a Chinese pharmacist who wanted to find a safer way for smokers to inhale nicotine, after his father, a cigarette smoker, died from lung cancer. The basic e-cigarette design is a lithium battery attached to a heating element that vaporizes a solution of either propylene glycol or vegetable glycerin and liquid nicotine. Vaporization allows for inhalation, referred to as "vaping" as opposed to smoking.
There should be little debate about whether smokers should have access to these products. They do right now, and the products are unlikely to be banned in the near future, although age restriction is a moving target. Debating access seems nonproductive and a distraction from the real discussion.
The real discussion should focus on whether public health officials and medical professionals should be recommending them for treatment.
Two important questions need to be answered:
• First, are they safe? Answer: We have no long-term safety data on the impact of repeated inhalation of propylene glycol or vegetable glycerin on lung tissue. Some short-term data suggest that e-cigarettes may cause airway irritation.
• Second, are they effective for increasing smoking cessation?
To help answer the second question, Dr. Christopher Bullen and his colleagues published a randomized, controlled clinical trial evaluating the comparative efficacy of 16-mg nicotine e-cigarettes, nicotine patches (21-mg patch, one daily), or placebo e-cigarettes (no nicotine) (Lancet 2013 Sept. 9 [doi:10.1016/S0140-6736(13)61842-5]). Potential participants were eligible for enrollment if they were at least 18 years of age, had smoked at least 10 cigarettes per day, and wanted to stop smoking. All participants were referred to the telephone quit line for behavioral counseling. Participants were treated for 13 weeks.
At 6 months, smoking abstinence was 7.3% with nicotine e-cigarettes, 5.8% with the nicotine patches, and 4.1% with placebo e-cigarettes. The risk difference for nicotine e-cigarettes vs. patches was 1.51 percentage points; for nicotine e-cigarettes vs. placebo e-cigarettes, it was 3.16 percentage points. Neither difference was statistically significant. Interestingly, e-cigarettes were associated with greater reductions in cigarette smoking, compared with nicotine patches. (None of the study’s authors reported having any relevant conflicts of interest.)
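As a back-of-the-envelope check, a risk difference is just the difference in abstinence proportions, and its reciprocal gives an approximate number needed to treat (NNT). Here is a minimal sketch using the rounded rates above; it reproduces the reported differences to within rounding and is not a substitute for the trial's own analysis, whose confidence intervals are what establish non-significance:

```python
# Risk differences (percentage points) and approximate NNTs from the
# 6-month abstinence rates reported in the Bullen trial (rounded).
def risk_difference_pp(p_treatment: float, p_comparator: float) -> float:
    """Difference in abstinence rates, expressed in percentage points."""
    return (p_treatment - p_comparator) * 100

abstinence = {"nicotine e-cig": 0.073, "nicotine patch": 0.058, "placebo e-cig": 0.041}

for comparator in ("nicotine patch", "placebo e-cig"):
    rd = risk_difference_pp(abstinence["nicotine e-cig"], abstinence[comparator])
    print(f"vs. {comparator}: {rd:.1f} pp, NNT ~ {100 / rd:.0f}")
# vs. nicotine patch: 1.5 pp, NNT ~ 67
# vs. placebo e-cig: 3.2 pp, NNT ~ 31
```

An NNT near 67 against the patch underscores how small the absolute advantage was.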
So, back to the second point. E-cigarettes are not clearly superior to nicotine patches, but this study may have been underpowered because absolute abstinence rates were low.
Currently, e-cigarette manufacturers are spending resources on manufacturing and marketing rather than on generating reliable scientific data or helping to build an international research agenda for these products.
One day, an e-cigarette device may be part of a clinical treatment program for tobacco dependence. But until that day, clinicians are justified in being circumspect about recommending e-cigarettes to cigarette smokers.
Why? Because:
• They are not clearly superior to Food and Drug Administration–approved medications for smoking cessation.
• They are not FDA approved for treatment.
• Short-term safety data suggest they may cause airway irritation.
• Long-term safety data do not exist.
• Smoking reduction is arguably not a relevant clinical outcome, because a significant increase in tobacco-related risk occurs at low levels of exposure.
For clinicians, treatment recommendations are married to the responsibility for unintended consequences. What are those unintended consequences with electronic cigarettes? We need more data.
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. Dr. Ebbert has received consulting fees from GlaxoSmithKline, manufacturer of nicotine replacement products, and research support from Pfizer, manufacturer of varenicline. The opinions expressed are those of the author.
Two calcium channel blockers better than one?
Many of us have become so adept at treating hypertension that we may believe that, while they can give us new drugs, they really can’t teach us any new tricks.
So, stop me if you’ve heard this one: two calcium channel blockers (CCBs) at the same time.
We are aware that dual-agent therapy is likely more efficacious than up-titration of monotherapy. The antihypertensive effect of two drugs from different classes may be five times greater than that of the doubling of monotherapy. But what about two different drugs of the same class, such as CCBs?
CCBs are either dihydropyridines (DHPs) (e.g., amlodipine) or nondihydropyridines (NDHPs) (e.g., verapamil). Dr. Carlos Alviar and his colleagues conducted a systematic review evaluating the efficacy and safety of dual CCBs for the treatment of hypertension (Am. J. Hypertens. 2013;26:287-97).
The authors searched for clinical trials published between 1966 and 2012. Included studies were required to be randomized, to compare dual-agent therapy with monotherapy, to use equivalent doses, and to treat for longer than 1 week. The primary efficacy outcome was the change in systolic blood pressure (SBP) and diastolic blood pressure (DBP) between study groups. Safety outcomes included the risk of adverse events.
Six studies satisfied the inclusion criteria. Dual CCB therapy reduced SBP significantly more than monotherapy: by 10.9 mm Hg compared with a DHP alone (P less than .01) and by 14 mm Hg compared with an NDHP alone (P = .002). Dual CCB therapy also reduced DBP by 5.5 mm Hg more than a DHP alone (P less than .001) and by 5.3 mm Hg more than an NDHP alone (P = .03). Mean heart rate changes from baseline were –4.0, 2.0, and –6.0 beats/minute with dual CCB therapy, DHP, and NDHP, respectively.
No significant increases in edema, headache, or flushing were observed with dual CCB therapy. Notably, constipation was lower with dual CCB than with NDHP alone.
The drugs may be synergistic because of the negative inotropic and chronotropic effects of the NDHP and the vasodilatory effect of the DHP, or their effects simply could be additive. The National Kidney Foundation Hypertension and Diabetes Executive Committees Working Group includes dual CCB therapy among the recommended treatment approaches for hypertension. However, some guidelines, including the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) and those of the American Society of Hypertension, do not recommend this approach.
The long-term effects of this combination are unknown, and whether dual CCB therapy reduces cardiovascular morbidity and mortality beyond what monotherapy achieves is uncertain. But for select patients with hard-to-control blood pressure, this approach might be a strategy to try before hunting for a pharmacy that still dispenses methyldopa.
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. He reported having no relevant financial conflicts.
This column, "What Matters," appears regularly in Internal Medicine News, a publication of IMNG Medical Media.
Home visits to reduce falls
A couple of my patients had a rough summer: in the past month, two of them, both elderly, fell and broke their hips. Approximately 30% of community-dwelling people aged 65 years and older experience at least one fall per year, and falls are the leading cause of home injury deaths among adults aged 80 years or older.
Although I have been diligent about assessing bone mineral densities and addressing gait abnormalities, these two accidents happened. While reading through the emergency department and surgical notes to monitor my patients’ progress, I felt some despair and resorted to musings about what else could have been done to prevent these outcomes.
The medical literature suggests that multimodality approaches to the assessment and reduction of risk factors can reduce the rates of falls. Home visits provide a unique "inside" view of potential risk factors for falls that can be addressed prior to a patient breaking a hip. Should I have gone to their homes to prevent this?
Tobias Luck of the University of Leipzig and his colleagues conducted a multicenter, randomized clinical trial assessing the efficacy of a home visit intervention in a sample of community-dwelling individuals aged 80 years and older in Germany. Individuals who met the age criteria, were living at home, and had functional impairment of three or more activities of daily living were eligible for enrollment. All participants received baseline interviews in their homes (Clin. Interv. Aging 2013;8:697-702).
For participants in the intervention group, multidisciplinary teams analyzed the baseline assessments, developed individualized interventions, and conducted home counseling; a booster session was also provided. The primary outcome was the incidence of institutionalization over the 18-month study period.
Analyses were based upon 230 participants who remained in the study (112 control patients, 118 intervention patients). A significant decrease in the number of falls from baseline to follow-up was observed in the intervention group (incidence rate ratio, 0.63) and a significant increase was observed in the control group (incidence rate ratio, 1.96).
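An incidence rate ratio (IRR) is simply the fall rate during follow-up divided by the fall rate at baseline, so a value below 1 means fewer falls and a value above 1 means more. Here is a minimal sketch of the calculation; the counts and person-time below are hypothetical, chosen only to illustrate how IRRs like those reported could arise, and are not the trial's actual data:

```python
# Incidence rate ratio: follow-up event rate divided by baseline event rate.
def incidence_rate_ratio(events_followup: int, events_baseline: int,
                         time_followup: float, time_baseline: float) -> float:
    """IRR < 1 means the event rate fell; IRR > 1 means it rose."""
    return (events_followup / time_followup) / (events_baseline / time_baseline)

# Hypothetical counts over equal person-time, for illustration only:
print(incidence_rate_ratio(38, 60, 100.0, 100.0))  # ~0.63, falls decreased
print(incidence_rate_ratio(59, 30, 100.0, 100.0))  # ~1.97, falls nearly doubled
```

Read this way, the trial's results say the intervention group's fall rate dropped by roughly a third while the control group's nearly doubled.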
Data suggest that multifactorial interventions to prevent falls work. The challenge for us is to see how we can channel the resources dedicated to office practice toward conducting home visits. In a capitated environment, this may become increasingly easy to justify.
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. He reported having no relevant financial conflicts. The opinions expressed are those of the author.
A simple intervention for weight control
For weight-loss or weight-gain prevention, we know of no easy fix or magic bullet. What we have is a multitude of interventions, tools, tips, and tricks that "sort of" work and, when used alone or in combination, may help our patients combat their weight challenges. These interventions vary in cost to the individual and the health care system. At one end, we have bariatric surgery; at the other ... water.
One proposed factor contributing to the world’s obesity epidemic is the increasing preference for sugar-sweetened beverages. Data suggest that we are not saved by low-calorie or no-calorie sweeteners, which may "prime" consumers for sweetness – leading to increased caloric consumption at the next meal.
Drinking water in lieu of sugar-sweetened beverages has been shown to reduce total energy intake, increase the feeling of fullness, reduce the perception of hunger, and increase energy expenditure.
Dr. Rebecca Muckelbauer of the Berlin School of Public Health and her colleagues conducted a systematic review evaluating the association between water consumption and weight (Am. J. Clin. Nutr. 2013;98:282-99). They included all types of published studies describing associations between water consumption and body weight among adults at least 18 years of age. The primary outcome of interest was any difference in body weight outcome based on amount of water consumption.
Data suggested that among participants engaged in a program of dietary modification for weight loss or weight maintenance, increased water consumption reduced body weight after 3-12 months, compared with following the program alone. This effect was identified in a randomized trial, a nonrandomized trial, and an observational study.
Increasing water consumption is an inexpensive intervention that the vast majority of our patients with weight concerns can try. Two studies evaluated the impact of consuming 0.5 L (16 ounces) of water before each of the three daily meals. Premeal water may reduce caloric consumption during the meal because of earlier satiety. This may be the simplest way to instruct our patients on how this could work.
The effect size of a "water intervention," especially if given as a single intervention, is anticipated to be small. As a result, patients might easily fail to adhere to it. Ideally, it would be marketed to our patients as part of a comprehensive weight loss strategy.
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. He reported having no relevant financial conflicts.
Mindful eating
If you are eating while reading this article, stop. First, read the article. Then proceed to eat, focusing on nothing other than your eating and your hunger. This is called "mindful eating."
Mindful eating has roots in Buddhist teachings and may hold one of the keys to dealing with the obesity epidemic. But mindful eating is not a diet, and it is not about giving up food. It is about limiting distractions while eating and experiencing food more completely, rather than, say, expeditiously wolfing it down between patients. Several studies have shown that caloric intake increases when people are distracted while eating (for example, by reading, driving, or socializing).
The opposite of mindful eating is, of course, "mindless eating." Investigators at the University of Surrey in the United Kingdom evaluated the impact of different forms of distraction on eating behavior (Appetite 2013;62:119-26). A total of 81 participants were randomly allocated to one of four settings: driving, television viewing, social interaction, or eating alone. The driving condition took place in a driving simulator, the television condition involved watching an episode of "Friends," and the social condition involved talking with one of the investigators about various topics.
In these settings, the participants received a British potato snack food called Hula Hoops and were asked to "taste test" them to justify food consumption. Measures of the desire to eat, such as hunger, fullness, and motivation to eat, were assessed before and after the intervention.
The investigators observed that individuals watching television consumed more food mass than did those in the social or driving conditions. For individuals eating alone, food consumption was associated with a reduced desire to eat. Watching television was associated with a decreased desire to eat, whereas social eating resulted in increases in the desire to eat. Interestingly, food intake was unrelated to baseline levels of hunger, fullness, or motivations to eat.
The authors suggest that distraction can draw attention away from both hunger and the process of eating. Distraction may trigger the onset of eating, but perhaps more importantly, it also may blunt awareness of the consequences of that eating, such as decreased hunger and increased fullness.
For our patients struggling with obesity, we can encourage them to focus on these consequences by limiting distractions while eating.
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. He reported having no relevant financial conflicts. The opinions expressed are those of the author.
SSRIs after stroke
Sixteen million people worldwide are affected by stroke annually, and 60 million individuals are stroke survivors. The financial and personal costs of stroke are staggering. Mitigating the sequelae of stroke frequently requires both resources and clinical acumen. One of these sequelae is depression.
The estimated prevalence of depression at any time after stroke is 29%. Predictors of depression include cognitive impairment, stroke severity, prestroke depression, and anxiety. Depression remission is associated with improved functional outcome at 3 months and 6 months, compared with continuing depression. One of the interventions suggested for depression after stroke is the use of selective serotonin reuptake inhibitors.
Investigators conducted a systematic review evaluating the effects of SSRIs on clinical outcomes after stroke (Stroke 2013;44:844-50). Fifty-two studies randomizing 4,059 patients to an SSRI or a control condition were included in the final meta-analysis.
SSRIs were significantly associated with less dependency, disability, neurologic impairment, depression, and anxiety. The salutary effects of SSRIs on disability, depression, and neurologic deficits were greater among participants who were depressed when they were randomized.
No increased risk of death, seizures, GI side effects, or bleeding was observed with the use of SSRIs.
Interestingly and importantly, depression was not one of the inclusion criteria for 16 of the included trials. SSRIs may have neurogenic and neuroprotective effects, and animal data suggest that fluoxetine and sertraline facilitate recovery after cortical ischemia.
This evidence poses the reasonable question of whether SSRIs could or should be started in poststroke patients regardless of depressive symptoms.
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. He reported having no relevant financial conflicts. Reply via e-mail at [email protected].
Risk stratifying asymptomatic microscopic hematuria
Asymptomatic microscopic hematuria is a clinical challenge fraught with uncertainty, risk, and expense. With an estimated hematuria prevalence of 9%-18% and a threshold of at least 3 red blood cells per high-power field (RBC/HPF) as the cutoff for evaluation, we are all dealing with this problem – a lot. CT urograms, urine cytologies, and cystoscopies commonly compose the evaluation algorithm, all in search of a relatively low-prevalence disease: urinary tract cancer.
What we need is a reliable, reproducible way to place patients into different risk categories.
Dr. Ronald K. Loo of the Southern California Permanente Medical Group, Los Angeles, and his colleagues tried to answer this need by evaluating the performance of the Hematuria Risk Index (Mayo Clin. Proc. 2013;88:129-38).
The investigators assembled a prospective cohort of patients who were in the Kaiser Permanente system and had been referred to a urologist to undergo a full evaluation for asymptomatic microscopic hematuria. They derived the risk index from a "test cohort" composed of 2,630 patients, among whom 2.1% had a cancer detected and 1.9% had a pathologically confirmed urinary tract cancer.
The Hematuria Risk Index they developed is scored as follows: 4 points each for gross hematuria and age of at least 50 years, and 1 point each for a history of smoking, male gender, and more than 25 RBC/HPF on recent urinalysis. Scores range from 0 to 11 points, with patients stratified as low risk (0-4 points), moderate risk (5-8 points), or high risk (9-11 points).
Applying this risk index to a validation cohort, cancer was detected in 10.7% of the high-risk patients, 2.5% of the moderate-risk patients, and 0% of the low-risk patients.
Importantly, Dr. Loo and his associates concluded that microscopic hematuria is an unreliable indicator of urothelial malignancy. They further concluded that the risk of identifying a urinary tract cancer in anyone younger than 50 years without a history of gross hematuria is close to zero. Non-neoplastic findings included urinary stones, prostatic bleeding, urinary tract infection, and glomerular disease.
This is a fantastically helpful study. Now, getting this Hematuria Risk Index as an app on my smartphone will make my year.
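Until that app arrives, the scoring logic is simple enough to sketch in a few lines. This is a minimal illustration based on the point values and risk strata described above; the function and variable names are my own.

def hematuria_risk_index(gross_hematuria, age_50_or_older, smoking_history, male, rbc_hpf_over_25):
    # Point values from Loo et al.: 4 points each for gross hematuria and
    # age of at least 50 years; 1 point each for smoking history, male
    # gender, and more than 25 RBC/HPF on recent urinalysis.
    score = 4 * gross_hematuria + 4 * age_50_or_older
    score += smoking_history + male + rbc_hpf_over_25
    if score <= 4:
        stratum = "low"       # 0% cancer detected in the validation cohort
    elif score <= 8:
        stratum = "moderate"  # 2.5% cancer detected
    else:
        stratum = "high"      # 10.7% cancer detected
    return score, stratum

# Example: a 62-year-old male smoker with microscopic hematuria only
# scores 4 + 1 + 1 = 6 points, placing him in the moderate-risk group.
print(hematuria_risk_index(False, True, True, True, False))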
Dr. Ebbert is professor of medicine and a primary care clinician at the Mayo Clinic in Rochester, Minn. He reported having no relevant financial conflicts. The opinions expressed are those of the author. Reply via e-mail at [email protected].