Doripenem Knocks Out Tough Pediatric Infections
CHICAGO – Doripenem was a safe and effective treatment for pediatric patients with septicemia, pneumonia, and other infections in a phase-III trial of 95 children.
Two or three 20-mg/kg doses of the parenteral carbapenem antibiotic cured the infections of 92 of the children enrolled in the multicenter trial. The microbiological cure rate among the 75 subjects in whom at least one bacterial pathogen was isolated at baseline was 92%, reported Dr. Keisuke Sunakawa of Kitasato University, Tokyo.
Doripenem (Doribax) is approved in the United States for the treatment of complicated intra-abdominal infections and complicated urinary tract infections in adults. It is also approved for nosocomial pneumonia in Europe and for a variety of bacterial infections, including septicemia and pneumonia, in Japan, Dr. Sunakawa said at the annual Interscience Conference on Antimicrobial Agents and Chemotherapy.
Because of the drug’s antibacterial activity against known causative pathogens in pediatric infections, including penicillin-resistant Streptococcus pneumoniae (PRSP) and beta-lactamase–nonproducing, ampicillin-resistant Haemophilus influenzae (BLNAR), the investigators sought to determine its safety and efficacy in children. Toward this end, they enrolled 100 children between the ages of 2 months and 13 years with pneumonia, urinary tract infections, middle otitis, septicemia, and other infections.
Because doripenem does not have specific pediatric dosing recommendations, the investigators performed a Monte Carlo simulation based on a pharmacokinetic-pharmacodynamic model, which suggested that 20 mg/kg given two or three times daily would provide optimal coverage, Dr. Sunakawa said in a poster presentation. Patients received infusions based on this dosing regimen for a maximum of 14 days and were followed up 7 days after the last dose. The primary end point was the clinical cure rate at the end of therapy.
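For readers unfamiliar with how such dosing simulations work, the sketch below illustrates the general Monte Carlo probability-of-target-attainment approach commonly used for time-dependent antibiotics such as carbapenems. All parameter values, distributions, and the one-compartment model here are illustrative assumptions for demonstration only; they are not the investigators' actual pharmacokinetic-pharmacodynamic model.

```python
import math
import random

def simulate_pta(dose_mg_per_kg, doses_per_day, n_patients=10000,
                 mic=1.0, target_ft_above_mic=0.40):
    """Estimate the probability of target attainment (PTA): the fraction
    of simulated patients whose drug concentration stays above the MIC
    for at least 40% of the dosing interval (a typical carbapenem target).

    Uses a simplified one-compartment bolus model with hypothetical
    log-normal variability in volume of distribution and elimination rate.
    """
    random.seed(42)                      # reproducible virtual population
    tau = 24.0 / doses_per_day           # dosing interval in hours
    hits = 0
    for _ in range(n_patients):
        # Hypothetical population PK parameters (illustrative values only)
        vd = random.lognormvariate(0, 0.3) * 0.25   # L/kg
        ke = random.lognormvariate(0, 0.3) * 0.7    # 1/h elimination rate
        c0 = dose_mg_per_kg / vd                    # peak concentration, mg/L
        if c0 <= mic:
            continue                                # never above MIC
        # Time above MIC from C(t) = c0 * exp(-ke * t)
        t_above = math.log(c0 / mic) / ke
        if min(t_above, tau) / tau >= target_ft_above_mic:
            hits += 1
    return hits / n_patients
```

Running this for candidate regimens (e.g., `simulate_pta(20, 2)` vs. `simulate_pta(20, 3)`) yields the proportion of the virtual population expected to meet the pharmacodynamic target, which is how investigators compare dosing schedules before a trial.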
Of the 95 children (50 boys and 45 girls) included in the final analysis, 58 had pneumonia, 12 had urinary tract infections, 8 had otitis media, and 17 had septicemia or other infections, with mean treatment durations of 6.4, 6.4, 6.5, and 6.1 days, respectively.
The clinical response rates by infection type at the end of therapy were 97% for pneumonia, 100% for urinary tract infections, 100% for middle otitis, and 94% for septicemia and other infections, he said.
The one treatment "failure" was a patient in the latter group with perimandibular phlegmon who stopped treatment because she underwent additional endodontic surgery at another hospital. The recurrence/reinfection rate at follow-up was 1.4%, Dr. Sunakawa said at the meeting, which was sponsored by the American Society for Microbiology.
When assessed by microbiological response, 6 of the 75 patients in whom at least one baseline pathogen had been detected failed treatment, including one patient with gram-negative H. influenzae and five with polymicrobial (S. pneumoniae and H. influenzae) infection, he reported.
"Of the drug-resistant genes [identified by PCR], all but three were cured with doripenem," he said, including one PRSP infection, one BLNAR infection, and one beta-lactamase–producing, amoxicillin clavulanate–resistant H. influenzae infection.
Regarding safety, "the adverse events were consistent with those observed at the 250- to 1,000-mg dose in adults," Dr. Sunakawa stated. "There were no new safety signals, and there were no central nervous system events, such as seizures, which are a concern for this drug class."
At least one adverse event was reported in 2 of the patients who received twice-daily doses (40%) and in 54 of those who received thrice-daily doses (57%). The most common adverse events were diarrhea or loose stool, followed by injection site reactions and increases in platelet count, ALT, AST, and eosinophil count, Dr. Sunakawa said. None of the events in the twice-daily dosing group and 27 of those in the thrice-daily dosing group were considered drug related, he said.
The results suggest that treatment with doripenem may be a good option for severe, intractable pediatric infections, particularly in light of the increasing frequency of drug-resistant pathogens, Dr. Sunakawa concluded. The drug is not yet indicated for use in children in the United States, Europe, or Japan.
Dr. Sunakawa disclosed serving as a scientific advisor for Shionogi & Co.
FROM THE ANNUAL INTERSCIENCE CONFERENCE ON ANTIMICROBIAL AGENTS AND CHEMOTHERAPY
Major Finding: The microbiological cure rate for doripenem among the 75 subjects in whom at least one bacterial pathogen was isolated at baseline was 92%.
Data Source: A phase III open-label prospective clinical trial assessing the treatment efficacy of 20 mg/kg doripenem given two to three times daily to 95 children with pneumonia, urinary tract infections, middle otitis, septicemia, and other infections.
Disclosures: Dr. Sunakawa disclosed serving as a scientific advisor for Shionogi & Co., LTD.
Shorter Telaprevir Regimen Yields Best HCV Outcomes
Extending peginterferon/ribavirin therapy from 24 to 48 weeks does not benefit most genotype 1 hepatitis C patients who achieve an extended rapid viral response to a telaprevir-based combination protocol, according to results from an open-label, randomized trial reported in the Sept. 15 issue of the New England Journal of Medicine.
The telaprevir-based regimen in the phase III Illuminate (Illustrating the Effects of Combination Therapy with Telaprevir) trial consisted of 750 mg of telaprevir every 8 hours, 180 mcg of peginterferon alfa-2a per week, and 1,000-1,200 mg of ribavirin per day for 12 weeks, followed by peginterferon/ribavirin. Those treatment-naive patients who had a rapid response to the telaprevir-based regimen were randomized, after week 20, to receive the dual therapy for either 4 more weeks or 28 more weeks. Rapid response was based on undetectable RNA levels of the hepatitis C virus (HCV) at weeks 4 and 12.
Patients who had not achieved an extended rapid viral response (eRVR) were assigned to the 48-week protocol, according to Dr. Kenneth Sherman of the University of Cincinnati College of Medicine and his colleagues.
In all, 540 patients were initially treated with the 12-week telaprevir regimen described above, and 352 patients achieved eRVR. Of these, 322 remained on treatment; 162 were randomized to the 24-week protocol and 160 were randomized to the 48-week treatment arm, they wrote (N. Engl. J. Med. 2011;365:1014-24).
The overall rate of sustained virologic response (SVR) in the full study population was 72%, while the rates in the patients who achieved eRVR were 92% in the 24-week group and 88% in the 48-week group, the investigators reported.
Of the 118 patients who had not achieved eRVR in the initial trial and who were assigned to the 48-week protocol and the 100 patients who discontinued treatment before week 20, 76 (64%) and 23 (23%) of patients, respectively, achieved an SVR, they wrote.
More patients in the shorter-treatment group completed the treatment compared with the longer-treatment group (99% vs. 74%).
The relapse rate post treatment was 8% in the overall group, and it was 6% and 3%, respectively, in the 24- and 48-week groups. "The HCV RNA level was undetectable at 72 weeks after the start of treatment in 70% of patients overall," the authors reported, with an absolute difference of –0.5 percentage points between the 24-week (85%) and the 48-week (85.5%) groups.
There were more serious adverse events, most commonly anemia, in the longer-duration group than in the shorter, and there were more adverse event–related treatment discontinuations in the longer-duration group as well, the authors wrote. Specifically, 2% of patients in the 24-week group and 10% of those in the 48-week-group who had an eRVR experienced serious adverse events.
Eighteen percent of the patients overall, and 1% and 12% of the 24-week and 48-week groups, respectively, discontinued all of the study drugs because of adverse events, which included fatigue, pruritus, nausea, anemia, headache, rash, insomnia, diarrhea, and flulike illness, they reported.
Importantly, this response-guided treatment regimen "resulted in a shorter treatment duration with high rates of sustained virologic response for approximately two-thirds of treated patients," the authors stressed. Further, "the treatment regimen was highly effective in patients with a historically poor treatment response, including blacks, patients with bridging fibrosis or cirrhosis, and patients with high HCV RNA levels." The shorter treatment strategy appears to decrease the risk of exposure to potential side effects from telaprevir, an oral HCV protease inhibitor, as well as the adverse events associated with the 48-week peginterferon and ribavirin regimen.
The Illuminate study was supported by Vertex Pharmaceuticals and Tibotec. The authors disclosed relationships with numerous pharmaceutical companies, including Vertex, which manufactures telaprevir, and Tibotec, which was involved in the drug’s development.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Major Finding: Twenty-four weeks of response-guided telaprevir-based HCV treatment is as effective as 48 weeks with fewer side effects in previously untreated patients. Among patients who had achieved an extended rapid virologic response, sustained virologic responses were achieved in 92% of those on 24 weeks of therapy versus 88% of those on 48 weeks of therapy.
Data Source: Multinational, randomized, noninferiority trial comparing sustained virologic response rates at 24 and 48 weeks in 322 previously untreated HCV genotype 1 patients who initially showed good virologic response to telaprevir treatment plus peginterferon/ribavirin at weeks 4 and 12.
Disclosures: The Illuminate study was supported by Vertex Pharmaceuticals and Tibotec. The authors disclosed relationships with numerous pharmaceutical companies, including Vertex, which manufactures telaprevir, and Tibotec, which was involved in the drug’s development.
It's All in the Translation
Translational medicine in rheumatology has benefitted from unsung but successful bridge-building efforts that facilitate mutually beneficial research relationships between basic scientists and clinicians working toward a common goal, and lately the fruits of such efforts have led to ground-breaking drug discoveries and therapeutic advances.
For rheumatology in particular, because the discipline encompasses multiple organ systems and diverse pathology, such translational research is critical for gaining an understanding of the complexities of the immune system and disease mechanisms and developing and testing treatment strategies, according to Dr. Iain McInnes, professor of experimental medicine and rheumatology at the University of Glasgow (Scotland). "It’s a simple concept, really. It’s the idea that we might understand more about the basic science pathways if we look at them operating in the real world. And it’s a two-way street. Basic science can in turn be informed by the clinical pathologic entity that people come to study."
Examples of "good translation stories," said Dr. McInnes, a clinician/scientist himself, include the tumor necrosis factor (TNF) blocking story and the interleukin-1 (IL-1) story. The development of the TNF blocking therapies, which have had a major impact on the treatment of rheumatoid arthritis and other inflammatory diseases, followed "a series of basic science and clinical medicine iterations, initially in the infectious diseases world and eventually in immunology and rheumatology," he said.
"The evolution of the IL-1 biology is an especially interesting translational story," Dr. McInnes acknowledged. Since it was first cloned in the 1980s, the cytokine family (IL-1α and IL-1β) was identified as a key player in regulating inflammatory processes. This led to the development of IL-1 inhibitors, which were tested primarily in rheumatoid arthritis, but with only modest success, he said. "But this helped investigators understand how IL-1 was synthesized and how its biology was regulated." Such insights eventually led to the development and testing of IL-1–directed agents such as anakinra and rilonacept in patients with hereditary autoinflammatory conditions and in nonhereditary inflammatory diseases associated with aberrant IL-1 signaling, including familial Mediterranean fever, Muckle-Wells syndrome, neonatal onset multisystem inflammatory disease, and gout. In turn, the efficacy of IL-1 blockade in the treatment of many of these conditions "has changed the understanding of these disorders," he said, and has led to investigation of IL-1's role in other diseases, including adult-onset Still's disease and systemic juvenile idiopathic arthritis.
The IL-1 story is "elegant science" with respect to the back and forth between the lab and clinical correlates, Dr. McInnes explained. "Although it ultimately did not lead to a good rheumatoid arthritis treatment, the continual cross talk allowed relevance to be maintained and clinical pathology to direct the lab focus over time."
Despite the successes, the bench-to-bedside lag is real and sometimes inevitable given the actual and perceived separation between basic science and clinical research. One of the fundamental reasons for the schism, according to Dr. McInnes, is that scientists and clinicians often don’t share the same philosophy. "We have to be careful about generalizations but, from an academic point of view, a basic scientist is driven by trying to answer a legitimate scientific question: Why is the sky blue? Why are daffodils yellow? How does this chemical activate this target cell?" he said. "So in our area of autoimmune research, a basic scientist’s approach to cytokine research would be, ‘I’m just going to keep chasing down this molecule and its biological effects until I get the answer.’ "
The clinical investigator, on the other hand, "is trying to understand the complex matrix of events that lead to the pathology or disease state, so he or she may well rely on a basic scientific pathway for a certain period of time, but if it turns out that the pathway is really not involved in the disease pathogenesis after a period of investigation, the clinician will part company with that scientist." So while the scientist presses ahead trying to answer the basic scientific question, the clinician investigators will be driven by where the disease takes them, he said.
The separation between basic science and clinical investigation is not deliberate as much as it is circumstantial, according to rheumatologist Dr. Bruce Cronstein, director of the Clinical and Translational Science Institute at New York University. "Many bench researchers are not aware of whom to contact among their clinical colleagues to best obtain relevant collaboration, and clinicians are also unaware of whom to contact among their bench research colleagues," he said. Further, in the United States, "rewards for collaboration have not aligned well with clinical requirements. Increasing pressure to maximize income [both personal and institutional] by seeing more patients has diminished the time for clinicians to collaborate by gathering extensive data about patients or biosamples," he said.
Also, the mechanisms for sharing credit are poorly defined and there are few mechanisms available for scientists to share grants and funding with their clinical colleagues, according to Dr. Cronstein. "Does a clinician become an author of a high-profile science paper for supplying one patient’s samples? Two? Twenty?" And although the National Institutes of Health has recently defined the coinvestigator mechanism, "even that tends to slight the noncorresponding author." Although the importance of bringing clinical and scientific colleagues together is generally understood to be critical, "finding the appropriate venues to do so is very difficult to accomplish in practice," he said.
In an ideal world, the science/clinical collaboration would be more fluid than fixed, whereby the clinical investigator studying a pathologic state "would interact with several different scientists who offer their appropriate expertise to tease out different components and different pathways," said Dr. McInnes. "One of the difficulties is that we often don’t know where the starting point is in the pathological journey. We are informed by the circumstances of where we are looking." In rheumatoid arthritis, for example, "a lot of our thinking [behind recent developments] was provoked by what the joint looks like after 5-10 years of disease. Maybe the next advance will be provoked by what the joint looks like after just a couple of days of arthritis, if we could get to that window," he said. Out of circumstance, then, the divide between basic and clinical research is process driven because there are many different processes at work at different phases even in one disease."
Each discipline also has its own distinct obstacles that can preclude fluid cooperation. "At the moment, the challenges of successfully translating a bit of science into an understanding of pathogenesis are gargantuan. There’s the ethics of looking at human tissue, the willingness of patient populations to participate, and just the pragmatics of whereabouts in the patient you can look," said Dr. McInnes. "If the disease is a disease of the lymph node or bone marrow, before it becomes a disease of the joint, it’s not impossible but it is quite difficult to get bits of lymph node and bone marrow from human beings, although that’s some of the real interesting science that underpins what the disease may be." Unlike cancer, in which biopsies are part of routine care and as such can more readily be used for scientific investigation, "it’s not routine to biopsy joint tissue or synovial lining in rheumatoid arthritis patients, even if it’s something you would like to do, and it’s absolutely not routine practice to biopsy lymph nodes or bone marrow. The ethics are marginal."
Finally, there are simply not enough properly trained clinician scientists, in medicine in general and rheumatology specifically, Dr. McInnes said. "When they do exist, they are often hard-pressed to meet the demands of both the scientific and clinical communities." Efforts to chip away at this particular barrier are underway, thanks to innovative collaborations, within and between universities and medical schools, often supported by government and/or pharmaceutical company funding. The university-wide Clinical and Translational Science Institute at NYU, for example, in partnership with the New York City Health and Hospitals Corporation and with funding support from the NIH, is one of more than 45 such centers nationwide charged with the task of training clinician scientists and accelerating advances from the lab to the clinic.
Similarly, Dr. McInnes is deputy director of a novel translational medicine consortium comprising the Scottish universities of Aberdeen, Dundee, Edinburgh, and Glasgow, funded by the Wellcome Trust, a global charitable foundation, and Pfizer. Called the Scottish Translational Medicine and Therapeutics Initiative (STMTI), its mandate is to create a "new cadre" of academic clinicians with expertise in translational medicine and treatment by offering doctoral fellowship training programs for clinicians. "The goal is to meet the ongoing need for appropriately trained clinical investigators who have the basic science understanding who are backed by basic science training and expertise," he said. "Such individuals can act de facto as investigators, but also as catalysts. They can drive motivation within the basic science community and also engage the clinical community."
Importantly, however, translational activity cannot be forced, Dr. McInnes stressed. There are many ways to achieve collaboration, but, in my opinion, they all require one thing: curiosity," he said. "If a scientist and clinical investigator share an interest in getting an answer to questions that are either identical or sufficiently close to each other, that is a fruitful platform for a successful translational partnership." For example, if a scientist can demonstrate that a pathway is present in the disease state of interest to the clinician, both the scientist and the clinician will be curious to understand how it operates within that disease state and that will drive their work."
Additionally, translational research efforts are most likely to succeed using a team approach, Dr. McInnes said. "It is essential to recognize the different disciplines necessary to properly address the complex issues in human disease," he stressed. "The team requires people coming together and being prepared to work together and possibly change their own way of thinking a little bit."
Efforts to foster such teams – particularly across public, private, and corporate sectors – can be hindered by administrative roadblocks, including concerns about obtaining consent from patients, intellectual property issues, and who stands to gain from knowledge obtained from any given study," Dr. Cronstein said. In addition, the "demonization" of the pharmaceutical industry in recent years "makes it difficult to collaborate with pharma without being penalized or at least having to run a major gauntlet of paperwork and review," he said. "Clearly, pharma will try to obtain a benefit for itself, but we can collaborate successfully to develop new drugs and new understanding of the diseases of the patients we care for."
At the end of the day, rheumatology is poised to gain much from such efforts. "The field of rheumatology stands to gain new understanding of the diseases that afflict our patients, new therapies for these diseases, and novel targets for development of drugs that can benefit our patients," said Dr. Cronstein. "The advantages of collaboration are overwhelming and the danger of fragmentation of efforts is critical."
The Translational Journey of JAKs
"Some of the most exciting translational research in rheumatology right now is the use of intracellular signal proteins as targets of small-molecule drugs," said Dr. McInnes. "There’s irony in this, because rheumatologists use small-molecule inhibitors all the time, including methotrexate and sulfasalazine. The difference is we’re now using new molecular entities that have been designed specifically to seek out some of the signal pathways that very elegant biology over the last 10-20 years has shown to have a role in inflammation."
The work by Dr. John O’Shea on cytokine signal transduction and the roles of janus kinases (JAKs) and signal transducer and activator of transcription (STAT) factors in immune cell development and differentiation are an excellent example, Dr. McInnes said. The research by Dr. O’Shea, scientific director of the intramural research program at the National Institute of Arthritis and Musculoskeletal and Skin Diseases at the National Institutes of Health in Bethesda, Md., led to an NIH patent related to JAKs as a new class of immunosuppressive drug. Through a cooperative research agreement with Pfizer, a JAK3 compound (tofacitinib) is currently in phase III trials.
"These drugs are still not licensed, but the [translational] success story is that the proof of concept that JAKs are involved in the pathogenesis has been achieved," Dr. McInnes said. "When you block [the molecules], patients get better. That doesn’t mean you’ve got a drug, but it does tell you the biology pans out pretty well."
The research developments in this scenario were very much driven around the science of the JAKs initially, "then the investigators looked at people whose immune systems didn’t work very well to see if JAKs were deficient in them, which they were," Dr. McInnes explained. "The next step was to flip back to the lab to consider whether that information could be therapeutically useful, and eventually it found its way back to rheumatology practice.
"Although we still don’t know all that much about how these pathways work in rheumatoid tissue," Dr. McInnes said, "the translation journey [of JAKs] thus far "is one to be admired."
Dr. McInnes and Dr. Cronstein reported no relevant conflicts of interest.
Translational medicine in rheumatology has benefited from unsung but successful bridge-building efforts that foster mutually beneficial research relationships between basic scientists and clinicians working toward a common goal. Lately, the fruits of such efforts have led to groundbreaking drug discoveries and therapeutic advances.
For rheumatology in particular, because the discipline encompasses multiple organ systems and diverse pathology, such translational research is critical for gaining an understanding of the complexities of the immune system and disease mechanisms and developing and testing treatment strategies, according to Dr. Iain McInnes, professor of experimental medicine and rheumatology at the University of Glasgow (Scotland). "It’s a simple concept, really. It’s the idea that we might understand more about the basic science pathways if we look at them operating in the real world. And it’s a two-way street. Basic science can in turn be informed by the clinical pathologic entity that people come to study."
Examples of "good translation stories," said Dr. McInnes, a clinician/scientist himself, include the tumor necrosis factor (TNF) blocking story and the interleukin-1 (IL-1) story. The development of the TNF blocking therapies, which have had a major impact on the treatment of rheumatoid arthritis and other inflammatory diseases, followed "a series of basic science and clinical medicine iterations, initially in the infectious diseases world and eventually in immunology and rheumatology," he said.
"The evolution of the IL-1 biology is an especially interesting translational story," Dr. McInnes said. After it was first cloned in the 1980s, the cytokine family (IL-1α and IL-1β) was identified as a key player in regulating inflammatory processes. This led to the development of IL-1 inhibitors, which were tested primarily in rheumatoid arthritis, but with only modest success, he said. "But this helped investigators understand how IL-1 was synthesized and how its biology was regulated." Such insights eventually led to the development and testing of IL-1–directed agents such as anakinra and rilonacept in patients with hereditary autoinflammatory conditions and in nonhereditary inflammatory diseases associated with aberrant IL-1 signaling, including familial Mediterranean fever, Muckle-Wells syndrome, neonatal-onset multisystem inflammatory disease, and gout. In turn, the efficacy of IL-1 blockade in many of these conditions "has changed the understanding of these disorders," he said, and has prompted investigation of IL-1’s role in other diseases, including adult-onset Still’s disease and systemic juvenile idiopathic arthritis.
The IL-1 story is "elegant science" with respect to the back and forth between the lab and clinical correlates, Dr. McInnes explained. "Although it ultimately did not lead to a good rheumatoid arthritis treatment, the continual cross talk allowed relevance to be maintained and clinical pathology to direct the lab focus over time."
Despite the successes, the bench-to-bedside lag is real and sometimes inevitable given the actual and perceived separation between basic science and clinical research. One of the fundamental reasons for the schism, according to Dr. McInnes, is that scientists and clinicians often don’t share the same philosophy. "We have to be careful about generalizations but, from an academic point of view, a basic scientist is driven by trying to answer a legitimate scientific question: Why is the sky blue? Why are daffodils yellow? How does this chemical activate this target cell?" he said. "So in our area of autoimmune research, a basic scientist’s approach to cytokine research would be, ‘I’m just going to keep chasing down this molecule and its biological effects until I get the answer.’ "
The clinical investigator, on the other hand, "is trying to understand the complex matrix of events that lead to the pathology or disease state, so he or she may well rely on a basic scientific pathway for a certain period of time, but if it turns out that the pathway is really not involved in the disease pathogenesis after a period of investigation, the clinician will part company with that scientist." So while the scientist presses ahead trying to answer the basic scientific question, the clinical investigators will be driven by where the disease takes them, he said.
The separation between basic science and clinical investigation is not deliberate as much as it is circumstantial, according to rheumatologist Dr. Bruce Cronstein, director of the Clinical and Translational Science Institute at New York University. "Many bench researchers are not aware of whom to contact among their clinical colleagues to best obtain relevant collaboration, and clinicians are also unaware of whom to contact among their bench research colleagues," he said. Further, in the United States, "rewards for collaboration have not aligned well with the clinic requirements. Increasing pressure to maximize income [both personal and institutional] by seeing more patients has diminished the time for clinicians to collaborate by gathering extensive data about patients or biosamples," he said.
Also, the mechanisms for sharing credit are poorly defined and there are few mechanisms available for scientists to share grants and funding with their clinical colleagues, according to Dr. Cronstein. "Does a clinician become an author of a high-profile science paper for supplying one patient’s samples? Two? Twenty?" And although the National Institutes of Health has recently defined the coinvestigator mechanism, "even that tends to slight the noncorresponding author." Although the importance of bringing clinical and scientific colleagues together is generally understood to be critical, "finding the appropriate venues to do so is very difficult to accomplish in practice," he said.
In an ideal world, the science/clinical collaboration would be more fluid than fixed, whereby the clinical investigator studying a pathologic state "would interact with several different scientists who offer their appropriate expertise to tease out different components and different pathways," said Dr. McInnes. "One of the difficulties is that we often don’t know where the starting point is in the pathological journey. We are informed by the circumstances of where we are looking." In rheumatoid arthritis, for example, "a lot of our thinking [behind recent developments] was provoked by what the joint looks like after 5-10 years of disease. Maybe the next advance will be provoked by what the joint looks like after just a couple of days of arthritis, if we could get to that window," he said. Out of circumstance, then, the divide between basic and clinical research is process-driven, because there are many different processes at work at different phases even in one disease.
Each discipline also has its own distinct obstacles that can preclude fluid cooperation. "At the moment, the challenges of successfully translating a bit of science into an understanding of pathogenesis are gargantuan. There’s the ethics of looking at human tissue, the willingness of patient populations to participate, and just the pragmatics of whereabouts in the patient you can look," said Dr. McInnes. "If the disease is a disease of the lymph node or bone marrow, before it becomes a disease of the joint, it’s not impossible but it is quite difficult to get bits of lymph node and bone marrow from human beings, although that’s some of the real interesting science that underpins what the disease may be." Unlike cancer, in which biopsies are part of routine care and as such can more readily be used for scientific investigation, "it’s not routine to biopsy joint tissue or synovial lining in rheumatoid arthritis patients, even if it’s something you would like to do, and it’s absolutely not routine practice to biopsy lymph nodes or bone marrow. The ethics are marginal."
Finally, there are simply not enough properly trained clinician scientists, in medicine in general and rheumatology specifically, Dr. McInnes said. "When they do exist, they are often hard-pressed to meet the demands of both the scientific and clinical communities." Efforts to chip away at this particular barrier are underway, thanks to innovative collaborations within and between universities and medical schools, often supported by government and/or pharmaceutical company funding. The university-wide Clinical and Translational Science Institute at NYU, for example, in partnership with the New York City Health and Hospitals Corporation and with funding support from the NIH, is one of more than 45 such centers nationwide charged with training clinician scientists and accelerating advances from the lab to the clinic.
Similarly, Dr. McInnes is deputy director of a novel translational medicine consortium comprising the Scottish universities of Aberdeen, Dundee, Edinburgh, and Glasgow, funded by the Wellcome Trust, a global charitable foundation, and Pfizer. Called the Scottish Translational Medicine and Therapeutics Initiative (STMTI), its mandate is to create a "new cadre" of academic clinicians with expertise in translational medicine and therapeutics by offering doctoral fellowship training programs for clinicians. "The goal is to meet the ongoing need for appropriately trained clinical investigators who are backed by basic science training and expertise," he said. "Such individuals can act de facto as investigators, but also as catalysts. They can drive motivation within the basic science community and also engage the clinical community."
Importantly, however, translational activity cannot be forced, Dr. McInnes stressed. "There are many ways to achieve collaboration, but, in my opinion, they all require one thing: curiosity," he said. "If a scientist and clinical investigator share an interest in getting an answer to questions that are either identical or sufficiently close to each other, that is a fruitful platform for a successful translational partnership. For example, if a scientist can demonstrate that a pathway is present in the disease state of interest to the clinician, both the scientist and the clinician will be curious to understand how it operates within that disease state, and that will drive their work."
Additionally, translational research efforts are most likely to succeed using a team approach, Dr. McInnes said. "It is essential to recognize the different disciplines necessary to properly address the complex issues in human disease," he stressed. "The team requires people coming together and being prepared to work together and possibly change their own way of thinking a little bit."
Efforts to foster such teams – particularly across public, private, and corporate sectors – can be hindered by administrative roadblocks, including "concerns about obtaining consent from patients, intellectual property issues, and who stands to gain from knowledge obtained from any given study," Dr. Cronstein said. In addition, the "demonization" of the pharmaceutical industry in recent years "makes it difficult to collaborate with pharma without being penalized or at least having to run a major gauntlet of paperwork and review," he said. "Clearly, pharma will try to obtain a benefit for itself, but we can collaborate successfully to develop new drugs and new understanding of the diseases of the patients we care for."
At the end of the day, rheumatology is poised to gain much from such efforts. "The field of rheumatology stands to gain new understanding of the diseases that afflict our patients, new therapies for these diseases, and novel targets for development of drugs that can benefit our patients," said Dr. Cronstein. "The advantages of collaboration are overwhelming and the danger of fragmentation of efforts is critical."
The Translational Journey of JAKs
"Some of the most exciting translational research in rheumatology right now is the use of intracellular signal proteins as targets of small-molecule drugs," said Dr. McInnes. "There’s irony in this, because rheumatologists use small-molecule inhibitors all the time, including methotrexate and sulfasalazine. The difference is we’re now using new molecular entities that have been designed specifically to seek out some of the signal pathways that very elegant biology over the last 10-20 years has shown to have a role in inflammation."
The work by Dr. John O’Shea on cytokine signal transduction and the roles of Janus kinases (JAKs) and signal transducer and activator of transcription (STAT) factors in immune cell development and differentiation is an excellent example, Dr. McInnes said. The research by Dr. O’Shea, scientific director of the intramural research program at the National Institute of Arthritis and Musculoskeletal and Skin Diseases at the National Institutes of Health in Bethesda, Md., led to an NIH patent related to JAK inhibitors as a new class of immunosuppressive drugs. Through a cooperative research agreement with Pfizer, a JAK3 inhibitor (tofacitinib) is currently in phase III trials.
"These drugs are still not licensed, but the [translational] success story is that the proof of concept that JAKs are involved in the pathogenesis has been achieved," Dr. McInnes said. "When you block [the molecules], patients get better. That doesn’t mean you’ve got a drug, but it does tell you the biology pans out pretty well."
The research developments in this scenario were very much driven around the science of the JAKs initially, "then the investigators looked at people whose immune systems didn’t work very well to see if JAKs were deficient in them, which they were," Dr. McInnes explained. "The next step was to flip back to the lab to consider whether that information could be therapeutically useful, and eventually it found its way back to rheumatology practice."
"Although we still don’t know all that much about how these pathways work in rheumatoid tissue," Dr. McInnes said, "the translation journey [of JAKs] thus far is one to be admired."
Dr. McInnes and Dr. Cronstein reported no relevant conflicts of interest.
Translational medicine in rheumatology has benefitted from unsung but successful bridge-building efforts that facilitate mutually beneficial research relationships between basic scientists and clinicians working toward a common goal, and lately the fruits of such efforts have led to ground-breaking drug discoveries and therapeutic advances.
For rheumatology in particular, because the discipline encompasses multiple organ systems and diverse pathology, such translational research is critical for gaining an understanding of the complexities of the immune system and disease mechanisms and developing and testing treatment strategies, according to Dr. Iain McInnes, professor of experimental medicine and rheumatology at the University of Glasgow (Scotland). "It’s a simple concept, really. It’s the idea that we might understand more about the basic science pathways if we look at them operating in the real world. And it’s a two-way street. Basic science can in turn be informed by the clinical pathologic entity that people come to study."
Examples of "good translation stories," said Dr. McInnes, a clinician/scientist himself, include the tumor necrosis factor (TNF) blocking story and the interleukin-1 (IL-1) story. The development of the TNF blocking therapies, which have had a major impact on the treatment of rheumatoid arthritis and other inflammatory diseases, followed "a series of basic science and clinical medicine iterations, initially in the infectious diseases world and eventually in immunology and rheumatology," he said.
"The evolution of the IL-1 biology is an especially interesting translational story," Dr. McInnes acknowledged. Since it was first cloned in the 1980s, the cytokine family (IL-1a and IL-1b) was identified as a key player in regulating inflammatory processes. This led to the development of IL-1 inhibitors, which were tested primarily in rheumatoid arthritis, but with only modest success, he said. "But this helped investigators understand how IL-1 was synthesized and how its biology was regulated." Such insights eventually led to the development and testing of IL-1 directed agents such as anakinra and rilonacept in patients with hereditary autoinflammatory conditions and in nonhereditary inflammatory diseases associated with aberrant IL-1 signaling, including Mediterranean fever, Muckle-Wells syndrome, neonatal onset multisystem inflammatory disease, and gout. In turn, the efficacy of IL-1 blockade in the treatment of many of these conditions "has changed the understanding of these disorders," he said, and has led to investigation IL-1’s role in other diseases, including adult-onset Still’s disease and systemic juvenile idiopathic arthritis.
The IL-1 story is "elegant science" with respect to the back and forth between the lap and clinical correlates, Dr. McInnes explained. "Although it ultimately did not lead to a good rheumatoid arthritis treatment, the continual cross talk allowed relevance to be maintained and clinical pathology to direct the lab focus over time."
Despite the successes, the bench-to-bedside lag is real and sometimes inevitable given the actual and perceived separation between basic science and clinical research. One of the fundamental reasons for the schism, according to Dr. McInnes, is that scientists and clinicians often don’t share the same philosophy. "We have to be careful about generalizations but, from an academic point of view, a basic scientist is driven by trying to answer a legitimate scientific question: Why is the sky blue? Why are daffodils yellow? How does this chemical activate this target cell?" he said. "So in our area of autoimmune research, a basic scientist’s approach to cytokine research would be, ‘I’m just going to keep chasing down this molecule and its biological effects until I get the answer.’ "
The clinical investigator, on the other hand, "is trying to understand the complex matrix of events that lead to the pathology or disease state, so he or she may well rely on a basic scientific pathway for a certain period of time, but if it turns out that the pathway is really not involved in the disease pathogenesis after a period of investigation, the clinician will part company with that scientist." So while the scientist presses ahead trying to answer the basic scientific question, the clinician investigators will be driven by where the disease takes them, he said.
The separation between basic science and clinical investigation is not deliberate as much as it is circumstantial, according to rheumatologist Dr. Bruce Cronstein, director of the Clinical and Translational Science Institute at New York University. "Many bench researchers are not aware of whom to contact among their clinical colleagues to best obtain relevant collaboration, and clinicians are also unaware of whom to contact among their bench research colleagues," he said. Further, in the United States, "rewards for collaboration have not aligned well with the clinic requirements. Increasing pressure to maximize income [both personal and institutional] by seeing more patients has diminished the time for clinicians to collaborate by gathering extensive data about patients or biosamples," he said.
Also, the mechanisms for sharing credit are poorly defined and there are few mechanisms available for scientists to share grants and funding with their clinical colleagues, according to Dr. Cronstein. "Does a clinician become an author of a high-profile science paper for supplying one patient’s samples? Two? Twenty?" And although the National Institutes of Health has recently defined the coinvestigator mechanism, "even that tends to slight the noncorresponding author." Although the importance of bringing clinical and scientific colleagues together is generally understood to be critical, "finding the appropriate venues to do so is very difficult to accomplish in practice," he said.
In an ideal world, the science/clinical collaboration would be more fluid than fixed, whereby the clinical investigator studying a pathologic state "would interact with several different scientists who offer their appropriate expertise to tease out different components and different pathways," said Dr. McInnes. "One of the difficulties is that we often don’t know where the starting point is in the pathological journey. We are informed by the circumstances of where we are looking." In rheumatoid arthritis, for example, "a lot of our thinking [behind recent developments] was provoked by what the joint looks like after 5-10 years of disease. Maybe the next advance will be provoked by what the joint looks like after just a couple of days of arthritis, if we could get to that window," he said. Out of circumstance, then, the divide between basic and clinical research is process driven because there are many different processes at work at different phases even in one disease."
Each discipline also has its own distinct obstacles that can preclude fluid cooperation. "At the moment, the challenges of successfully translating a bit of science into an understanding of pathogenesis are gargantuan. There’s the ethics of looking at human tissue, the willingness of patient populations to participate, and just the pragmatics of whereabouts in the patient you can look," said Dr. McInnes. "If the disease is a disease of the lymph node or bone marrow, before it becomes a disease of the joint, it’s not impossible but it is quite difficult to get bits of lymph node and bone marrow from human beings, although that’s some of the real interesting science that underpins what the disease may be." Unlike cancer, in which biopsies are part of routine care and as such can more readily be used for scientific investigation, "it’s not routine to biopsy joint tissue or synovial lining in rheumatoid arthritis patients, even if it’s something you would like to do, and it’s absolutely not routine practice to biopsy lymph nodes or bone marrow. The ethics are marginal."
Finally, there are simply not enough properly trained clinician scientists, in medicine in general and rheumatology specifically, Dr. McInnes said. "When they do exist, they are often hard-pressed to meet the demands of both the scientific and clinical communities." Efforts to chip away at this particular barrier are underway, thanks to innovative collaborations, within and between universities and medical schools, often supported by government and/or pharmaceutical company funding. The university-wide Clinical and Translational Science Institute at NYU, for example, in partnership with the New York City Health and Hospitals Corporation and with funding support from the NIH, is one of more than 45 such centers nationwide charged with the task of training clinician scientists and accelerating advances from the lab to the clinic.
Similarly, Dr. McInnes is deputy director of a novel translational medicine consortium comprising the Scottish universities of Aberdeen, Dundee, Edinburgh, and Glasgow, funded by the Wellcome Trust, a global charitable foundation, and Pfizer. Called the Scottish Translational Medicine and Therapeutics Initiative (STMTI), its mandate is to create a "new cadre" of academic clinicians with expertise in translational medicine and treatment by offering doctoral fellowship training programs for clinicians. "The goal is to meet the ongoing need for appropriately trained clinical investigators who have the basic science understanding who are backed by basic science training and expertise," he said. "Such individuals can act de facto as investigators, but also as catalysts. They can drive motivation within the basic science community and also engage the clinical community."
Importantly, however, translational activity cannot be forced, Dr. McInnes stressed. "There are many ways to achieve collaboration, but, in my opinion, they all require one thing: curiosity," he said. "If a scientist and clinical investigator share an interest in getting an answer to questions that are either identical or sufficiently close to each other, that is a fruitful platform for a successful translational partnership. For example, if a scientist can demonstrate that a pathway is present in the disease state of interest to the clinician, both the scientist and the clinician will be curious to understand how it operates within that disease state, and that will drive their work."
Additionally, translational research efforts are most likely to succeed using a team approach, Dr. McInnes said. "It is essential to recognize the different disciplines necessary to properly address the complex issues in human disease," he stressed. "The team requires people coming together and being prepared to work together and possibly change their own way of thinking a little bit."
Efforts to foster such teams – particularly across public, private, and corporate sectors – can be hindered by administrative roadblocks, including concerns about obtaining consent from patients, intellectual property issues, and questions about who stands to gain from knowledge obtained from any given study, Dr. Cronstein said. In addition, the "demonization" of the pharmaceutical industry in recent years "makes it difficult to collaborate with pharma without being penalized or at least having to run a major gauntlet of paperwork and review," he said. "Clearly, pharma will try to obtain a benefit for itself, but we can collaborate successfully to develop new drugs and new understanding of the diseases of the patients we care for."
At the end of the day, rheumatology is poised to gain much from such efforts. "The field of rheumatology stands to gain new understanding of the diseases that afflict our patients, new therapies for these diseases, and novel targets for development of drugs that can benefit our patients," said Dr. Cronstein. "The advantages of collaboration are overwhelming and the danger of fragmentation of efforts is critical."
The Translational Journey of JAKs
"Some of the most exciting translational research in rheumatology right now is the use of intracellular signal proteins as targets of small-molecule drugs," said Dr. McInnes. "There’s irony in this, because rheumatologists use small-molecule inhibitors all the time, including methotrexate and sulfasalazine. The difference is we’re now using new molecular entities that have been designed specifically to seek out some of the signal pathways that very elegant biology over the last 10-20 years has shown to have a role in inflammation."
The work by Dr. John O’Shea on cytokine signal transduction and the roles of janus kinases (JAKs) and signal transducer and activator of transcription (STAT) factors in immune cell development and differentiation is an excellent example, Dr. McInnes said. The research by Dr. O’Shea, scientific director of the intramural research program at the National Institute of Arthritis and Musculoskeletal and Skin Diseases at the National Institutes of Health in Bethesda, Md., led to an NIH patent covering JAK inhibitors as a new class of immunosuppressive drugs. Through a cooperative research agreement with Pfizer, a JAK3 compound (tofacitinib) is currently in phase III trials.
"These drugs are still not licensed, but the [translational] success story is that the proof of concept that JAKs are involved in the pathogenesis has been achieved," Dr. McInnes said. "When you block [the molecules], patients get better. That doesn’t mean you’ve got a drug, but it does tell you the biology pans out pretty well."
The research developments in this scenario were very much driven around the science of the JAKs initially, "then the investigators looked at people whose immune systems didn’t work very well to see if JAKs were deficient in them, which they were," Dr. McInnes explained. "The next step was to flip back to the lab to consider whether that information could be therapeutically useful, and eventually it found its way back to rheumatology practice."
"Although we still don’t know all that much about how these pathways work in rheumatoid tissue," Dr. McInnes said, "the translation journey [of JAKs] thus far is one to be admired."
Dr. McInnes and Dr. Cronstein reported no relevant conflicts of interest.
Oral Vitamin D May Avert Lupus Inflammation: Vitamin D supplements lower levels of inflammatory and hemostatic biomarkers.
The science supporting vitamin D supplementation in lupus patients is catching up to the recommendation that all patients with the autoimmune disease increase their intake of the fat-soluble secosteroids.
Findings from a new study by Dr. Suzan Abou-Raya, professor of geriatric medicine at the University of Alexandria (Egypt), and her associates demonstrate that there is a high prevalence of vitamin D deficiency associated with an increased inflammatory burden and thrombophilic state in patients with systemic lupus erythematosus (SLE). The findings also suggest that oral vitamin D supplementation ameliorates chronic inflammatory and hemostatic markers in this patient group.
The use of supplementary calcium and vitamin D is routinely recommended for SLE patients to help minimize the bone loss and increased risk of developing osteoporosis associated with the disease and its treatment. Beyond supporting bone and mineral homeostasis, “vitamin D is now recognized as having additional pleiotropic roles,” according to Dr. Abou-Raya. “We've learned that it has potent immunomodulatory properties that have promoted its potential use in the treatment of autoimmune conditions, including lupus.”
The study was designed to evaluate vitamin D status in lupus patients and to assess changes in disease-related inflammatory and hemostatic markers before and after vitamin D supplementation.
To do this, Dr. Abou-Raya and her fellow researchers conducted a randomized, placebo-controlled trial comprising 148 males and premenopausal females who fulfilled the American College of Rheumatology (ACR) classification criteria for SLE. Also enrolled in the study were 75 lupus-free adults who served as controls and who matched the cases in age, sex, ethnicity, and body mass index.
Individuals with other inflammatory disorders and those taking supplemental vitamin D at the time of the study were excluded from participation.
Study patients were randomized in a 1:1 fashion to receive either 2,000 IU per day of oral cholecalciferol (vitamin D3) or placebo for 6 months together with standard SLE treatment, Dr. Abou-Raya said.
Before and after 6 months of vitamin D supplementation, the investigators evaluated disease activity using the SLE disease activity index (SLEDAI), levels of serum 25-hydroxyvitamin D (25[OH]D) via DiaSorin's Liaison immunoassay, levels of proinflammatory cytokines interleukin-1 (IL-1), IL-6, IL-18, tumor necrosis factor (TNF)–alpha, C-reactive protein (CRP), and the hemostatic markers fibrinogen and von Willebrand factor (vWF).
Individuals with 25(OH)D levels of 10-30 ng/mL were classified as having vitamin D insufficiency and those with levels lower than 10 ng/mL were considered vitamin D deficient, she noted.
With respect to baseline demographics, the mean age of the SLE patients was 38.8 years and the mean disease duration was 5.2 years. The mean baseline vitamin D level in the SLE patients was 19.8 ng/mL, which was significantly lower than the mean 28.7 ng/mL in the control group, Dr. Abou-Raya reported. The baseline levels of the inflammatory and hemostatic markers were significantly higher in the SLE patients. “The overall prevalence of vitamin D insufficiency and deficiency, respectively, was 69% and 39%,” she said.
At 6 months, “there was a significant decrease in levels of inflammatory and hemostatic markers in lupus patients who were supplemented with vitamin D” compared with patients who were given placebo together with ongoing therapy, Dr. Abou-Raya reported at the annual European Congress of Rheumatology in London.
After multivariate adjustment, the investigators observed a negative correlation between vitamin D levels and IL-1, IL-6, IL-18, TNF-alpha, CRP, fibrinogen, and vWF, “and lower vitamin D levels were associated with significantly higher SLEDAI scores,” she said.
The results suggest that hypovitaminosis D contributes to a chronic inflammatory and thrombophilic state in SLE patients, said Dr. Abou-Raya. “The findings support the routine recommendation for oral vitamin D supplementation in these patients,” she said.
Dr. Abou-Raya disclosed having no financial conflicts of interest related to her presentation.
Two Questions Best Surveys for Sleep Apnea in Pregnancy
Major Finding: In screening for sleep apnea via prepregnancy BMI plus self-reported snoring, sensitivity was 74% and specificity was 59%, compared with 35% and 69% for standard screening measures.
Data Source: A prospective study comparing the accuracy of standard sleep apnea screening measures to a two-question approach based on prepregnancy BMI and self-reported snoring in 86 women with high-risk pregnancies.
Disclosures: Dr. Facco reported having no financial conflicts.
MINNEAPOLIS – A two-question screening tool for sleep apnea yielded more accurate results than did standard screening tools, a study has shown.
“Using prepregnancy body mass index and self-reported snoring had a much better sensitivity than the conventional methods, without sacrificing much specificity,” Dr. Francesca L. Facco reported at the meeting.
In a cohort of pregnant women who completed a sleep survey and participated in an overnight sleep evaluation, the two-question screening approach yielded more accurate results than did standard screening tools, including the Berlin Questionnaire (BQ) and the Epworth Sleepiness Scale (ESS), she said.
To compare the screening approaches, Dr. Facco of the department of ob.gyn. at Northwestern University, Chicago, and her colleagues recruited 86 high-risk pregnant women, including those with chronic hypertension, pregestational diabetes, obesity, or a prior history of preeclampsia, to complete the sleep survey, which consisted of the BQ and ESS measures.
The women also underwent an overnight sleep evaluation using Itamar Medical's Watch-PAT100 (WP100), a wrist-mounted, ambulatory device designed to diagnose sleep apnea, Dr. Facco said.
For this study, sleep apnea was defined as an apnea-hypopnea index score of five or more episodes of disturbed sleep per hour.
Patients' prepregnancy BMI and self-reported snoring status were recorded as well.
“Patients with a prepregnancy BMI of 25 [kg/m
The investigators assessed the performance of the BQ, ESS, and two-question measures relative to the data acquired from the WP100 devices using receiver operating characteristic (ROC) curves, and determined that the two-question approach performed better than the BQ alone, the BQ and ESS combined, and the null hypothesis, according to Dr. Facco.
The sensitivity of the combined BQ and ESS was 35% and the specificity was 69%, compared with 74% and 59%, respectively, for the two-question approach. “The results suggest that standard screening tools for sleep apnea, which have a high sensitivity and specificity in nonpregnant individuals, are inadequate for the assessment of sleep apnea in pregnancy,” Dr. Facco said.
Modifications that take into account the predictive value of prepregnancy BMI and snoring are warranted, she said, stressing that additional studies are needed to design and test the most appropriate measure for sleep apnea screening in pregnancy.
Because sleep apnea may be associated with complications during pregnancy and with adverse pregnancy outcomes, screening for the disorder should be considered for all pregnant women, and particularly those who are considered to be at high risk, Dr. Facco noted.
From the Annual Meeting of the Associated Professional Sleep Societies
More Adverse Outcomes in Severe Sleep Apnea
Major Finding: In the no, mild to moderate, and severe sleep-disordered breathing groups, the composite adverse pregnancy outcome rates were 18.1%, 23.5%, and 38.5%, respectively.
Data Source: A retrospective cohort study to assess the association between sleep-disordered breathing and adverse pregnancy outcomes in 150 women who had a delivery and an in-laboratory polysomnogram between January 2000 and June 2009.
Disclosures: Dr. Facco said she had no relevant financial disclosures.
MINNEAPOLIS – Women with sleep-disordered breathing have an increased likelihood of adverse pregnancy outcomes, but it is unclear whether the heightened risk can be attributed primarily to the breathing disorder or to obesity, reported lead investigator Dr. Francesca L. Facco.
Sleep-disordered breathing (SDB) occurs in approximately 2% of the female population and has been linked to cardiovascular and metabolic morbidities and mortality in nonpregnant populations, said Dr. Facco of the department of ob.gyn. at Northwestern University in Chicago. “There is some evidence that pregnancy may precipitate or exacerbate the condition, and that there may be a relationship between intrauterine fetal growth retardation and maternal preeclampsia.” Unfortunately, “few studies have examined the relationship between abnormal respiratory patterns or quality of ventilation during sleep in pregnancy and adverse obstetrical outcomes, which is what we sought to do in this investigation,” she said at the meeting.
Toward this end, Dr. Facco and her colleagues conducted a retrospective cohort study, using ICD-9 codes to identify women who had a delivery and an in-laboratory polysomnogram at their institution between January 2000 and June 2009. They reviewed the medical charts of 150 patients and abstracted data on demographics, sleep study results, and pregnancy outcomes. “In women with more than one pregnancy, we looked at the first pregnancy with outcome information,” she explained.
The study's primary outcome was a composite adverse pregnancy outcome, defined as pregnancy-induced hypertension, gestational diabetes, or early preterm birth (at or before 34 weeks' gestation), Dr. Facco said.
The apnea-hypopnea index (AHI) was used to classify the presence and degree of SDB, with an AHI of fewer than 5 breathing pauses per hour indicating no SDB, an AHI of 5-14.9 pauses per hour indicating mild to moderate SDB, and an AHI of 15 or more pauses per hour indicating severe SDB, she said. The associations between SDB and adverse pregnancy outcomes were evaluated using a chi-square test for trend.
Of the 150 women included in the investigation, 61% were nulliparous at the time of their first documented delivery at the study hospital, 72% had undergone a polysomnogram within 3 years of their delivery, and 86.7% were overweight or obese (defined as a body mass index of 25 kg/m
An analysis of the findings demonstrated a significant association between SDB and adverse pregnancy outcome.
“The incidence of adverse pregnancy outcomes was highest in women with severe sleep apnea,” she said, noting that the increased prevalence was principally driven by a higher incidence of gestational diabetes and early preterm birth.
In the no, mild to moderate, and severe SDB groups, respectively, the researchers found the following:
▸ The composite adverse pregnancy outcome rates were 18.1%, 23.5%, and 38.5%.
▸ The gestational diabetes rates were 0%, 5.9%, and 11.5%.
▸ The preterm birth rates were 4.7%, 5.9%, and 15.4%.
▸ The pregnancy-induced hypertension rates were 16.9%, 17.6%, and 15.4%.
“Gestational diabetes has been independently associated with maternal obesity, as have preterm birth and low birth weight,” Dr. Facco said in an interview. “In this population, nearly 87% of the women who had [SDB] were also obese, making it an obvious confounding factor.”
Further prospective studies are needed to assess the independent impact of SDB on maternal and neonatal health, and if the independent association is confirmed, additional studies on the role of treatment in pregnancy would be needed, Dr. Facco said.
Major Finding: In the no, mild, and moderate to severe sleep disordered breathing groups, the composite adverse pregnancy outcome rates were 18.1%, 23.5%, and 38.5%, respectively.
Data Source: A retrospective cohort study to assess the association between sleep-disordered breathing and adverse pregnancy outcomes in 150 women who had a delivery and an in-laboratory polysomnogram between January 2000 and June 2009.
Disclosures: Dr. Facco said she had no relevant financial disclosures.
MINNEAPOLIS – Women with sleep-disordered breathing have an increased likelihood of adverse pregnancy outcomes, but it is unclear whether the heightened risk can be attributed primarily to the breathing disorder or to obesity, reported lead investigator Dr. Francesca L. Facco.
Sleep disordered breathing (SDB) occurs in approximately 2% of the female population and has been linked to cardiovascular and metabolic morbidities and mortality in nonpregnant populations, said Dr. Facco of the department of ob.gyn. at Northwestern University in Chicago. “There is some evidence that pregnancy may precipitate or exacerbate the condition, and that there may be a relationship between intrauterine fetal growth retardation and maternal preeclampsia.” Unfortunately, “few studies have examined the relationship between abnormal respiratory patterns or quality of ventilation during sleep in pregnancy and adverse obstetrical outcomes, which is what we sought to do in this investigation” she said at the meeting.
Toward this end, Dr. Facco and her colleagues conducted a retrospective cohort study, using ICD-9 codes to identify women who had a delivery and an in-laboratory polysomnogram at their institution between January 2000 and June 2009. They reviewed the medical charts of 150 patients and abstracted data on demographics, sleep study results, and pregnancy outcomes. “In women with more than one pregnancy, we looked at the first pregnancy with outcome information,” she explained.
The study's primary outcome was adverse pregnancy outcome, defined as a composite of pregnancy-induced hypertension, gestational diabetes, or early preterm birth (at or before 34 weeks' gestation), Dr. Facco said.
The apnea-hypopnea index (AHI) was used to classify the presence and degree of SDB, with an AHI of fewer than 5 breathing pauses per hour indicating no SDB, an AHI of 5-14.9 pauses per hour indicating mild SDB, and an AHI of 15 or more pauses per hour indicating moderate to severe SDB, she said. The associations between SDB and adverse pregnancy outcomes were evaluated using a chi-square test for trend.
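The AHI cutoffs described above amount to a simple threshold classification. A minimal sketch follows; the function name and return labels are illustrative choices, not the study's own terminology:

```python
def classify_sdb(ahi: float) -> str:
    """Classify sleep-disordered breathing (SDB) severity from the
    apnea-hypopnea index (breathing pauses per hour), using the
    cutoffs reported in the study: <5 none, 5-14.9 mild,
    >=15 moderate to severe."""
    if ahi < 0:
        raise ValueError("AHI cannot be negative")
    if ahi < 5:
        return "none"
    if ahi < 15:
        return "mild"
    return "moderate/severe"
```

For example, a polysomnogram showing 14.9 events per hour would still fall in the mild group, while 15.0 crosses into the moderate-to-severe group.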
Of the 150 women included in the investigation, 61% were nulliparous at the time of their first documented delivery at the study hospital, 72% had undergone a polysomnogram within 3 years of their delivery, and 86.7% were overweight or obese (defined as a body mass index of 25 kg/m2 or greater).
An analysis of the findings demonstrated a significant association between SDB and adverse pregnancy outcome.
“The incidence of adverse pregnancy outcomes was highest in women with severe sleep apnea,” she said, noting that the increased prevalence was principally driven by a higher incidence of gestational diabetes and early preterm birth.
In the no, mild, and moderate to severe SDB groups, respectively, researchers found the following:
▸ The composite adverse pregnancy outcome rates were 18.1%, 23.5%, and 38.5%.
▸ The gestational diabetes rates were 0%, 5.9%, and 11.5%.
▸ The preterm birth rates were 4.7%, 5.9%, and 15.4%.
▸ The pregnancy-induced hypertension rates were 16.9%, 17.6%, and 15.4%.
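The chi-square test for trend used in this analysis is commonly implemented as the Cochran-Armitage statistic. The sketch below is illustrative only: the group counts are hypothetical values chosen to approximate the reported composite rates, since the study's actual group denominators are not given in this report:

```python
import math

def trend_test(events, totals, scores=None):
    """Cochran-Armitage chi-square test for trend across ordered groups.
    events[i] / totals[i] is the outcome rate in group i; group scores
    default to 0, 1, 2, ... Returns (z, two_sided_p)."""
    if scores is None:
        scores = list(range(len(events)))
    n = sum(totals)
    pbar = sum(events) / n  # pooled outcome rate under the null
    st_m = sum(t * m for t, m in zip(scores, totals))
    # Score statistic: observed score-weighted events minus expected
    u = sum(t * e for t, e in zip(scores, events)) - pbar * st_m
    # Variance of the statistic under the null hypothesis of no trend
    var = pbar * (1 - pbar) * (
        sum(t * t * m for t, m in zip(scores, totals)) - st_m ** 2 / n)
    z = u / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# Hypothetical counts approximating the reported 18.1%, 23.5%, 38.5%
z, p = trend_test(events=[19, 4, 10], totals=[107, 17, 26])
```

With these illustrative counts the test yields a positive trend statistic with p below 0.05, consistent with the significant association the investigators describe.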
“Gestational diabetes has been independently associated with maternal obesity, as has preterm birth and low birth weight,” Dr. Facco said in an interview. “In this population, nearly 87% of the women who had [SDB] were also obese, making it an obvious confounding factor.”
Further prospective studies are needed to assess the independent impact of SDB on maternal and neonatal health, and if the independent association is confirmed, additional studies on the role of treatment in pregnancy would be needed, Dr. Facco said.
From the Annual Meeting of the Associated Professional Sleep Societies.
Sleep Debt Takes Toll on Health, Relationships
MINNEAPOLIS – Sleep is in short supply, thanks to our “24-hour society” in which trading sleep for work or play is commonplace and sleep deprivation is worn as a badge of honor, according to Dr. Michel Cramer Bornemann, codirector of the Minnesota Regional Sleep Disorders Center at Hennepin County Medical Center in Minneapolis.
Not only have we become accustomed to trading sleep for work, Dr. Cramer Bornemann said at the meeting, but “lack of sleep is synonymous with hard work or achievement, when really it can impede both.” In fact, the effects of insufficient shut-eye extend across multiple domains, according to a collection of independent studies presented at this year's meeting. For example, sleep loss was linked to the development or exacerbation of symptoms of ADHD in early childhood, an individual's genetic risk of obesity, inhibitory response to images of high-calorie foods, and even marital discontent.
ADHD and Sleep Loss
In a study designed to tease out the complex relationship between sleep problems – particularly falling asleep and staying asleep – and the development or worsening of inattention and hyperactivity and impulsivity in children and adolescents diagnosed with ADHD, Erika Gaylor, Ph.D., of SRI International in Menlo Park, Calif., and her colleagues analyzed data from the preschool and kindergarten waves of the Early Childhood Longitudinal Study–Birth Cohort. The cohort comprises a representative sample of approximately 6,860 children and their families living in the United States.
The investigators calculated total nighttime sleep duration based on parent-reported bedtime and wake time. “We performed two sets of regression analyses to identify whether sleep duration in preschool-age children predicts attention and hyperactivity at kindergarten entry and [whether] attention and hyperactivity symptoms at preschool predict sleep duration at kindergarten,” she explained.
Controlling for the outcome of interest at the preschool time point, sex, ethnicity, and family income, researchers found that less sleep at preschool significantly predicted worse scores on parent-reported hyperactivity and attention at kindergarten, whereas parent-reported hyperactivity and attention at preschool did not predict sleep duration at kindergarten, Dr. Gaylor stated.
The results extend those of a previous study in which she and her colleagues determined that having a consistent bedtime was the most reliable predictor of positive developmental outcomes by age 4 years, she noted.
The Obesity Link
In a twin study designed to look more closely at the previously reported link between short sleep duration and elevated body mass index, Dr. Nathaniel Watson of the University of Washington in Seattle and his colleagues determined that short sleep may potentiate an underlying genetic mechanism for obesity.
The investigators examined whether sleep duration modified genetic and environmental influences on BMI in 1,811 pairs of twins drawn from the population-based University of Washington Twin Registry. The mean age of the study participants was 36.6 years. The participants provided self-reported information on height and weight, which was used to calculate BMI, as well as on habitual sleep duration, Dr. Watson said. The mean BMI of the group was 25.4 kg/m2.
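The BMI values here were derived from self-reported height and weight in the standard way: weight in kilograms divided by the square of height in meters. A minimal sketch:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError("weight and height must be positive")
    return weight_kg / (height_m ** 2)

# For instance, 75 kg at 1.72 m gives a BMI of about 25.4,
# just above the conventional overweight threshold of 25.
value = bmi(75, 1.72)
```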
Using behavioral genetic interaction models, the investigators found significant relationships between habitual sleep duration and genetic and shared environmental influences on BMI. Specifically, longer sleep duration was associated with decreased BMI, Dr. Watson reported. “When sleep duration was 7 hours, the heritability of BMI was more than double [70%] that observed when sleep duration was 9 hours [33%],” he said, noting that “there appears to be something about short sleep that creates a permissive environment for expression of obesity-related genes.”
The findings are an important addition to the existing body of research on the relationship between sleep duration and BMI, Dr. Watson said.
A connection between sleepiness and lack of self-control with respect to dietary choices may also contribute to the sleep loss/obesity equation, according to a study presented by William Killgore, Ph.D., of Harvard Medical School in Boston.
To test their hypothesis that greater daytime sleepiness correlates with reduced prefrontal cortex response during passive viewing of images of high-calorie foods, Dr. Killgore and his colleagues analyzed the functional MRI scans of 12 healthy adults obtained while they were shown pictures of high-calorie foods, low-calorie foods, and control images of plants and rocks. The researchers correlated the fMRI findings with subjects' self-reported daytime sleepiness, assessed via the Epworth Sleepiness Scale (ESS).
“Greater ESS scores correlated with reduced activation in the dorsolateral prefrontal cortex when high-calorie vs. low-calorie food images were perceived,” Dr. Killgore said, noting that this region is typically implicated in attention and inhibitory processing. Greater daytime sleepiness was also associated with increased activation in the right parietal and inferior temporal cortex, he said.
The findings suggest the possibility that sleepiness may affect an individual's inhibitory control when he or she is exposed to highly appetizing, high-calorie foods, according to Dr. Killgore.
Marital Discord
Although most sleep research focuses on the individual, the fact that sleep problems and relationship trouble often co-occur led Wendy M. Troxel, Ph.D., of the University of Pittsburgh, and her colleagues to consider the dyadic nature of sleep in a recent study. The investigators examined the bidirectional links between nightly sleep and daily marital interactions among 35 healthy married couples (mean age, 32 years) by correlating the actigraph results for sleep latency, wakefulness after sleep onset, and total sleep time of each partner over 10 nights, with daily self-reported positive and negative marital interactions assessed via electronic diaries during the same period.
“We found stronger evidence linking sleep to the next day's marital interactions, rather than the reverse direction,” Dr. Troxel reported. Specifically, wives' prolonged sleep latency significantly predicted their own and their husbands' reports of more negative and less positive interactions the next day, even after adjustment for depressive symptoms, whereas the quality of marital interactions did not appear to predict sleep measures in women, she said. The sleep quality of husbands did not appear to affect their own or their wives' reports of next-day marital interactions; however, for men, a higher level of positive marital interactions predicted shorter total sleep duration the next night.
The findings suggest, perhaps, that “men are more likely to repress their feelings or not be as aware” of mood changes, whereas women are more likely to express their emotional concerns and to “drive the emotional climate of the relationship,” Dr. Troxel said. The results highlight the potential interpersonal consequences of sleep disorders, and as such may have important clinical implications, she said.
The presenters reported no financial conflicts of interest relevant to their respective presentations.
Genetic Discovery Shows Pathway of ESRD: Variants in APOL1 gene may explain fourfold higher rate of nondiabetic ESRD in African Americans
BOSTON – The recent identification of two gene mutations in a cohort of African Americans with nondiabetic kidney disease helps explain the disproportionately higher rates of kidney disease in this population and represents a disease-mechanism pathway that could lead to new treatments and possibly a cure, Dr. David J. Friedman said at the meeting.
Dr. Friedman of Beth Israel Deaconess Medical Center, Boston, and his colleagues recently reported the association between two independent variants in the apolipoprotein L1 (APOL1) gene on chromosome 22 and focal segmental glomerulosclerosis (FSGS) and hypertension-attributed end-stage kidney disease in blacks (Science 2010;329:841-5). Not only do the investigators believe that APOL1 is very important to the understanding of nondiabetic renal disease in blacks, “we think the variants in the gene are among the most powerful that have been discovered to date,” Dr. Friedman stressed.
The disparity between the rates of end-stage renal disease (ESRD) in blacks and whites in the United States is “incredible,” Dr. Friedman stated, noting that the incidence rate is four to five times higher in blacks, according to the 2010 United States Renal Data System annual report. “People have been debating for decades whether the major cause of this disparity is genes or environment. No doubt both are important, but given how strongly this phenotype travels in families, I think we can say with certainty that genes play an important role.”
The APOL1 discovery came on the heels of an earlier association linking FSGS, nondiabetic ESRD, and HIV nephropathy in blacks with the MYH9 gene located on the same chromosome, Dr. Friedman explained. “This was quite striking, because we used to think of the three conditions as entirely different diseases, yet each one had exactly the same locus.”
Despite the strong association and several years spent looking for causal mutations using fine mapping sequences, the causal variants remained elusive until Dr. Friedman and his colleagues approached the problem from a different perspective. “We asked, 'How could any disease gene that's this deleterious become so common in a population?' We assumed there was something in this [genetic region] that was beneficial once upon a time to human evolution in Africa,” he said. Using mathematical techniques, “we realized that because of the effects of natural selection, the disease gene interval was much larger than anyone thought and probably contained at least five genes.” Consequently, the investigators tested new variants in other genes for association with renal disease in African Americans, looking specifically for variants that had not yet been documented, he said.
In a cohort of 205 African Americans with biopsy-proven FSGS and no family history of the disease and 180 African American control subjects, “we saw that variants in the neighboring APOL1 gene were much more strongly associated with renal disease, and unlike the MYH9 variants, which were located in regions of the gene that did not encode for protein, the APOL1 variants were protein-coding sequences.” The investigators determined that the top two variants almost always co-occurred on the same chromosome and each changed an amino acid somewhere on the protein. “We called this the g1 risk allele, and when we controlled for it, a new variant popped up, which we called the g2 allele,” he said. Controlling for both the g1 and g2 alleles, “the entire association of this region disappeared and there was no signal left for MYH9.”
The investigators also tested the genetic variants in hypertension-associated ESRD in a larger cohort of 1,030 African Americans with the disease and 1,025 geographically matched control subjects and found that the same two variants had a tremendous impact on the development of the disease. “When combined together, the P value was on the order of 10 to the minus 60, or 35 orders of magnitude greater than the very best MYH9 [result],” Dr. Friedman said. Surprisingly, he noted, we found that these disease variants follow a recessive pattern and together the odds ratio was on the order of 7-10, while the very largest effect sizes of the common variants that affect hypertension or diabetes will confer odds ratios of about 1.4-1.5.”
The APOL1 gene and these variants “tend to fall into a different category that we've all been familiar with in the past,” Dr. Friedman explained. “Most disease variants are either very rare with powerful effects or common with relatively modest effects. The APOL1 variants have a surprising combination of effect size and frequency such that 50%-60% of African Americans carry g1 and/or g2 risk alleles, and 50% are risk homozygous, meaning they are in the highest risk for kidney disease: That translates into about 3.5 million individuals.” Further, while the odds ratios for the more common forms of nondiabetic kidney disease in this population range from 7 to 10, “we're starting to see odds ratios in the range of 30 for diseases like HIV nephropathy.”
To determine how much of nondiabetic kidney disease can be explained by the genetic variants, the investigators reviewed data from the prospective population-based Dallas Heart Study and compared the outcomes of European American and Caucasian patients, in whom the renal risk alleles are essentially nonexistent, with those of African Americans with zero or one risk allele and those with both risk alleles. Looking at urine protein levels, an indicator of renal microvascular disease, “we found that black individuals with zero or one copy of the risk allele had rates more similar to whites than to blacks with two alleles,” Dr. Friedman reported.
The results were even more striking for actual hard measures of renal function, he said. “Rates of chronic kidney disease or impaired renal function, indicated by glomerular filtration rates less than 60 mL/min per 1.73 m
In their preliminary review of the data, they couldn't “tell any difference between African Americans with zero or one copy of the allele and Caucasians, but African Americans with two renal risk alleles have at least 10-fold increase in kidney failure,” Dr. Friedman stated. “To our surprise, this really only applies to nondiabetic kidney disease. The alleles have essentially no effect that we can detect on diabetic renal disease.”
This realization led the investigators to revisit the issue of natural selection. It turns out, according to Dr. Friedman, “APOL1 is the genetic source for the immunity factor that protected people from African sleeping sickness, a parasitic infection caused by Trypanosoma brucei gambiense.” Similar to selection for the gene variants associated with sickle cell anemia, he explained, “inheriting one copy of the APOL1 gene risk variant provides protection from the parasite, while having two copies seems to increase the risk of kidney disease up to 10-fold.” Through natural selection, as more people survived African sleeping sickness, the percentage of the population with kidney disease risk variants increased, he said.
The investigators are currently studying the risk variants intensively to figure out how they work. “We think they may differentially regulate processes such as apoptosis, and cell repair may function as a chloride channel in mammalian systems in the same way it does in lysosomes and may affect biological function,” Dr. Friedman hypothesized.
In addition to exploring the underlying mechanisms, the potential clinical value of the genetic discovery is also being considered. “This may help us improve risk stratification,” Dr. Friedman said. “It's one thing to say that African Americans have a fourfold increased risk of kidney disease. It's better to find the tag SNP [single nucleotide polymorphism] that will tell if an individual might have an increased risk. If you can actually find the causal variant, then you can potentially predict with much higher success who is and is not at risk for kidney failure,” he stated. “The problem is that it works pretty well in Western African populations, such as Nigerians, but not as well in East Africans, such as Ethiopians.”
One of the main questions that Dr. Friedman and his colleagues currently are pursuing is whether hypertension causes kidney disease in these at-risk individuals or whether hypertension is the result of primary renal vascular disease. “To us, the fact that the very same genetic variants cause hypertension-associated ESRD and FSGS, a primary renal microvascular disease, suggests that these may be the same disease process that we are either catching at different stages or that have different modifiers, and that hypertension in these patients may just be a symptom and not a cause of kidney failure,” Dr. Friedman said.
A cure for nondiabetic kidney disease, which accounts for more than $8.2 billion annually in dialysis coasts, may directly result from the APOL1 finding, Dr. Friedman said. “It's that important.”
Dr. Friedman reported no financial conflicts of interested related to his presentation.
BOSTON – The recent identification of two gene mutations in a cohort of African Americans with nondiabetic kidney disease helps explain the disproportionately higher rates of kidney disease in this population and represents a disease-mechanism pathway that could lead to new treatments and possibly a cure, Dr. David J. Friedman said at the meeting.
Dr. Friedman of Beth-Israel Deaconess Medical Center, Boston, and his colleagues recently reported the association between two independent variants in the apolipoprotein L1 (APOL1) gene on chromosome 22 and focal segmental glomerulosclerosis (FSGS) and hypertension-attributed end-stage kidney disease in blacks (Science 2010;329:841-5). Not only do the investigators believe that APOL1 is very important to the understanding of nondiabetic renal disease in blacks, “we think the variants in the gene are among the most powerful that have been discovered to date,” Dr. Friedman stressed.
The disparity between the rates of end-stage renal disease (ESRD) in blacks and whites in the United States is “incredible,” Dr. Friedman stated, noting that the incidence rate is four to five times higher in blacks, according to the 2010 United States Renal Data System annual report. “People have been debating for decades whether the major cause of this disparity is genes or environment. No doubt both are important, but given how strongly this phenotype travels in families, I think we can say with certainty that genes play an important role.”
The APOL1 discovery came on the heels of an earlier association linking FSGS, nondiabetic ESRD, and HIV nephropathy in blacks with the MYH9 gene located on the same chromosome, Dr. Friedman explained. “This was quite striking, because we used to think of the three conditions as entirely different diseases, yet each one had exactly the same locus.”
Despite the strong association and several years spent looking for causal mutations using fine mapping sequences, the causal variants remained elusive until Dr. Friedman and his colleagues approached the problem from a different perspective. “We asked, 'How could any disease gene that's this deleterious become so common in a population?' We assumed there was something in this [genetic region] that was beneficial once upon a time to human evolution in Africa,” he said. Using mathematical techniques, “we realized that because of the effects of natural selection, the disease gene interval was much larger than anyone thought and probably contained at least five genes.” Consequently, the investigators tested new variants in other genes for association with renal disease in African Americans, looking specifically for variants that had not yet been documented, he said.
In a cohort of 205 African Americans with biopsy-proven FSGS and no family history of the disease and 180 African American control subjects, “we saw that variants in the neighboring APOL1 gene were much more strongly associated with renal disease, and unlike the MYH9 variants, which were located in regions of the gene that did not encode for protein, the APOL1 variants were protein-coding sequences.” The investigators determined that the top two variants almost always co-occurred on the same chromosome and each changed an amino acid somewhere on the protein. “We called this the g1 risk allele, and when we controlled for it, a new variant popped up, which we called the g2 allele,” he said. Controlling for both the g1 and g2 alleles, “the entire association of this region disappeared and there was no signal left for MYH9.”
The investigators also tested the genetic variants in hypertension-associated ESRD in a larger cohort of 1,030 African Americans with the disease and 1,025 geographically matched control subjects and found that the same two variants had a tremendous impact on the development of the disease. “When combined together, the P value was on the order of 10 to the minus 60, or 35 orders of magnitude greater than the very best MYH9 [result],” Dr. Friedman said. Surprisingly, he noted, “we found that these disease variants follow a recessive pattern, and together the odds ratio was on the order of 7-10, while the very largest effect sizes of the common variants that affect hypertension or diabetes will confer odds ratios of about 1.4-1.5.”
The APOL1 gene and these variants “tend to fall into a different category than what we've all been familiar with in the past,” Dr. Friedman explained. “Most disease variants are either very rare with powerful effects or common with relatively modest effects. The APOL1 variants have a surprising combination of effect size and frequency such that 50%-60% of African Americans carry g1 and/or g2 risk alleles, and 50% are risk homozygous, meaning they are at the highest risk for kidney disease. That translates into about 3.5 million individuals.” Further, while the odds ratios for the more common forms of nondiabetic kidney disease in this population range from 7 to 10, “we're starting to see odds ratios in the range of 30 for diseases like HIV nephropathy.”
To determine how much of nondiabetic kidney disease can be explained by the genetic variants, the investigators reviewed data from the prospective population-based Dallas Heart Study and compared the outcomes of European American (white) patients, in whom the renal risk alleles are essentially nonexistent, with those of African Americans with zero or one risk allele and those with both risk alleles. Looking at urine protein levels, an indicator of renal microvascular disease, “we found that black individuals with zero or one copy of the risk allele had rates more similar to whites than to blacks with two alleles,” Dr. Friedman reported.
The results were even more striking for hard measures of renal function, he said, such as rates of chronic kidney disease or impaired renal function, indicated by glomerular filtration rates of less than 60 mL/min per 1.73 m².
In their preliminary review of the data, they couldn't “tell any difference between African Americans with zero or one copy of the allele and Caucasians, but African Americans with two renal risk alleles have at least a 10-fold increase in kidney failure,” Dr. Friedman stated. “To our surprise, this really only applies to nondiabetic kidney disease. The alleles have essentially no effect that we can detect on diabetic renal disease.”
This realization led the investigators to revisit the issue of natural selection. It turns out, according to Dr. Friedman, “APOL1 is the genetic source for the immunity factor that protected people from African sleeping sickness, a parasitic infection caused by Trypanosoma brucei gambiense.” Similar to selection for the gene variants associated with sickle cell anemia, he explained, “inheriting one copy of the APOL1 gene risk variant provides protection from the parasite, while having two copies seems to increase the risk of kidney disease up to 10-fold.” Through natural selection, as more people survived African sleeping sickness, the percentage of the population with kidney disease risk variants increased, he said.
The investigators are currently studying the risk variants intensively to figure out how they work. “We think they may differentially regulate processes such as apoptosis and cell repair, may function as a chloride channel in mammalian systems, as they do in lysosomes, and may affect biological function,” Dr. Friedman hypothesized.
In addition to exploring the underlying mechanisms, the potential clinical value of the genetic discovery is also being considered. “This may help us improve risk stratification,” Dr. Friedman said. “It's one thing to say that African Americans have a fourfold increased risk of kidney disease. It's better to find the tag SNP [single nucleotide polymorphism] that will tell if an individual might have an increased risk. If you can actually find the causal variant, then you can potentially predict with much higher success who is and is not at risk for kidney failure,” he stated. “The problem is that it works pretty well in Western African populations, such as Nigerians, but not as well in East Africans, such as Ethiopians.”
One of the main questions that Dr. Friedman and his colleagues currently are pursuing is whether hypertension causes kidney disease in these at-risk individuals or whether hypertension is the result of primary renal vascular disease. “To us, the fact that the very same genetic variants cause hypertension-associated ESRD and FSGS, a primary renal microvascular disease, suggests that these may be the same disease process that we are either catching at different stages or that have different modifiers, and that hypertension in these patients may just be a symptom and not a cause of kidney failure,” Dr. Friedman said.
A cure for nondiabetic kidney disease, which accounts for more than $8.2 billion annually in dialysis costs, may directly result from the APOL1 finding, Dr. Friedman said. “It's that important.”
Dr. Friedman reported no financial conflicts of interest related to his presentation.
Lupus Nephritis: Many Unanswered Questions
Lupus nephritis is the most important complication of systemic lupus erythematosus because it is closely linked to survival and morbidity in patients with the autoimmune disease. It is also one of the most controversial, according to Dr. David R.W. Jayne, director of the Vasculitis and Lupus Clinic at Addenbrooke’s Hospital in Cambridge, England.
Specifically, recent data indicate that more than 40% of SLE patients who develop nephritis develop progressive kidney disease, and 20% die within 12 years, "which means that more than half of the [SLE] patients diagnosed with lupus nephritis reach a hard end point, let alone the other problems that are inherent to the disease," said Dr. Jayne, a nephrologist. Variations in disease presentation, histologic patterns, course, and outcomes complicate management, as does the absence of a single, accepted standard of care and well-defined treatment aims, he said, noting that "there is more uncertainty about how to treat [lupus nephritis] than any other subject within nephrology."
In this month’s column, Dr. Jayne will address some of the reasons behind this uncertainty and the current management options.
QUESTION: At the 2011 Annual European Congress of Rheumatology in London, you stressed that lupus nephritis is a controversial topic within rheumatology and nephrology, and joked that your presentations on the topic are the only forum "where people scream at me." What makes lupus nephritis such a hot-button topic?
Dr. Jayne: We’re dealing with young, often female patients with a potentially devastating disease for which we’ve had really poor evidence on which to base treatments, and that drives anxiety. Also, the treatment (such as high-dose steroids and cyclophosphamide) carries major toxic risks. The reality is that lupus nephritis is heterogeneous and the pathology is complex. The current classification system divides the condition into six classes according to the severity of the lesions observed (Kidney Int. 2004;65:521-30). Most studies focus on proliferative nephritis (classes III and IV) and membranous nephritis (class V) because these are associated with an increased risk of kidney failure, yet are amenable to therapy. In reality, the pathology is more complex because of mixed membranous and proliferative lesions and other glomerular and nonglomerular kidney problems that can occur at the same time. These problems – including thrombotic microangiopathy, podocytopathy, tubulointerstitial nephritis, and vascular disease – are not reflected in the current classification, yet they have an impact on long-term outcomes.
QUESTION: Given the complexity of the disease, what is the optimal management course?
Dr. Jayne: In general, the treatment of lupus nephritis comprises a period of intensive immunosuppressive treatment followed by a period of less-intense immunosuppressive therapy. Opinions regarding optimal treatment vary widely. The reality is, we spend all of our time talking about which immunosuppressive to use and how much steroid to use. But in many ways, that’s one of the less-important aspects of managing the disease. It’s so multifaceted, and there are many other things that contribute to a good outcome. It’s the speed of diagnosis, referral, and initiation of treatment that actually drives improvements in outcome.
QUESTION: What are the current evidence-based treatment options?
Dr. Jayne: There is a shopping list of options. Among them are the National Institutes of Health "long" protocol, which consists of 15 g of pulse cyclophosphamide titrated over 2 years; the NIH "short" protocol, consisting of six pulses of cyclophosphamide, for a total of 7.5 g, followed by a switch to a safer immunosuppressive agent, such as azathioprine or mycophenolate mofetil; and the low-dose Euro-Lupus regimen, including six fixed-dose pulses of 500-mg cyclophosphamide, for a total of 3 g over 12 weeks, followed by azathioprine or mycophenolate.
The recent ALMS (Aspreva Lupus Management Study) compared mycophenolate mofetil vs. intravenous cyclophosphamide with the same dose of background steroids. An alternative option is starting with mycophenolate mofetil at 3 g/day, then stepping down at 6 months to 2 g/day in responders. The data suggest that mycophenolate mofetil is as effective as high-dose intravenous cyclophosphamide for inducing remission.
As maintenance treatment, mycophenolate mofetil appears to be superior to azathioprine, which itself is similarly effective to ciclosporin for preventing or reducing relapse risk (J. Am. Soc. Nephrol. 2009;20:1103-12).
QUESTION: What is the role of steroids in lupus nephritis treatment?
Dr. Jayne: That’s one of the big unanswered questions: what to do with steroids? Whenever we’re involved with trial design, the longest, most agonizing discussions focus on the steroid regimen. There is considerable variability in prednisone regimens, with beginning doses ranging from 0.5 mg/kg per day to 1 mg/kg per day, and tapering over a period of months to maintain control of nephritis and extrarenal symptoms. Studies have shown, not surprisingly, that higher steroid doses are linked to higher response rates but also to an increased likelihood of severe adverse events. There is also no consensus about when to stop steroids. We generally taper down to 10-15 mg/day by 6 months and continue a lower steroid dose for 2 or more years, but there is considerable variability in this as well.
Interestingly, we conducted a questionnaire study in which we asked 71 lupus experts whether they would stop steroids in a 21-year-old female with class IV nephritis who had been in stable remission for 2 years, and there was an exact balance of opinion: One-third each would stop, wouldn’t stop, and didn’t know. When we looked for differences among the responders, we found that physicians in the United States and Canada had more enthusiasm for stopping steroids, while those in Europe preferred to keep them going. Also, by specialty, nephrologists were keener to stop them and rheumatologists were keener to keep going. In other words, there is no universal standard of care to answer this question.
QUESTION: What is the best measure for assessing treatment response?
Dr. Jayne: Typically, the criteria are proteinuria, urinary sediment, and renal function. The majority of patients will have normal renal function or near-normal renal function when they come to us, so the [glomerular filtration rate] is turning out not to be a useful marker for renal response. Yes, the loss of hematuria is useful and that’s part of a complete response definition, but really it’s the reduction of proteinuria that drives renal response definitions. So a reduction by 50% from baseline to subnephrotic levels (less than 3 g/day) is a partial response, and a complete response is down to less than 0.5 g/day.
Proteinuria means a lot of different things. One of the confusions in managing lupus nephritis is that proteinuria does not just reflect activity. You can switch off all of the activity in the kidney, but the proteinuria declines quite slowly. It takes a long time – up to 3 years – for the proteinuria to get as good as it’s going to get, but too many studies have short, 6-month end points. We really look at 2 years as being the induction period. Initially, proteinuria is reflecting disease activity, but subsequently reflects the recovery phase of the glomerulus. The immune complexes are being solubilized and removed. This is the remodeling phase, which lasts a long time. Then there is the fibrotic phase that contributes a relatively small amount to proteinuria.
For this reason, nephrologists would really love repeat renal biopsies. Several small studies have demonstrated that patients often have persistent disease activity even when proteinuria has gone down to relatively low levels. So even when the parameters we measure have gotten better, the activity may not be gone. A renal biopsy will tell you whether there has been a change in scarring or chronicity.
QUESTION: What’s on the horizon for the management of lupus nephritis?
Dr. Jayne: There are some new treatment directions, including tacrolimus (Prograf). This drug, which is widely used in the prevention of renal transplant rejections, also appears to have benefits for lupus nephritis. Tacrolimus has direct effects on the podocyte where it influences the cytoskeleton and the permeability of the glomerular basement membrane, as well as immunosuppressant effects, so it has a dual action in lupus nephritis, but we need more data.
The role of B-cell depletion has also been explored. Many physicians have been using rituximab (Rituxan) in their clinics for a number of years, and data from retrospective cohort studies suggest that it is effective for relapsing or refractory disease. However, the findings of double-blind, placebo-controlled trials of rituximab and another B-cell–depleting drug, ocrelizumab, when added on top of either mycophenolate mofetil or cyclophosphamide, suggested only relatively small treatment differences between the study drug and placebo. The failure of the trials may be associated with aspects of their design, such as short follow-up and small sample size.
QUESTION: Who should manage lupus nephritis?
Dr. Jayne: Should it be the nephrologist or the rheumatologist? That’s the most controversial issue of all. In reality, it should be a partnership.
This column, "Ask the Expert," regularly appears in Rheumatology News, an Elsevier publication. Dr. Jayne reported no financial conflicts of interest.
Lupus nephritis is the most important complication of systemic lupus erythematosus (SLE) because it is closely linked to survival and morbidity in patients with the autoimmune disease. It is also one of the most controversial, according to Dr. David R.W. Jayne, director of the Vasculitis and Lupus Clinic at Addenbrooke’s Hospital in Cambridge, England.
Specifically, recent data indicate that more than 40% of SLE patients with nephritis go on to develop progressive kidney disease, and 20% die within 12 years, "which means that more than half of the [SLE] patients diagnosed with lupus nephritis reach a hard end point, let alone the other problems that are inherent to the disease," said Dr. Jayne, a nephrologist. Variations in disease presentation, histologic patterns, course, and outcomes complicate management, as does the absence of a single, accepted standard of care and well-defined treatment aims, he said, noting that "there is more uncertainty about how to treat [lupus nephritis] than any other subject within nephrology."
In this month’s column, Dr. Jayne will address some of the reasons behind this uncertainty and the current management options.
QUESTION: At the 2011 Annual European Congress of Rheumatology in London, you stressed that lupus nephritis is a controversial topic within rheumatology and nephrology, and joked that your presentations on the topic are the only forum "where people scream at me." What makes lupus nephritis such a hot-button topic?
Dr. Jayne: We’re dealing with young, often female patients with a potentially devastating disease for which we’ve had little good evidence on which to base treatment, and that drives anxiety. Also, the treatment (such as high-dose steroids and cyclophosphamide) carries major toxic risks. The reality is that lupus nephritis is heterogeneous and the pathology is complex. The current classification system divides the condition into six classes according to the severity of the lesions observed (Kidney Int. 2004;65:521-30). Most studies focus on proliferative nephritis (classes III and IV) and membranous nephritis (class V) because these are associated with an increased risk of kidney failure, yet are amenable to therapy. In reality, the pathology is more complex because of mixed membranous and proliferative lesions and other glomerular and nonglomerular kidney problems that can occur at the same time. These problems – including thrombotic microangiopathy, podocytopathy, tubulointerstitial nephritis, and vascular disease – are not reflected in the current classification, yet they have an impact on long-term outcomes.
QUESTION: Given the complexity of the disease, what is the optimal management course?
Dr. Jayne: In general, the treatment of lupus nephritis comprises a period of intensive immunosuppressive treatment followed by a period of less-intense immunosuppressive therapy. Opinions regarding optimal treatment vary widely. The reality is, we spend all of our time talking about which immunosuppressive to use and how much steroid to use. But in many ways, that’s one of the less-important aspects of managing the disease. It’s so multifaceted, and there are many other things that contribute to a good outcome. It’s the speed of diagnosis, referral, and initiation of treatment that actually drives improvements in outcome.
QUESTION: What are the current evidence-based treatment options?
Dr. Jayne: There is a shopping list of options. Among them are the National Institutes of Health "long" protocol, which consists of 15 g of pulse cyclophosphamide titrated over 2 years; the NIH "short" protocol, consisting of six pulses of cyclophosphamide, for a total of 7.5 g, followed by a switch to a safer immunosuppressive agent, such as azathioprine or mycophenolate mofetil; and the low-dose Euro-Lupus regimen, including six fixed-dose pulses of 500-mg cyclophosphamide, for a total of 3 g over 12 weeks, followed by azathioprine or mycophenolate.
The recent ALMS (Aspreva Lupus Management Study) compared mycophenolate mofetil vs. intravenous cyclophosphamide with the same dose of background steroids. An alternative option is starting with mycophenolate mofetil at 3 g/day, then stepping down at 6 months to 2 g/day in responders. The data suggest that mycophenolate mofetil is as effective as high-dose intravenous cyclophosphamide for inducing remission.
As maintenance treatment, mycophenolate mofetil appears to be superior to azathioprine, which in turn is similar in efficacy to ciclosporin for preventing or reducing relapse risk (J. Am. Soc. Nephrol. 2009;20:1103-12).
QUESTION: What is the role of steroids in lupus nephritis treatment?
Dr. Jayne: That’s one of the big unanswered questions: what to do with steroids? Whenever we’re involved with trial design, the longest, most agonizing discussions focus on the steroid regimen. There is considerable variability in prednisone regimens, with starting doses ranging from 0.5 mg/kg per day to 1 mg/kg per day, and tapering over a period of months to maintain control of nephritis and extrarenal symptoms. Studies have shown, not surprisingly, that higher steroid doses are linked to higher response rates but also to an increased likelihood of severe adverse events. There is also no consensus about when to stop steroids. We generally taper down to 10-15 mg/day by 6 months and continue a lower steroid dose for 2 or more years, but here, too, practice varies considerably.
Interestingly, we conducted a questionnaire study in which we asked 71 lupus experts whether they would stop steroids in a 21-year-old female with class IV nephritis who was in stable remission for 2 years, and there was an exact balance of opinion: One-third each would stop, wouldn’t stop, and didn’t know. When we looked for differences among the responders, we found that physicians in the United States and Canada had more enthusiasm for stopping steroids, while those in Europe preferred to keep them going. Also, by specialty, nephrologists were keener to stop them and rheumatologists were keener to keep going. In other words, there is no universal standard of care to answer this question.
QUESTION: What is the best measure for assessing treatment response?
Dr. Jayne: Typically, the criteria are proteinuria, urinary sediment, and renal function. The majority of patients will have normal renal function or near-normal renal function when they come to us, so the [glomerular filtration rate] is turning out not to be a useful marker for renal response. Yes, the loss of hematuria is useful and that’s part of a complete response definition, but really it’s the reduction of proteinuria that drives renal response definitions. So a reduction by 50% from baseline to subnephrotic levels (less than 3 g/day) is a partial response, and a complete response is down to less than 0.5 g/day.
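Expressed as a rule, the proteinuria-based response definitions just described might be sketched as follows. This is an illustrative classifier using the thresholds quoted above (complete response below 0.5 g/day; partial response as a 50% reduction from baseline to subnephrotic levels below 3 g/day), not a validated clinical tool:

```python
def renal_response(baseline_g_day: float, current_g_day: float) -> str:
    """Classify renal response from 24-hour proteinuria (g/day).

    Thresholds follow the definitions quoted in the interview:
    complete response is proteinuria below 0.5 g/day; partial
    response is a >=50% reduction from baseline to subnephrotic
    levels (<3 g/day). Illustrative sketch only.
    """
    if current_g_day < 0.5:
        return "complete response"
    if current_g_day < 3.0 and current_g_day <= 0.5 * baseline_g_day:
        return "partial response"
    return "no response"
```

For example, a patient whose proteinuria falls from 6 g/day to 2.5 g/day meets the partial-response definition, while a fall to 4 g/day meets neither, even though it is a substantial reduction.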
Proteinuria means a lot of different things. One of the confusions in managing lupus nephritis is that proteinuria does not just reflect activity. You can switch off all of the activity in the kidney, but the proteinuria declines quite slowly. It takes a long time – up to 3 years – for the proteinuria to get as good as it’s going to get, but too many studies have short, 6-month end points. We really look at 2 years as being the induction period. Initially, proteinuria reflects disease activity; subsequently it reflects the recovery phase of the glomerulus. The immune complexes are being solubilized and removed. This is the remodeling phase, which lasts a long time. Then there is the fibrotic phase, which contributes a relatively small amount to proteinuria.
For this reason, nephrologists would really love repeat renal biopsies. Several small studies have demonstrated that patients often have persistent disease activity even when proteinuria has gone down to relatively low levels. So even when the parameters we measure have gotten better, the activity may not be gone. A renal biopsy will tell you whether there has been a change in scarring or chronicity.
QUESTION: What’s on the horizon for the management of lupus nephritis?
Dr. Jayne: There are some new treatment directions, including tacrolimus (Prograf). This drug, which is widely used in the prevention of renal transplant rejections, also appears to have benefits for lupus nephritis. Tacrolimus has direct effects on the podocyte where it influences the cytoskeleton and the permeability of the glomerular basement membrane, as well as immunosuppressant effects, so it has a dual action in lupus nephritis, but we need more data.
The role of B-cell depletion has also been explored. Many physicians have been using rituximab (Rituxan) in their clinics for a number of years, and data from retrospective cohort studies suggest that it is effective for relapsing or refractory disease. However, the findings of double-blind, placebo-controlled trials of rituximab and another B-cell–depleting drug, ocrelizumab, when added on top of either mycophenolate mofetil or cyclophosphamide, suggested only relatively small treatment differences between the study drug and placebo. The failure of the trials may be associated with aspects of their design, such as short follow-up and small sample size.
QUESTION: Who should manage lupus nephritis?
Dr. Jayne: Should it be the nephrologist or the rheumatologist? That’s the most controversial issue of all. In reality, it should be a partnership.
This column, "Ask the Expert," regularly appears in Rheumatology News, an Elsevier publication. Dr. Jayne reported no financial conflicts of interest.
FDA: High-Dose Citalopram Tied to Heart Risks
The antidepressant citalopram should not be used at doses greater than 40 mg per day because such doses can lead to prolongation of the QT interval, the Food and Drug Administration announced Aug. 24 in a drug safety communication. Further, the drug should not be prescribed to patients with congenital long QT syndrome, and extra precautions should be taken for patients with other underlying heart conditions, the agency said.
Studies do not show a benefit in the treatment of depression at doses of the selective serotonin reuptake inhibitor higher than 40 mg/day. Previously, the citalopram (Celexa) label stated that some patients might require a dose of 60 mg/day.
The agency’s dosage recommendation is based on postmarketing reports of QT prolongation associated with citalopram and results of a randomized, double-blind, placebo-controlled crossover study evaluating the effects of 20-mg and 60-mg doses of citalopram on the QT interval in adults. The latter study showed that, compared with placebo, the maximum mean prolongations in the individually corrected QT intervals for patients randomized to 20-mg and 60-mg doses of citalopram, respectively, were 8.5 msec and 18.0 msec. The prolongation of the corrected QT interval was estimated to be 12.6 msec, based on the relationship between serum citalopram concentration and QT interval, the FDA statement said.
Because such dose-dependent changes in the electrical activity of the heart can lead to abnormal heart rhythms, including the potentially fatal torsades de pointes, and in the absence of evidence demonstrating that citalopram at doses higher than 40 mg/day is beneficial in the treatment of depression, the FDA determined that citalopram should no longer be used at doses above 40 mg/day and that it should never be used in patients with congenital long QT syndrome.
Also, because individuals with underlying heart conditions, such as heart failure or bradyarrhythmias, and those predisposed to low serum potassium and magnesium because of comorbid illness or other drugs, are at particular risk, the FDA has made the following safety recommendations:
• Correct hypokalemia and hypomagnesemia before administering citalopram and monitor electrolytes as clinically indicated.
• Consider more frequent electrocardiogram monitoring for patients with congestive heart failure or bradyarrhythmias, or those taking concomitant medications that prolong the QT interval.
• The maximum recommended dose for patients who have hepatic impairment, are older than 60 years, are CYP 2C19 poor metabolizers, or are taking cimetidine is 20 mg/day, because each of these factors can increase blood levels of citalopram, thus increasing the risk of QT interval prolongation and torsades de pointes.
• Advise patients to contact a health care professional if they experience signs or symptoms of an abnormal heart rate or rhythm while taking citalopram.
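The dose limits above amount to a small decision rule, which can be sketched as follows. The function name and parameters are hypothetical, chosen only to mirror the factors the FDA lists; this is an illustration of the stated recommendations, not a prescribing tool:

```python
def citalopram_max_dose_mg(age_years: int,
                           hepatic_impairment: bool = False,
                           cyp2c19_poor_metabolizer: bool = False,
                           taking_cimetidine: bool = False,
                           congenital_long_qt: bool = False):
    """Return the maximum recommended daily citalopram dose (mg),
    or None where the FDA says the drug should not be used.
    Illustrative sketch of the recommendations summarized above.
    """
    if congenital_long_qt:
        return None  # never to be used in congenital long QT syndrome
    if (hepatic_impairment or age_years > 60
            or cyp2c19_poor_metabolizer or taking_cimetidine):
        return 20  # factors that raise citalopram blood levels
    return 40  # general ceiling; doses above 40 mg/day no longer advised
```

So an otherwise healthy 30-year-old would fall under the new 40 mg/day ceiling, while a 65-year-old or a patient taking cimetidine would be capped at 20 mg/day.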
The citalopram drug label has been revised to include the new drug dosage and usage recommendations, and the revised package insert will include information about the potential for QT interval prolongation and torsades de pointes.
FROM THE FOOD AND DRUG ADMINISTRATION