How multiple infections make malaria worse
Image by Ute Frevert and Margaret Shear
New research suggests that infections with 2 types of malaria parasite lead to greater health risks because 1 species helps the other thrive.
Investigators sought to understand what happens when the 2 most common malaria parasites cause infection at the same time, as they are known to attack the body in different ways.
The team found the first parasite helps provide the second with more of the resources it needs to prosper.
“Immune responses are assumed to determine the outcome of interactions between parasite species, but our study clearly shows that resources can be more important,” said Sarah Reece, of the University of Edinburgh in Scotland.
“Our findings also challenge ideas that 1 species will outcompete the other, which explains why infections involving 2 parasite species can pose a greater health risk to patients.”
Dr Reece and her colleagues reported these findings in Ecology Letters.
In humans, the malaria parasite Plasmodium falciparum infects red blood cells of all ages, while the Plasmodium vivax parasite attacks only young red blood cells.
The current study, conducted in mice with equivalent malaria parasites (P chabaudi and P yoelii), showed that the body’s response to the first infection produces more of the type of red blood cell the second parasite needs.
During the first infection, millions of red blood cells are destroyed, and the body responds by replenishing these cells.
The fresh cells then become infected by the second type of parasite, making the infection worse.
The investigators said these results appear to explain why infections with both P falciparum and P vivax often have worse outcomes for patients than infections with a single malaria parasite.
Study explains link between malignant hyperthermia and bleeding abnormalities
A new study helps explain why some patients with malignant hyperthermia may suffer from excessive bleeding.
The findings suggest a mutation that causes malignant hyperthermia can disrupt calcium signaling in vascular smooth muscle cells, leading to bleeding abnormalities.
What’s more, researchers found that a drug clinically approved to treat muscle-related symptoms in malignant hyperthermia helped stop bleeding.
Rubén Lopez, of Basel University Hospital in Switzerland, and his colleagues conducted this research and reported their findings in Science Signaling.
Patients with malignant hyperthermia experience dangerously high fever and severe muscle contractions when exposed to general anesthesia.
Malignant hyperthermia is often caused by mutations in the RYR1 gene, which encodes a calcium channel in skeletal muscle called ryanodine receptor type 1 (RyR1).
For some patients with these mutations, malignant hyperthermia is accompanied by a mild bleeding disorder, but whether the 2 conditions are connected has not been clear.
Working in a mouse model of malignant hyperthermia, researchers found that vascular smooth muscle cells with mutated RyR1 displayed frequent spikes in calcium levels, known as calcium sparks. These sparks led to excessive vasodilation and prolonged bleeding.
Blocking the receptor with dantrolene, a drug used to treat malignant hyperthermia, helped reduce bleeding in the mice and in a single human patient, pointing to an unexpected benefit from the drug.
The findings suggest that mutations in RyR1, which is also found in other types of smooth muscle cells such as those in the bladder and uterus, may have a wider range of effects than previously thought.
David Henry's JCSO podcast, July 2016
In the July podcast for The Journal of Community and Supportive Oncology, Dr David Henry discusses an editorial by Dr Linda Bosserman, in which she presents the case for pathways and the importance of processes and teamwork in paving the way for value-based care. Survivor care is the focus of 2 Original Reports, in which investigators report on adolescent and young adult perceptions of cancer survivor care and supportive programming and on the symptoms, unmet need, and quality of life among recent breast cancer survivors. Also in the Original Report section are reports on the impact of loss of income and medicine costs on the financial burden for cancer patients in Australia and on the use of a gene panel for testing for hereditary ovarian cancer. Dr Henry also looks at 2 Case Reports, one in which a patient undergoes multivisceral resection for growing teratoma syndrome and another in which a patient presents with aleukemic acute lymphoblastic leukemia with unusual clinical features. Diabetes management in cancer patients is the topic of a lengthy and informative interview between Dr Henry and Dr Todd Brown.
Listen to the podcast below.
Absorb bioresorbable vascular scaffold wins FDA approval
The Food and Drug Administration approved the first fully absorbable vascular scaffold designed for use in coronary arteries, the Absorb GT1 bioresorbable vascular scaffold system, made by Abbott.
Concurrent with the FDA’s announcement on July 5, the company said that it plans to start immediate commercial rollout of the Absorb bioresorbable vascular scaffold (BVS). Initial availability will be limited to the roughly 100 most active sites that participated in the ABSORB III trial, the pivotal study that established noninferiority of the BVS, compared with a state-of-the-art metallic coronary stent during 1-year follow-up, according to a company spokesman.
However, the ABSORB III results, reported in October 2015, did not document any superiority of the BVS, compared with a metallic stent. The advantages of a BVS remain unproven for now; they rest on the hypothesis that devices used in percutaneous coronary interventions that slowly degrade away eliminate the residual metallic structure in a patient’s coronaries, along with the long-term threat that structure could pose for thrombosis or interference with subsequent coronary procedures.
“All the potential advantages are hypothetical at this point,” said Hiram G. Bezerra, MD, an investigator in the ABSORB III trial and director of the cardiac catheterization laboratory at University Hospitals Case Medical Center in Cleveland. However, “if you have a metallic stent it lasts a lifetime, creating a metallic cage” that could interfere with a possible later coronary procedure or be the site for thrombus formation. Disappearance of the BVS also creates the possibility for eventual restoration of more normal vasomotion in the coronary wall, said Dr. Bezerra, a self-professed “enthusiast” for the BVS alternative.
A major limiting factor for BVS use today is coronary diameter because the Absorb BVS is bulkier than metallic stents. The ABSORB III trial limited use of the BVS to coronary vessels with a reference-vessel diameter by visual assessment of at least 2.5 mm, with an upper limit of 3.75 mm. Other limiting factors can be coronary calcification and tortuosity, although Dr. Bezerra said that these obstacles are usually overcome with a more time-consuming procedure if the operator is committed to placing a BVS.
Another variable will be the cost of the BVS. According to the Abbott spokesman, the device “will be priced so that it will be broadly accessible to hospitals.” Also, the Absorb BVS will receive payer reimbursement comparable to a drug-eluting stent using existing reimbursement codes, the spokesman said. Abbott will require inexperienced operators to take a training course to learn proper placement technique.
Dr. Bezerra admitted that he is probably an outlier in his plan to quickly make the BVS a mainstay of his practice. “I think adoption will be slow in the beginning” for most U.S. operators, he predicted. One of his Cleveland colleagues, speaking about the near-term prospects of BVS use last October when the ABSORB III results came out, predicted that immediate use might occur in about 10%-15% of patients undergoing percutaneous coronary interventions, similar to the usage level in Europe, where this BVS has been available for several years.
Dr. Bezerra has been a consultant to Abbott and St. Jude. He was an investigator on the ABSORB III trial.
On Twitter @mitchelzoler
HCQ eye toxicity needs experience to assess
LONDON – Retinopathy in patients taking long-term hydroxychloroquine for rheumatic conditions requires assessment by those experienced with specialized ophthalmic imaging, according to study findings presented at the European Congress of Rheumatology.
Nonspecific abnormalities, which often are unrelated to hydroxychloroquine (HCQ), can be seen with many of the tests recommended by current ophthalmology guidelines. These changes need “careful interpretation by retina specialists,” the study’s investigators wrote in a poster presentation.
HCQ is used widely for the treatment of systemic lupus erythematosus (SLE), rheumatoid arthritis, and many other inflammatory or autoimmune conditions, but it can cause irreversible eye damage, most often associated with prolonged (greater than 5 years) use. Specifically, it can cause a type of end-stage retinopathy called bull’s-eye maculopathy, in which the fovea becomes hyperpigmented, much like the bull’s-eye on a dartboard. This can lead to substantial vision loss (blind spots) if not caught early.
Although it is reasonably rare to develop end-stage retinopathy, there is currently no treatment for HCQ-induced retinopathy. Stopping the drug may not necessarily stop the retinal damage, and drug withdrawal may not be an option in many patients given the lack of alternative options to treat the symptoms of SLE, study author and ophthalmologist Syed Mahmood Ali Shah, MBBS, MD, said in an interview.
Dr. Shah and his associates at Johns Hopkins University in Baltimore reported on applying the 2011 American Academy of Ophthalmology (AAO) guidelines on screening for HCQ retinopathy (Ophthalmology. 2011;118:415-22) to an academic practice. They also estimated the prevalence of HCQ retinopathy among 135 consecutively treated patients with SLE using recommended tests. The mean duration of HCQ use was 12.5 years.
The 2011 AAO guidelines – which in March 2016 were updated (Ophthalmology 2016 Jun;123:1386-94) – recommended the use of three “ancillary” tests in addition to the usual clinical ophthalmic examination and assessment of visual fields: optical coherence tomography (OCT), fundus autofluorescence (FAF), and multifocal electroretinography (mfERG). Dr. Shah and his colleagues used these three tests together with eye-tracking microperimetry (MP) as a substitute for Humphrey Visual Fields (HVF), which is a common visual field test used in the United States.
One difference between the 2011 guidelines and the 2016 revision is that “the baseline exam can now be performed relying [only] on the fundus exam, with additional imaging required only for abnormal patients,” Dr. Shah said. “Overall, the guidelines have not changed on how often and how much you follow up,” he added. “The change is that there is no need to do these tests at baseline unless changes of the fundus are present.” However, OCT has become more widely used in many offices, has been recognized as the most useful objective test, and should be performed if there are any abnormal findings of the fundus.
A total of 266 eyes were examined using these imaging methods and interpreted by experienced retina specialists. Overall, HCQ-related abnormalities were noted in 14 eyes (5%) using OCT, 18 (7%) using FAF, 27 (10%) using mfERG, and 20 (7%) using MP.
MP had the lowest discrepancy between the overall number of eyes with abnormalities (72 [27%] of 266) detected and the number of eyes with abnormalities related to HCQ (20 [28%] of 72), followed by OCT (21% and 25%, respectively), FAF (19% and 35%) and mfERG (37% and 28%). Only four patients (3%) showed changes in all four tests suggestive of HCQ retinopathy.
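To make the paired percentages easier to parse, here is a minimal back-of-the-envelope sketch (in Python, for illustration only) that reconstructs the approximate eye counts behind them, assuming each pair is read as (abnormal eyes as a share of all 266 examined, HCQ-related eyes as a share of those abnormal eyes); the published percentages are rounded, so the derived counts are approximate.

```python
# Rough consistency check of the paired percentages reported above.
# Assumption: each pair is (abnormal eyes / all 266 eyes examined,
# HCQ-related eyes / abnormal eyes for that test). The percentages are
# rounded in the text, so the reconstructed counts are approximate.

TOTAL_EYES = 266

# test: (abnormal share of all eyes, HCQ-related share of abnormal eyes)
reported = {
    "MP":    (0.27, 0.28),
    "OCT":   (0.21, 0.25),
    "FAF":   (0.19, 0.35),
    "mfERG": (0.37, 0.28),
}

for test, (abnormal_share, hcq_share) in reported.items():
    abnormal_eyes = abnormal_share * TOTAL_EYES   # any abnormality, any cause
    hcq_eyes = hcq_share * abnormal_eyes          # HCQ-related subset
    print(f"{test:>5}: ~{abnormal_eyes:.0f} abnormal eyes, ~{hcq_eyes:.0f} HCQ-related")

# Rounded output lines up with the counts quoted in the text:
# MP ~72/~20, OCT ~56/~14, FAF ~51/~18, mfERG ~98/~28 (the text reports 27).
```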
In the absence of baseline data from the AAO-recommended ancillary tests before the use of HCQ, “it may be difficult to interpret changes seen on these tests since most of the screenings are done by regular ophthalmologists who lack the equipment and experience with specialized testing such as mfERG, FAF, and OCT,” Dr. Shah and his coauthors noted. “We found a substantial number of cases with abnormalities unrelated to HCQ.”
Giving some practical advice, Dr. Shah noted that “before a patient starts treatment with HCQ, they should undergo a baseline ophthalmic assessment. Then if the patient complains of any vision changes, even if they have been taking the drug for less than 5 years, they should be reassessed.”
While repeat follow-up is, of course, necessary, he intimated that it is necessary to find a balance of risk and cost in regard to the frequency of screening for drug-related damage. “The American Academy of Ophthalmology currently recommends that a baseline fundus exam be performed shortly after starting HCQ. Ancillary OCT and visual fields shall only be performed if the fundus is abnormal at this baseline exam. However, since most retina specialists get OCT and visual field testing anyway it is wise to look at these as well,” he suggested. After 5 years of using the drug, they must be seen more regularly, and this is the point when ophthalmologists can decide if this should be every 6 months or annually, with the latter recommended by the AAO guidelines for patients with no additional risk factors.
The study was supported by noncommercial grants. Dr. Shah had no conflicts of interest to disclose.
AT THE EULAR 2016 CONGRESS
Key clinical point: Several eye abnormalities can be mistaken for hydroxychloroquine-related eye toxicity, making specialist ophthalmic assessment paramount.
Major finding: Only four patients (3%) showed changes in all four tests suggestive of HCQ retinopathy.
Data source: Observational study of 135 patients with SLE being seen for suspected hydroxychloroquine-related retinopathy at an academic practice.
Disclosures: The study was supported by noncommercial grants. Dr. Shah had no conflicts of interest to disclose.
mtDNA level predicts IVF embryo viability
HELSINKI – Mitochondrial DNA level appears to be a useful biomarker for in vitro fertilization embryo viability, according to findings from a blinded prospective non-selection study.
An analysis of 280 chromosomally normal blastocysts showed that 15 (5.4%) contained unusually high levels of mitochondrial DNA (mtDNA), while the remaining blastocysts had normal or low mtDNA levels. Of the 111 blastocyst transfers for which outcome data were available, 78 (70%) led to ongoing pregnancies, all of which involved blastocysts with normal or low mtDNA levels, while 8 (24%) of the 33 blastocysts that failed to implant had unusually high mtDNA levels, Elpida Fragouli, PhD, reported at the annual meeting of the European Society of Human Reproduction and Embryology.
Thus, the ongoing pregnancy rate for morphologically good, euploid blastocysts was 76% for those with normal/low mtDNA levels, compared with 0% for those with elevated mtDNA levels – a highly statistically significant difference. The overall pregnancy rate was 70%, said Dr. Fragouli of Reprogenetics UK and the University of Oxford.
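As a quick arithmetic check of how the 70% overall rate and the 76% rate for normal/low-mtDNA blastocysts fit together, the short sketch below (illustrative only) recomputes both from the counts reported above, assuming the 8 failed transfers with elevated mtDNA were the only elevated-mtDNA blastocysts among the 111 transfers with outcome data.

```python
# Reconstructing the reported ongoing pregnancy rates from the raw counts.
# Assumption: among the 111 transfers with outcome data, only the 8 failed
# implantations noted above involved blastocysts with elevated mtDNA.

transfers_with_outcomes = 111
ongoing_pregnancies = 78      # all arose from normal/low-mtDNA blastocysts
failed_elevated_mtdna = 8     # failed transfers with elevated mtDNA

overall_rate = ongoing_pregnancies / transfers_with_outcomes
normal_low_transfers = transfers_with_outcomes - failed_elevated_mtdna
normal_low_rate = ongoing_pregnancies / normal_low_transfers
elevated_rate = 0 / failed_elevated_mtdna

print(f"Overall ongoing pregnancy rate:        {overall_rate:.0%}")     # ~70%
print(f"Rate for normal/low-mtDNA blastocysts: {normal_low_rate:.0%}")  # ~76%
print(f"Rate for elevated-mtDNA blastocysts:   {elevated_rate:.0%}")    # 0%
```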
The blastocysts in the study were generated by 143 couples who underwent IVF in a single clinic. All blastocysts were biopsied and shown to be chromosomally normal using preimplantation genetic screening.
“The study demonstrates that mitochondrial DNA levels are highly predictive of an embryo’s implantation potential,” Dr. Fragouli said, noting that the “very robust” findings could potentially enhance embryo selection and improve IVF outcomes.
The methodology used in the study has been extensively validated, she said. However, a randomized clinical trial will be necessary to determine the true extent of any clinical benefit, she added, noting that research is also needed to improve understanding of the biology of mtDNA expansion.
The findings are of particular interest, because while it is well known that chromosomal abnormality in embryos is common and increases with age, and is the main cause of implantation failure, it has been less clear why about a third of euploid embryos fail to produce a pregnancy.
“The combination of chromosome analysis and mitochondrial assessment may now represent the most accurate and predictive measure of embryo viability with great potential for improving IVF outcome,” according to an ESHRE press release on the findings.
Levels of mtDNA can be quickly measured using polymerase chain reaction, and next-generation sequencing can also be used, Dr. Fragouli noted. However, since aneuploidy remains the most common cause of embryo implantation failure, both mtDNA and chromosome testing would be necessary.
“Mitochondrial analysis does not replace [aneuploidy screening]. It is the combination of the two methods ... that is so powerful,” she said, noting that efforts are underway to develop an approach to assessing chromosome content and mtDNA simultaneously to reduce the extra cost.
The group has started offering mtDNA quantification clinically in the United States and has applied to the Human Fertilisation and Embryology Authority for a license to use the testing in the United Kingdom.
Reprogenetics provided funding for this study.
AT ESHRE 2016
Key clinical point: Mitochondrial DNA level may offer a way to assess embryo viability when doing in vitro fertilization.
Major finding: The ongoing pregnancy rate for euploid blastocysts was 76% for those with normal/low mtDNA levels, compared with 0% for those with elevated mtDNA levels.
Data source: A blinded prospective non-selection study of 280 blastocysts.
Disclosures: Reprogenetics provided funding for this study.
Pediatric Cancer Survivors at Increased Risk for Endocrine Abnormalities
Patients who survived pediatric-onset cancer are at increased risk for developing or experiencing endocrine abnormalities.
Risk was significantly higher in survivors who underwent high-risk therapeutic exposures compared with survivors not so exposed. Moreover, the incidence and prevalence of endocrine abnormalities increased across the lifespan of survivors, reported Sogol Mostoufi-Moab, MD, of the University of Pennsylvania, Philadelphia, and associates (J Clin Oncol. 2016 Jul. doi: 10.1200/JCO.2016.66.6545).
A total of 14,290 patients met the study’s eligibility requirements, which included a diagnosis of cancer before age 21 years and 5-year survival following diagnosis. Cancer diagnoses included leukemia, Hodgkin and non-Hodgkin lymphoma, Wilms tumor, neuroblastoma, sarcoma, bone malignancy, and central nervous system malignancy. Baseline and follow-up questionnaires collected endocrine-related outcomes of interest, demographic information, and medical histories for both cancer survivors and their siblings (n = 4,031). For survivors, median age at diagnosis was 6 years and median age at last follow-up was 32 years. For siblings, median age at last follow-up was 34 years.
Overall 44% of cancer survivors had at least one endocrinopathy, 16.7% had at least two, and 6.6% had three or more. Survivors of Hodgkin lymphoma had the highest frequency of endocrine abnormality (60.1%) followed by survivors of CNS malignancy (54%), leukemia (45.6%), sarcoma (41.3%), non-Hodgkin lymphoma (39.7%), and neuroblastoma (31.9%).
Specifically, thyroid disorders were more frequent among cancer survivors than among their siblings: underactive thyroid (hazard ratio, 2.2; 95% confidence interval, 1.8-2.7), overactive thyroid (HR, 2.4; 95% CI, 1.7-3.3), thyroid nodules (HR, 3.9; 95% CI, 2.9-5.4), and thyroid cancer (HR, 2.5; 95% CI, 1.2-5.3).
Compared with their siblings, cancer survivors showed an increased risk of developing diabetes (RR, 1.8; 95% CI, 1.4-2.3).
Among survivors, those exposed to high-risk therapies (defined by the Children’s Oncology Group’s Long-Term Follow-Up Guidelines for Survivors of Childhood, Adolescent, and Young Adult Cancers) were at greater risk of developing primary hypothyroidism (HR, 6.6; 95% CI, 5.6-7.8), central hypothyroidism (HR, 3.9; 95% CI, 2.9-5.2), an overactive thyroid (HR, 1.8; 95% CI, 1.2-2.8), thyroid nodules (HR, 6.3; 95% CI, 5.2-7.5), and thyroid cancer (HR, 9.2; 95% CI, 6.2-13.7) compared with survivors not so exposed.
The National Cancer Institute, the Cancer Center Support Grant, and the American Lebanese Syrian Associated Charities of St. Jude Children’s Research Hospital funded the study. Dr. Mostoufi-Moab and nine other investigators had no disclosures to report. Two investigators reported receiving financial compensation or honoraria from Merck or Sandoz.
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Pediatric cancer survivors at increased risk for endocrine abnormalities
Patients who survived pediatric-onset cancer are at increased risk for developing endocrine abnormalities.
Risk was significantly higher in survivors who underwent high-risk therapeutic exposures than in survivors not so exposed. Moreover, the incidence and prevalence of endocrine abnormalities increased across the lifespan of survivors, reported Sogol Mostoufi-Moab, MD, of the University of Pennsylvania, Philadelphia, and her associates (J Clin Oncol. 2016 Jul. doi: 10.1200/JCO.2016.66.6545).
A total of 14,290 patients met the study’s eligibility requirements, which included a diagnosis of cancer before age 21 years and 5-year survival following diagnosis. Cancer diagnoses included leukemia, Hodgkin and non-Hodgkin lymphoma, Wilms tumor, neuroblastoma, sarcoma, bone malignancy, and central nervous system malignancy. Baseline and follow-up questionnaires collected endocrine-related outcomes of interest, demographic information, and medical histories for both cancer survivors and their siblings (n = 4,031). For survivors, median age at diagnosis was 6 years and median age at last follow-up was 32 years. For siblings, median age at last follow-up was 34 years.
Overall, 44% of cancer survivors had at least one endocrinopathy, 16.7% had at least two, and 6.6% had three or more. Survivors of Hodgkin lymphoma had the highest frequency of endocrine abnormality (60.1%), followed by survivors of CNS malignancy (54%), leukemia (45.6%), sarcoma (41.3%), non-Hodgkin lymphoma (39.7%), and neuroblastoma (31.9%).
Specifically, thyroid disorders were more frequent among cancer survivors than among their siblings: underactive thyroid (hazard ratio, 2.2; 95% confidence interval, 1.8-2.7), overactive thyroid (HR, 2.4; 95% CI, 1.7-3.3), thyroid nodules (HR, 3.9; 95% CI, 2.9-5.4), and thyroid cancer (HR, 2.5; 95% CI, 1.2-5.3).
Compared with their siblings, cancer survivors also showed an increased risk of developing diabetes (RR, 1.8; 95% CI, 1.4-2.3).
Among survivors, those exposed to high-risk therapies (defined by the Children’s Oncology Group’s Long-Term Follow-Up Guidelines for Survivors of Childhood, Adolescent, and Young Adult Cancers) were at greater risk of developing primary hypothyroidism (HR, 6.6; 95% CI, 5.6-7.8), central hypothyroidism (HR, 3.9; 95% CI, 2.9-5.2), an overactive thyroid (HR, 1.8; 95% CI, 1.2-2.8), thyroid nodules (HR, 6.3; 95% CI, 5.2-7.5), and thyroid cancer (HR, 9.2; 95% CI, 6.2-13.7) compared with survivors not so exposed.
The National Cancer Institute, the Cancer Center Support Grant, and the American Lebanese Syrian Associated Charities of St. Jude Children’s Research Hospital funded the study. Dr. Mostoufi-Moab and nine other investigators had no disclosures to report. Two investigators reported receiving financial compensation or honoraria from Merck or Sandoz.
On Twitter @jessnicolecraig
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: Survivors of pediatric-onset cancer are at increased risk for developing endocrine abnormalities.
Major finding: Overall, 44% of childhood cancer survivors had at least one endocrinopathy. Survivors of Hodgkin lymphoma had the highest frequency of endocrine abnormality (60.1%) followed by survivors of CNS malignancy (54%), leukemia (45.6%), sarcoma (41.3%), non-Hodgkin lymphoma (39.7%), and neuroblastoma (31.9%).
Data source: A multi-institutional retrospective study of 14,290 men and women who survived pediatric cancer.
Disclosures: The National Cancer Institute, the Cancer Center Support Grant, and the American Lebanese Syrian Associated Charities of St. Jude Children’s Research Hospital funded the study. Dr. Mostoufi-Moab and nine other investigators had no disclosures to report. Two investigators reported receiving financial compensation or honoraria from Merck or Sandoz.
Emergency Ultrasound: Ultrasound-Guided Ulnar, Median, and Radial Nerve Blocks
Emergency physicians (EPs) have traditionally used the landmark technique to block the radial, ulnar, and median nerves at the wrist (Figure 1). Many times, however, there is a need to perform the block more proximally. Performing these blocks with real-time ultrasound guidance allows the clinician to visually target the nerve, requires less anesthetic agent, and helps to avoid vascular structures. As with any procedure, employing the appropriate technique, along with practice, increases the success of the block.
Patient Selection
Before performing a nerve block, the EP must first determine if the patient is an appropriate candidate. The EP should be cautious in performing a nerve block on any patient who has paresthesias, tingling, or weakness, as the block will complicate further examinations. Likewise, a nerve block may be contraindicated in a patient in whom compartment syndrome is a concern, since the analgesic effect will inhibit the patient’s ability to sense increasing pain or worsening paresthesias.
Equipment and Preprocedure Care
An ultrasound-guided nerve block is performed using a linear high-frequency probe. Prior to the procedure, standard infection-control measures should be taken—ie, thoroughly cleaning the injection site and using a transducer-probe cover. Regarding the choice of anesthetic, either bupivacaine or lidocaine is appropriate; however, bupivacaine provides a longer duration of analgesia. To administer the anesthetic, we typically use a regular cutting needle or a spinal needle. Although there is a paucity of data on needle-tip selection, the literature generally favors noncutting tips or tips with short bevels, which may decrease the chance of intraneural injection and consequent nerve injury.
Single- Versus Two-Person Technique
Peripheral nerve blocks can be performed using either a single- or two-person technique. In the single-person technique, the operator manipulates both the probe and the syringe. The two-person technique requires tubing between the needle and the syringe; this can be a short section of intravenous (IV) tubing or two pieces of tubing (the type traditionally placed on IV catheters) connected together. The operator holds the needle and the probe while the syringe and injection are controlled by the second person. Then, with the ultrasound machine set to the nerve or soft-tissue preset, the scan begins by placing the probe in a transverse orientation.
Nerve Location and Identification
As previously noted, the ulnar, median, and radial nerves have traditionally been identified through use of the landmark technique just proximal to the wrist. The nerves can be located initially at these sites and then traced proximally.
Ulnar Nerve
The ulnar nerve is located on the ulnar side of the forearm, just proximal to the wrist (Figure 2a and 2b). The clinician should begin by fanning the probe at the wrist to find the ulnar artery and locate the nerve bundle; the ulnar nerve lies on the ulnar side of the ulnar artery. The nerve will diverge from the path of the artery as it is traced proximally. To decrease the chance of arterial injection or injury, the clinician should perform the block where these two structures have separated.
Median Nerve
The clinician can employ the landmark approach to help find the nerve; the scan should then begin at the carpal tunnel. On ultrasound, the tendons in the carpal tunnel will appear similar to nerves (ie, round and hyperechoic) compared with surrounding muscle. As one continues to slide the probe up the forearm, the tendons give way to muscle bellies and a single hyperechoic structure remains—the median nerve, running between the flexor digitorum superficialis and the flexor digitorum profundus (Figure 3a and 3b). Since there is no artery alongside the median nerve, it can be traced proximally and the block performed at any convenient location.
Radial Nerve
Of the three nerves, the radial nerve is the most challenging to visualize on ultrasound. There are two approaches to performing a radial nerve block. In the first approach, the radial nerve can be found just proximal to the wrist crease on the radial side of the radial artery (Figure 4a and 4b). This nerve is typically much smaller and harder to visualize at this level; it can be traced proximally and the block performed at this location. In the second approach, the radial nerve can be located 3 to 4 cm proximal to the elbow with the probe located anterolaterally (Figure 5a and 5b). In this location, the radial nerve lies between the brachialis and the brachioradialis muscles. In this approach, the nerve is much larger and easier to visualize.
Performing the Block
Prior to performing a block of the ulnar, median, or radial nerve at the wrist, the clinician should first place the patient in a sitting or supine position with the appropriate elbow extended. When blocking the radial nerve above the elbow, the hand is typically placed in a resting position on the patient’s abdomen. When localizing the nerve, the angle of the transducer can change the appearance of the nerve dramatically. To ensure the best possible view, the clinician should slowly “rock” the probe back and forth 10° to 20° in plane with the long axis of the arm, keeping the probe as perpendicular as possible to the nerve. Once the nerve is identified, the clinician can follow it up and down the forearm with the probe to identify the best site for the block: a clear, superficial path that avoids any vascular structures. We prefer an in-plane technique so that the entire needle can be visualized as it approaches the nerve. Once the site has been determined, the clinician should slowly inject 4 to 5 cc of anesthetic around the nerve, with the objective of partially surrounding it; complete circumferential spread is not necessary for a successful block. The clinician should stop immediately if the patient reports pain or if there is increased resistance, because either could indicate an intraneural injection.
Summary
Ultrasound-guided peripheral nerve blocks are an excellent option for providing regional anesthesia for lacerations and wounds that are too large to manage with local anesthetic infiltration. The technique can provide better analgesic relief, enhancing patient care.
Does Optic Nerve Sheath Diameter Ultrasonography Permit Accurate Detection of Real-Time Changes in ICP?
Case Scenarios
Case 1
While you were working abroad in a resource-limited environment, a patient was brought in after falling and hitting his head. Initially, the patient was awake and alert, but he gradually became minimally responsive, with a Glasgow Coma Scale score of 9. Your facility did not have computed tomography (CT) or magnetic resonance imaging (MRI), but it did have a point-of-care ultrasound (US) machine. You measured the patient’s optic nerve sheath diameter (ONSD) with US and found a diameter of 4.5 mm in each eye. Given this clinical change, you wondered whether repeat US scans would detect increasing intracranial pressure (ICP) and reflect changes in the patient’s condition.
Case 2
A patient who presented with an intracranial hemorrhage was treated with hypertonic saline and was awaiting neurosurgical placement of an extraventricular drain. During this time, a resident who was on a US rotation asked you whether she would be able to detect changes in the patient’s ICP using US rather than placing an invasive device. How do you respond?
In adults, ICP is normally 10 to 15 mm Hg. It may be pathologically increased in several life-threatening conditions, including traumatic brain injury (TBI), subarachnoid hemorrhage, central venous thrombosis, brain tumor, and abscess. It is also increased by nonacute pathology, such as idiopathic intracranial hypertension (IIH), which also is known as pseudotumor cerebri. In patients with acute pathology, ICP above 20 mm Hg is generally considered an indication for treatment.1 Indications for ICP monitoring in TBI include positive CT findings, patient age greater than 40 years, systemic hypotension, or abnormal flexion/extension in response to pain.2 Other reasons to monitor ICP include the management of pseudotumor cerebri or after ventriculoperitoneal shunt surgery.3
Unfortunately, current methods of ICP monitoring have significant drawbacks and limitations. The gold standard of ICP monitoring—measurement using an intraventricular catheter—increases the risks of infection and hemorrhage, requires the skill of a neurosurgeon, and may be contraindicated because of coagulopathy or thrombocytopenia. It also cannot be performed in the prehospital setting and can be done only to a limited extent in the ED.4
Computed tomography scans and MRI can assess elevated ICP, but these tests are expensive, may increase patient radiation exposure, require patient transport, and may not always detect raised ICP. In the appropriate clinical context, signs on physical examination, such as decorticate/decerebrate posturing, papilledema, or fixed/dilated pupils, may be highly suggestive of increased ICP, but their sensitivity and specificity are inadequate. Delay in diagnosis is another drawback of imaging and physical examination, as findings may not appear until ICP has been persistently elevated.
Given the disadvantages of current means of assessing elevated ICP, several noninvasive methods of measuring ICP are being investigated. These include such techniques as transcranial Doppler, electroencephalogram, pupillometry, and ONSD measurements.5 This article reviews current applications of ultrasonography measurements of the ONSD in assessing elevations in ICP.
ONSD US
Assessment of ICP via measurement of the ONSD has attracted increasing attention, particularly in emergency medicine. Measurements of the ONSD are possible with CT, MRI, and US. Of these modalities, ONSD US has attracted the most interest, due to its low cost, wide availability, and rapidity. It does not require patient transport, and does not expose a patient to additional radiation. In addition, ONSD US has been utilized in low-resource settings, and may be particularly useful in prehospital and mass-casualty situations.6
The underlying relationship between ONSD and ICP is a result of the enclosure of the subarachnoid space by the ONS. Increased ICP leads to expansion of the ONS, particularly at 3 mm behind the globe, in the retrobulbar compartment (Figures 1 and 2).7
Unfortunately, it is not possible to precisely determine ICP from an ONSD measurement, because baseline ONSD values and elasticity vary significantly within the population.4,8 As a result, ONSD US has been investigated mostly for its ability to detect qualitative changes—particularly as a screen for elevated ICP. Optic nerve sheath diameter has high discriminative value in distinguishing normal from elevated ICP. In a meta-analysis, Dubourg et al9 showed that the technique had an area under the summary receiver-operating curve of 0.94, signifying excellent accuracy for diagnosing elevated ICP.
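As a point of interpretation (this reading is ours, not stated in the meta-analysis), an area under the receiver-operating curve of this size corresponds to the probability that a randomly selected patient with elevated ICP will have a larger ONSD than a randomly selected patient with normal ICP:

$$\mathrm{AUC} = P\left(\mathrm{ONSD}_{\text{elevated ICP}} > \mathrm{ONSD}_{\text{normal ICP}}\right) \approx 0.94$$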
Researchers have attempted to determine a threshold value of ONSD that would serve as a clinically useful predictor of elevated ICP. Currently, this value ranges from 4.8 to 5.9 mm, depending on the study9; 5 mm is commonly used clinically as a threshold.10
Using ONSD US to Monitor Rapid Changes in ICP
While the use of the ONSD technique to screen for elevated ICP is relatively well established, the use of ONSD US to track acute changes in ICP is not as well studied. Serial tracking of acute changes could be useful in a patient at risk for intracranial hypertension secondary to trauma, to monitor the results of treating a patient with IIH, or after ventriculoperitoneal shunt placement.3
In Vivo Data
In 1993, Tamburrelli et al11 performed the first ONSD intrathecal infusion study, using A-scan sonography, and concluded that there was a “direct, biphasic, positive relation between diastolic intracranial pressure and optic nerve diameters” and that the data showed “rapid changes of optic nerve diameters in response to variation of intracranial pressure.”
In 1997, Hansen and Helmke12 recorded ONSD versus ICP data in the first intrathecal infusion test to use B-scan mode sonography. Ultrasonography was performed at 2- to 4-minute intervals. Their data demonstrated a linear relationship between ICP and ONSD over a particular cerebrospinal fluid pressure interval. They noted that “this interval differed between patients: ONS dilation commenced at pressure thresholds between 15 mm Hg and 30 mm Hg and in some patients saturation of the response (constant ONSD) occurred between 30 mm Hg and 40 mm Hg.”
The slope of the ONSD versus ICP curve varied considerably by patient, making it impossible to infer an absolute ICP value from an ONSD measurement without prior knowledge of the patient’s individual ONSD-to-ICP relationship. Similar to the data from Tamburrelli et al,11 Hansen and Helmke12 also found that there was no lag in the ONSD response to ICP: “Within this interval, no temporal delay of the ONS response was noted.”
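Taken together, these intrathecal infusion data can be summarized schematically (the notation here is ours, not the investigators’) as a patient-specific, roughly piecewise-linear response:

$$\mathrm{ONSD}(\mathrm{ICP}) \approx \begin{cases} \mathrm{ONSD}_0, & \mathrm{ICP} < P_{\mathrm{onset}} \\ \mathrm{ONSD}_0 + k\,(\mathrm{ICP} - P_{\mathrm{onset}}), & P_{\mathrm{onset}} \le \mathrm{ICP} \le P_{\mathrm{sat}} \\ \mathrm{ONSD}_{\mathrm{max}}, & \mathrm{ICP} > P_{\mathrm{sat}} \end{cases}$$

where the baseline diameter, the slope, the onset threshold (roughly 15 to 30 mm Hg), and the saturation pressure (roughly 30 to 40 mm Hg) all vary between patients, which is why a single ONSD measurement cannot be converted into an absolute ICP value.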
The only study comparing real-time ONSD data with gold-standard measurements of rapidly changing ICP in humans was performed by Maissan et al13 in 2015. This study involved a cohort of 18 patients who had suffered TBI and had intraparenchymal probes inserted. Because ICP rises transiently during endotracheal tube suctioning due to irritation of the trachea, the increase and subsequent decrease after suctioning provided an ideal opportunity to perform ONSD measurements and compare them with simultaneous gold-standard ICP measurements. The ONSD US measurements were performed 30 to 60 seconds prior to suctioning, during suctioning, and 30 to 60 seconds after suctioning.
Even during this very rapid time course, a strong correlation between ICP and ONSD measurements was demonstrated. The R2 value was 0.80. There was no perceptible “lag” in ONSD change; changes in ICP were immediately reflected in ONSD. Notably, an absolute change of less than 8 to 10 mm Hg in ICP did not affect ONSD, which is consistent with data collected by Hansen and Helmke.12
Therapeutic Lumbar Puncture for IIH
There are two case reports of ONSD US measurements taken pre- and postlumbar puncture (LP) in patients with IIH. In the first, published in 1989, Galetta et al14 used A-scan US to measure pre- and post-LP ONSD in a woman with papilledema secondary to IIH. They found a significant reduction in ONSD bilaterally “within minutes” of performing the LP.14
The second case report was published in 2015 by Singleton et al.15 They recorded ONSD measurements 30 minutes pre- and post-LP in a woman who presented to the ED with symptoms from elevated ICP. After reduction of pressure via LP, they recorded a significant reduction in ONSD bilaterally.15
Cadaver Data
Hansen et al16 evaluated the distensibility and elasticity of the ONS using postmortem optic nerve preparations. The ONSD was recorded 200 seconds after each pressure increase, which was long enough to achieve stable diameters. They found a linear correlation between ONSD and pressure increases of 5 to 45 mm Hg. These findings suggest that ONSD should likewise change in proportion to ICP in vivo, although further clinical study is needed to confirm measurable changes in living patients.
Conclusion
Published data have consistently demonstrated that changes in ICP are rapidly transmitted to the optic nerve sheath and that there does not appear to be a temporal lag in the ONSD. Based on in vivo data, the relationship between ICP and ONSD appears to be linear only over a range of moderately elevated ICP. According to Hansen and Helmke,12 this range starts at approximately 18 to 30 mm Hg, and ends at approximately 40 to 45 mm Hg. Maissan et al13 observed similar findings: “At low levels, ICP changes (8-10 mm Hg) do not affect the ONSD.”
There is still need for additional research to validate and refine these findings. Only one study has compared gold-standard ICP measurements with ONSD US measurements in real time,13 and the literature on ONSD US in tracking ICP after therapeutic LP in IIH consists of only two case reports.
Thus, with some caveats, ONSD US appears to permit qualitative tracking of ICP in real time. This supports its use in situations where a patient may have rapidly changing ICP, such as close monitoring of patients at risk for elevated ICP in a critical care setting and assessment of the response to treatment in patients with IIH.
1. Stocchetti N, Maas AI. Traumatic intracranial hypertension. N Engl J Med. 2014;370(22):2121-2130.
2. Brain Trauma Foundation; American Association of Neurological Surgeons; Congress of Neurological Surgeons; et al. Guidelines for the management of severe traumatic brain injury. VI. Indications for intracranial pressure monitoring. J Neurotrauma. 2007;24(Suppl 1):S37-S44.
3. Choi SH, Min KT, Park EK, Kim MS, Jung JH, Kim H. Ultrasonography of the optic nerve sheath to assess intracranial pressure changes after ventriculo-peritoneal shunt surgery in children with hydrocephalus: a prospective observational study. Anaesthesia. 2015;70(11):1268-1273.
4. Kristiansson H, Nissborg E, Bartek J Jr, Andresen M, Reinstrup P, Romner B. Measuring elevated intracranial pressure through noninvasive methods: a review of the literature. J Neurosurg Anesthesiol. 2013;25(4):372-385.
5. Rajajee V, Thyagarajan P, Rajagopalan RE. Optic nerve ultrasonography for detection of raised intracranial pressure when invasive monitoring is unavailable. Neurol India. 2010;58(5):812-813.
6. Robba C, Baciqaluppi S, Cardim D, Donnelly J, Bertuccio A, Czosnyka M. Non-invasive assessment of intracranial pressure. Acta Neurol Scand. 2016;134(1):4-21.
7. Hansen HC, Helmke K. The subarachnoid space surrounding the optic nerves. An ultrasound study of the optic nerve sheath. Surg Radiol Anat. 1996;18(4):323-328.
8. Hansen HC, Lagrèze W, Krueger O, Helmke K. Dependence of the optic nerve sheath diameter on acutely applied subarachnoidal pressure - an experimental ultrasound study. Acta Ophthalmol. 2011;89(6):e528-e532.
9. Dubourg J, Javouhey E, Geeraerts T, Messerer M, Kassai B. Ultrasonography of optic nerve sheath diameter for detection of raised intracranial pressure: a systematic review and meta-analysis. Intensive Care Med. 2011;37(7):1059-1068.
10. Kimberly HH, Shah S, Marill K, Noble V. Correlation of optic nerve sheath diameter with direct measurement of intracranial pressure. Acad Emerg Med. 2008;15(2):201-204.
11. Tamburrelli C, Anile C, Mangiola A, Falsini B, Palma P. CSF dynamic parameters and changes of optic nerve diameters measured by standardized echography. In: Till P, ed. Ophthalmic Echography 13: Proceedings of the 13th SIDUO Congress, Vienna, Austria, 1990; vol 55. Dordrecht, Netherlands: Springer Netherlands; 1993:101-109.
12. Hansen HC, Helmke K. Validation of the optic nerve sheath response to changing cerebrospinal fluid pressure: ultrasound findings during intrathecal infusion tests. J Neurosurg. 1997;87(1):34-40.
13. Maissan IM, Dirven PJ, Haitsma IK, Hoeks SE, Gommers D, Stolker RJ. Ultrasonographic measured optic nerve sheath diameter as an accurate and quick monitor for changes in intracranial pressure. J Neurosurg. 2015;123(3):743-747.
14. Galetta S, Byrne SF, Smith JL. Echographic correlation of optic nerve sheath size and cerebrospinal fluid pressure. J Clin Neuroophthalmol. 1989;9(2):79-82.
15. Singleton J, Dagan A, Edlow JA, Hoffmann B. Real-time optic nerve sheath diameter reduction measured with bedside ultrasound after therapeutic lumbar puncture in a patient with idiopathic intracranial hypertension. Am J Emerg Med. 2015;33(6):860.e5-e7.
16. Hansen HC, Lagrèze W, Krueger O, Helmke K. Dependence of the optic nerve sheath diameter on acutely applied subarachnoidal pressure—an experimental ultrasound study. Acta Ophthalmol. 2011;89(6):e528-e532.
Case Scenarios
Case 1
While working abroad in a resource-limited environment, a patient was brought in after falling and hitting his head. Initially, the patient was awake and alert, but he gradually became minimally responsive, with a Glasgow Coma Scale score of 9. Your facility did not have computed tomography (CT) or magnetic resonance imaging (MRI), but did have a point-of-care ultrasound (US) machine. You measured the patient’s optic nerve sheath diameter (ONSD) with the US and found a diameter of 4.5 mm in each eye. With this clinical change, you wondered if repeat US scans to detect increasing intracranial pressure (ICP) would represent changes in the patient’s condition.
Case 2
A patient who presented with an intracranial hemorrhage was treated with hypertonic saline and was awaiting neurosurgical placement of an extraventicular drain. During this time, a resident who was on a US rotation asked you if she would be able to detect changes in the patient’s ICP using US rather than placing an invasive device. How do you respond?
In adults, ICP is normally 10 to 15 mm Hg. It may be pathologically increased in several life-threatening conditions, including traumatic brain injury (TBI), subarachnoid hemorrhage, central venous thrombosis, brain tumor, and abscess. It is also increased by nonacute pathology, such as idiopathic intracranial hypertension (IIH), which also is known as pseudotumor cerebri. In patients with acute pathology, ICP above 20 mm Hg is generally considered an indication for treatment.1 Indications for ICP monitoring in TBI include positive CT findings, patient age greater than 40 years, systemic hypotension, or abnormal flexion/extension in response to pain.2 Other reasons to monitor ICP include the management of pseudotumor cerebri or after ventriculoperitoneal shunt surgery.3
Unfortunately, current methods of ICP monitoring have significant drawbacks and limitations. The gold standard of ICP monitoring—measurement using an intraventricular catheter—increases the risks of infection and hemorrhage, requires the skill of a neurosurgeon, and may be contraindicated by coagulopathy or thrombocytopenia. It also cannot be performed in the prehospital setting and can be used only to a limited extent in the ED.4
Computed tomography and MRI can assess elevated ICP, but these tests are expensive, require patient transport, expose the patient to radiation (in the case of CT), and may not always detect raised ICP. In the appropriate clinical context, physical examination findings such as decorticate/decerebrate posturing, papilledema, or fixed/dilated pupils may be highly suggestive of increased ICP, but their sensitivity and specificity are inadequate. Delayed diagnosis is another drawback of imaging and physical examination, as findings may not appear until ICP has been persistently elevated.
Given the disadvantages of current means of assessing elevated ICP, several noninvasive methods of measuring ICP are being investigated, including transcranial Doppler ultrasonography, electroencephalography, pupillometry, and ONSD measurement.5 This article reviews current applications of ultrasonography measurement of the ONSD in assessing elevations in ICP.
ONSD US
Assessment of ICP via measurement of the ONSD has attracted increasing attention, particularly in emergency medicine. Measurements of the ONSD are possible with CT, MRI, and US. Of these modalities, ONSD US has attracted the most interest, due to its low cost, wide availability, and rapidity. It does not require patient transport, and does not expose a patient to additional radiation. In addition, ONSD US has been utilized in low-resource settings, and may be particularly useful in prehospital and mass-casualty situations.6
The relationship between ONSD and ICP arises because the ONS encloses a portion of the subarachnoid space that is continuous with the intracranial subarachnoid space. Increased ICP is therefore transmitted into the sheath and distends it, particularly at 3 mm behind the globe, in the retrobulbar compartment (Figures 1 and 2).7
Unfortunately, it is not possible to determine ICP precisely from an ONSD measurement, because baseline ONSD and sheath elasticity vary significantly within the population.4,8 As a result, ONSD US has been investigated mostly for its ability to detect qualitative changes—particularly as a screen for elevated ICP. Optic nerve sheath diameter discriminates well between normal and elevated ICP: in a meta-analysis, Dubourg et al9 reported an area under the summary receiver operating characteristic curve of 0.94, signifying excellent accuracy for diagnosing elevated ICP.
Researchers have attempted to determine a threshold value of ONSD that would serve as a clinically useful predictor of elevated ICP. Currently, this value ranges from 4.8 to 5.9 mm, depending on the study9; 5 mm is commonly used clinically as a threshold.10
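To make the screening logic concrete, the minimal Python sketch below flags a measurement that exceeds a configurable cutoff. The 5.0 mm default reflects the commonly cited clinical threshold noted above; the function name, the choice to average both eyes, and the example values are illustrative assumptions, not part of any cited protocol.

```python
# Illustrative only: threshold-based ONSD screening as described above.
# The 5.0 mm default is the commonly cited clinical cutoff; published
# thresholds range from roughly 4.8 to 5.9 mm depending on the study.

def onsd_screen(onsd_left_mm: float, onsd_right_mm: float,
                threshold_mm: float = 5.0) -> bool:
    """Return True if the mean binocular ONSD exceeds the screening threshold."""
    # Averaging both eyes is one convention, assumed here for illustration.
    mean_onsd = (onsd_left_mm + onsd_right_mm) / 2.0
    return mean_onsd > threshold_mm

# Hypothetical measurements (mm); a positive screen prompts further
# evaluation for elevated ICP rather than a definitive diagnosis.
print(onsd_screen(5.4, 5.6))  # True  -> concerning for elevated ICP
print(onsd_screen(4.2, 4.4))  # False -> below the screening threshold
```

Raising the cutoff makes such a screen more specific but less sensitive, so the threshold chosen in practice depends on the clinical context.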
Using ONSD US to Monitor Rapid Changes in ICP
While the use of ONSD US to screen for elevated ICP is relatively well established, its use to track acute changes in ICP is less well studied. Serial tracking of acute changes could be useful in a patient at risk for intracranial hypertension secondary to trauma, in monitoring the response to treatment in a patient with IIH, or in following a patient after ventriculoperitoneal shunt placement.3
In Vivo Data
In 1993, Tamburrelli et al11 performed the first ONSD intrathecal infusion study, using A-scan sonography, and concluded that there was a “direct, biphasic, positive relation between diastolic intracranial pressure and optic nerve diameters” and that the data showed “rapid changes of optic nerve diameters in response to variation of intracranial pressure.”
In 1997, Hansen and Helmke12 recorded ONSD versus ICP data in the first intrathecal infusion test to use B-scan mode sonography. Ultrasonography was performed at 2- to 4-minute intervals. Their data demonstrated a linear relationship between ICP and ONSD over a particular cerebrospinal fluid pressure interval. They noted that “this interval differed between patients: ONS dilation commenced at pressure thresholds between 15 mm Hg and 30 mm Hg and in some patients saturation of the response (constant ONSD) occurred between 30 mm Hg and 40 mm Hg.”
The slope of the ONSD-versus-ICP curve varied considerably among patients, making it impossible to infer an absolute ICP value from an ONSD measurement without prior knowledge of that patient’s individual relationship. Similar to the data from Tamburrelli et al,11 Hansen and Helmke12 also found no lag in the ONSD response to ICP: “Within this interval, no temporal delay of the ONS response was noted.”
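As a rough illustration of the shape of this relationship, the sketch below (a toy model, not the authors’ analysis) treats the ONSD response as piecewise linear: flat below a patient-specific onset pressure, linear through the mid-range, and saturated above it. The baseline, onset, saturation, and slope values are hypothetical placeholders chosen only to fall within the ranges quoted above.

```python
# Toy piecewise-linear model of the ONSD response described by Hansen and
# Helmke: no dilation below an onset pressure, roughly linear dilation
# through the mid-range, and a plateau (saturation) at high ICP.
# All parameter values are illustrative assumptions, not measured data.

def onsd_model(icp_mm_hg: float,
               baseline_onsd_mm: float = 4.5,    # assumed patient baseline
               onset_mm_hg: float = 20.0,        # onset threshold (study range: 15-30 mm Hg)
               saturation_mm_hg: float = 38.0,   # plateau onset (study range: 30-40 mm Hg)
               slope_mm_per_mm_hg: float = 0.05) -> float:
    """Estimated ONSD (mm) for a given ICP under this toy model."""
    effective = min(max(icp_mm_hg, onset_mm_hg), saturation_mm_hg)
    return baseline_onsd_mm + slope_mm_per_mm_hg * (effective - onset_mm_hg)

for icp in (10, 20, 30, 40, 50):
    print(icp, round(onsd_model(icp), 2))  # flat, then rising, then plateaued
```

Because baseline, onset, and slope differ from patient to patient, the same ONSD can correspond to very different absolute pressures, which is why the technique is better suited to tracking relative change than to inferring an absolute ICP.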
The only study comparing real-time ONSD data to gold-standard measurements of rapidly changing ICP in humans was performed by Maissan et al13 in 2015. This study involved a cohort of 18 patients who had suffered TBI and had intraparenchymal ICP probes in place. Because ICP rises transiently during endotracheal tube suctioning owing to irritation of the trachea, the rise and subsequent fall in pressure provided an ideal window in which to perform ONSD measurements and compare them with simultaneous gold-standard ICP readings. The ONSD US measurements were performed 30 to 60 seconds before suctioning, during suctioning, and 30 to 60 seconds after suctioning.
Even over this very rapid time course, ICP and ONSD measurements were strongly correlated (R² = 0.80). There was no perceptible “lag” in ONSD change; changes in ICP were immediately reflected in ONSD. Notably, an absolute change of less than 8 to 10 mm Hg in ICP did not affect ONSD, which is consistent with the data collected by Hansen and Helmke.12
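For readers less familiar with the statistic, the short sketch below computes a coefficient of determination (R²) from paired ICP and ONSD readings; the numbers are invented for illustration and do not reproduce the Maissan et al data set.

```python
# Illustrative calculation of R^2 for paired ICP (mm Hg) and ONSD (mm)
# measurements. The values are invented and do not reproduce the
# Maissan et al data.
import numpy as np

icp = np.array([12.0, 18.0, 24.0, 30.0, 36.0])   # hypothetical ICP readings
onsd = np.array([4.6, 4.9, 5.3, 5.6, 6.1])       # hypothetical paired ONSD readings

slope, intercept = np.polyfit(icp, onsd, 1)      # least-squares regression line
predicted = slope * icp + intercept
ss_res = np.sum((onsd - predicted) ** 2)         # residual sum of squares
ss_tot = np.sum((onsd - onsd.mean()) ** 2)       # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 2))  # values near 1 indicate a tight linear relationship
```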
Therapeutic Lumbar Puncture for IIH
There are two case reports of ONSD US measurements taken pre- and postlumbar puncture (LP) in patients with IIH. In the first, published in 1989, Galetta et al14 used A-scan US to measure ONSD pre- and post-LP in a woman with papilledema secondary to IIH. They found a significant reduction in ONSD bilaterally “within minutes” of performing the LP.14
The second case report was published in 2015 by Singleton et al.15 They recorded ONSD measurements 30 minutes pre- and post-LP in a woman who presented to the ED with symptoms from elevated ICP. After reduction of pressure via LP, they recorded a significant reduction in ONSD bilaterally.15
Cadaver Data
Hansen et al16 evaluated the distensibility and elasticity of the ONS using postmortem optic nerve preparations. The ONSD was recorded 200 seconds after each pressure increase, which was long enough to achieve stable diameters. They found a linear correlation between ONSD and pressure increases over the range of 5 to 45 mm Hg. These findings suggest that ONSD should change in a positively correlated fashion with in vivo changes in ICP, although further clinical study is needed to confirm how such changes manifest in living patients.
Conclusion
Published data have consistently demonstrated that changes in ICP are rapidly transmitted to the optic nerve sheath and that there does not appear to be a temporal lag in the ONSD. Based on in vivo data, the relationship between ICP and ONSD appears to be linear only over a range of moderately elevated ICP. According to Hansen and Helmke,12 this range starts at approximately 18 to 30 mm Hg, and ends at approximately 40 to 45 mm Hg. Maissan et al13 observed similar findings: “At low levels, ICP changes (8-10 mm Hg) do not affect the ONSD.”
There is still a need for additional research to validate and refine these findings. Only one study has compared gold-standard ICP measurements with ONSD US measurements in real time,13 and the literature on ONSD US in tracking ICP after therapeutic LP in IIH consists of only two case reports.
Thus, with some caveats, ONSD US appears to permit qualitative tracking of ICP in real time. This supports its use in situations where a patient may have rapidly changing ICP, such as close monitoring of patients at risk for elevated ICP in a critical care setting and assessment of the response to treatment in patients with IIH.
1. Stocchetti N, Maas AI. Traumatic intracranial hypertension. N Engl J Med. 2014;370(22):2121-2130.
2. Brain Trauma Foundation; American Association of Neurological Surgeons; Congress of Neurological Surgeons; et al. Guidelines for the management of severe traumatic brain injury. VI. Indications for intracranial pressure monitoring. J Neurotrauma. 2007;24(Suppl 1):S37-S44.
3. Choi SH, Min KT, Park EK, Kim MS, Jung JH, Kim H. Ultrasonography of the optic nerve sheath to assess intracranial pressure changes after ventriculo-peritoneal shunt surgery in children with hydrocephalus: a prospective observational study. Anaesthesia. 2015;70(11):1268-1273.
4. Kristiansson H, Nissborg E, Bartek J Jr, Andresen M, Reinstrup P, Romner B. Measuring elevated intracranial pressure through noninvasive methods: a review of the literature. J Neurosurg Anesthesiol. 2013;25(4):372-385.
5. Rajajee V, Thyagarajan P, Rajagopalan RE. Optic nerve ultrasonography for detection of raised intracranial pressure when invasive monitoring is unavailable. Neurol India. 2010;58(5):812-813.
6. Robba C, Bacigaluppi S, Cardim D, Donnelly J, Bertuccio A, Czosnyka M. Non-invasive assessment of intracranial pressure. Acta Neurol Scand. 2016;134(1):4-21.
7. Hansen HC, Helmke K. The subarachnoid space surrounding the optic nerves. An ultrasound study of the optic nerve sheath. Surg Radiol Anat. 1996;18(4):323-328.
8. Hansen HC, Lagrèze W, Krueger O, Helmke K. Dependence of the optic nerve sheath diameter on acutely applied subarachnoidal pressure—an experimental ultrasound study. Acta Ophthalmol. 2011;89(6):e528-e532.
9. Dubourg J, Javouhey E, Geeraerts T, Messerer M, Kassai B. Ultrasonography of optic nerve sheath diameter for detection of raised intracranial pressure: a systematic review and meta-analysis. Intensive Care Med. 2011;37(7):1059-1068.
10. Kimberly HH, Shah S, Marill K, Noble V. Correlation of optic nerve sheath diameter with direct measurement of intracranial pressure. Acad Emerg Med. 2008;15(2):201-204.
11. Tamburrelli C, Anile C, Mangiola A, Falsini B, Palma P. CSF dynamic parameters and changes of optic nerve diameters measured by standardized echography. In: Till P, ed. Ophthalmic Echography 13: Proceedings of the 13th SIDUO Congress, Vienna, Austria, 1990; vol 55. Dordrecht, Netherlands: Springer Netherlands; 1993:101-109.
12. Hansen HC, Helmke K. Validation of the optic nerve sheath response to changing cerebrospinal fluid pressure: ultrasound findings during intrathecal infusion tests. J Neurosurg. 1997;87(1):34-40.
13. Maissan IM, Dirven PJ, Haitsma IK, Hoeks SE, Gommers D, Stolker RJ. Ultrasonographic measured optic nerve sheath diameter as an accurate and quick monitor for changes in intracranial pressure. J Neurosurg. 2015;123(3):743-747.
14. Galetta S, Byrne SF, Smith JL. Echographic correlation of optic nerve sheath size and cerebrospinal fluid pressure. J Clin Neuroophthalmol. 1989;9(2):79-82.
15. Singleton J, Dagan A, Edlow JA, Hoffmann B. Real-time optic nerve sheath diameter reduction measured with bedside ultrasound after therapeutic lumbar puncture in a patient with idiopathic intracranial hypertension. Am J Emerg Med. 2015;33(6):860.e5-e7.
16. Hansen HC, Lagrèze W, Krueger O, Helmke K. Dependence of the optic nerve sheath diameter on acutely applied subarachnoidal pressure—an experimental ultrasound study. Acta Ophthalmol. 2011;89(6):e528-e532.