Idelalisib more effective in CLL, iNHL than MCL
Results of a phase 1 study suggest the PI3K delta inhibitor idelalisib can produce durable responses in certain patients with relapsed or refractory hematologic malignancies.
The drug elicited a response rate of 72% in patients with chronic lymphocytic leukemia (CLL), 47% in indolent non-Hodgkin lymphoma (iNHL), and 40% in mantle cell lymphoma (MCL).
The median duration of response was 16.2 months among CLL patients, 18.4 months among iNHL patients, and 2.7 months among those with MCL.
“Considering the high number of previous therapies that these patients had received, higher than we sometimes see in comparable studies, the efficacy of idelalisib that we observed was remarkable,” said study author Ian Flinn, MD, PhD, of the Sarah Cannon Research Institute in Nashville, Tennessee.
In 3 papers published in Blood, Dr Flinn and his colleagues presented data from this phase 1 study of idelalisib. After an initial study involving all trial participants, the patients were separated into CLL, iNHL, and MCL disease cohorts.
Solid survival rates in CLL
The researchers evaluated idelalisib in 54 patients with relapsed or refractory CLL. The patients had received a median of 5 prior treatments (range, 2-14).
They had a median age of 63 years (range, 37-82), 80% had bulky lymphadenopathy, 70% had treatment-refractory disease, 91% had unmutated IGHV, and 24% had del17p and/or TP53 mutation.
In the primary study, the patients received idelalisib at doses ranging from 50 mg to 350 mg once or twice daily for 48 weeks. If they continued to derive clinical benefit, patients could continue treatment on an extension study.
Fifty-four percent of patients discontinued treatment during the primary study period. Twenty-eight percent stopped because of disease progression, 9% due to adverse events (AEs), and 6% due to early deaths resulting from AEs.
Grade 3 or higher AEs included pneumonia (20%), neutropenic fever (11%), diarrhea (6%), pyrexia (4%), cough (4%), and fatigue (2%). Common grade 3 or higher lab abnormalities included neutropenia (43%), anemia (11%), and thrombocytopenia (17%).
The overall response rate was 72%, with 39% of patients meeting the criteria for partial response per IWCLL 2008 criteria and 33% meeting the criteria for partial response in the presence of treatment-induced lymphocytosis.
The median duration of response was 16.2 months, the median progression-free survival (PFS) was 15.8 months, and the median overall survival was not reached.
Longer response duration in iNHL
The researchers evaluated idelalisib in 64 patients with iNHL. Lymphoma types included follicular lymphoma (59%), small lymphocytic lymphoma (17%), marginal zone lymphoma (9%), and lymphoplasmacytic lymphoma (14%).
Patients had a median age of 64 years (range, 32-91), 53% had bulky disease, and 58% had refractory disease. They had received a median of 4 prior therapies (range, 1-10).
The patients received idelalisib at doses ranging from 50 mg to 350 mg once or twice daily. After 48 weeks, patients still benefitting from treatment (30%) were enrolled in an extension study.
The remaining 70% of patients discontinued treatment during the primary study. Nineteen percent of these patients discontinued due to AEs.
Grade 3 or higher AEs included pneumonia (17%), diarrhea (9%), peripheral edema (3%), fatigue (3%), rash (3%), pyrexia (3%), nausea (2%), and cough (2%). Grade 3 or higher lab abnormalities included AST elevation (20%), ALT elevation (23%), neutropenia (23%), thrombocytopenia (11%), and anemia (5%).
The overall response rate was 47%, with 1 patient (1.6%) achieving a complete response. The median duration of response was 18.4 months, and the median PFS was 7.6 months.
Short response, survival duration in MCL
The researchers evaluated idelalisib in 40 patients with relapsed or refractory MCL. The median age was 69 years (range, 52-83). Patients had received a median of 4 prior therapies (range, 1-14), and 43% were refractory to their most recent treatment.
Patients received idelalisib at doses ranging from 50 mg to 350 mg once or twice daily for a median of 3.5 months (range, 0.7-30.7). Six patients (15%) continued treatment for more than 48 weeks, although only 1 patient remains on treatment at present.
The 34 patients who discontinued the primary study did so because of progressive disease (60%), AEs (20%), withdrawn consent (3%), or investigator request (3%). Of the 6 patients who entered the extension trial, 4 ultimately discontinued due to progressive disease and 1 due to AEs.
Grade 3 or higher AEs included diarrhea (18%), decreased appetite (15%), pneumonia (10%), nausea (5%), fatigue (3%), and rash (3%). Grade 3 or higher lab abnormalities included ALT/AST elevations (20%), neutropenia (10%), thrombocytopenia (5%), and anemia (3%).
The overall response rate was 40%, with 5% of patients achieving a complete response. The median duration of response was 2.7 months, and the median PFS was 3.7 months.
Despite the modest response and survival durations observed in these patients, the researchers believe the strong initial response to idelalisib suggests the drug could still prove useful in patients with MCL.
“[I]delalisib is unlikely to receive designation as a single-agent therapy in mantle cell lymphoma due to the short duration of response,” said study author Brad S. Kahl, MD, of the University of Wisconsin Carbone Cancer Center in Madison.
“The path forward will likely include administering it in combination with other agents or developing second-generation PI3 kinase inhibitors.”
Titanium dioxide
Titanium dioxide (TiO2) and zinc oxide (ZnO) in large-particle form have long been used in sunscreens to protect the skin by reflecting or physically blocking ultraviolet (UV) radiation. In recent years, TiO2 and ZnO nanoparticles have been incorporated into sunscreens and cosmetics to act as a UV shield. They have been shown to be effective barriers against UV-induced damage, yielding stronger protection against UV insult, with less white residue, than previous generations of physical sunblocks.
However, some data suggest that in nanoparticle form, TiO2 and ZnO absorb UV radiation, leading to photocatalysis and the release of reactive oxygen species (Australas. J. Dermatol. 2011;52:1-6). This column will focus primarily on the safety of TiO2 in nanoparticle form.
While numerous studies examine both TiO2 and ZnO, the primary inorganic sunscreens, the sheer number of separate investigations warrants individual articles, and ZnO was addressed in previous columns. Briefly, though, TiO2 is more photoactive and exhibits a higher refractive index in visible light than ZnO (J. Am. Acad. Dermatol. 1999;40:85-90); therefore, TiO2 appears whiter and is more difficult to incorporate into transparent products.
A 2011 study by Kang et al. showed that TiO2 nanoparticles, but not normal-sized TiO2, act synergistically with UVA to foster rapid production of reactive oxygen species and breakdown of mitochondrial membrane potential, leading to apoptosis; that is, TiO2 nanoparticles are more phototoxic than larger particles (Drug Chem. Toxicol. 2011;34:277-84).
However, also in 2011, Tyner et al. investigated the effects of nanoscale TiO2 on UV attenuation in simple to complex sunscreen products. They found that none of the formulations diminished skin barrier function, and that optimal UV attenuation resulted when TiO2 particles were stabilized with a coating and evenly dispersed. The researchers concluded that nanoscale TiO2 is nontoxic and may impart greater efficacy (Int. J. Cosmet. Sci. 2011;33:234-44).
In vitro and in vivo studies
In 2010, Tiano et al. evaluated five modified TiO2 particles, developed and marketed for sunscreens. They used different in vitro models, including cultured human skin fibroblasts, to determine potential photocatalytic effects after UVA exposure. The investigators found that the kind of modification to and crystal form of the TiO2 nanoparticle influences its ability to augment or reduce DNA damage, increase or decrease intracellular reactive oxygen species, diminish cell viability, and promote other effects of photocatalysis. In particular, they noted that the anatase crystal form of TiO2 retained photocatalytic activity. The authors suggested that while the debate continues over the penetration of nanosized TiO2 into the viable epidermis, their results help elucidate the potential effects of TiO2 particles at the cellular level (Free Radic. Biol. Med. 2010;49:408-15).
A 2010 study by Senzui et al. tested the skin penetration of four types of rutile TiO2 (the most common natural form), two coated and two uncoated, using in vitro intact, stripped, and hair-removed skin from Yucatan micropigs. No TiO2 type penetrated intact or stripped skin. The concentration of titanium in skin was significantly higher when one of the coated forms was applied to hair-removed skin, with titanium penetrating into vacant hair follicles (greater than 1 mm below the skin surface) but not into the dermis or viable epidermis (J. Toxicol. Sci. 2010;35:107-13).
Animal studies
In 2009, the Food and Drug Administration Center for Drug Evaluation and Research worked with the National Center for Toxicological Research, using minipigs and four sunscreen formulations, to determine whether nanoscale TiO2 can penetrate intact skin. Scanning electron microscopy and x-ray diffraction revealed that the TiO2 particles were the same size as those observed in the raw materials, implying that the formulation process influenced neither the size nor the shape of the TiO2 particles (Drug Dev. Ind. Pharm. 2009;35:1180-9).
In 2010, Sadrieh et al. performed a study of the dermal penetration of three types of TiO2 particles: uncoated submicrometer-sized, uncoated nanosized, and dimethicone/methicone copolymer-coated nanosized. The investigators applied each type of particle at 5% by weight in a sunscreen to minipigs and found no significant penetration into intact normal epidermis (Toxicol. Sci. 2010;115(1):156-66).
In 2011, Furukawa et al. studied the postinitiation carcinogenic potential of coated and uncoated TiO2 nanoparticles in a two-stage skin carcinogenesis model using 7-week-old CD1 (ICR) female mice. They found that application of coated and uncoated nanoparticles after initiation and promotion with 7,12-dimethylbenz[a]anthracene and 12-O-tetradecanoylphorbol 13-acetate at doses of up to 20 mg/mouse failed to augment nodule development. The investigators concluded that TiO2 nanoparticles do not exhibit postinitiation potential for mouse skin carcinogenesis (Food Chem. Toxicol. 2011;49(4):744-9).
Human data
Given the persistent concerns about possible side effects of coated TiO2 and ZnO nanoparticles used in physical sun blockers, Filipe et al., in 2009, assessed the localization and potential skin penetration of TiO2 and ZnO nanoparticles dispersed in three sunscreen formulations, under realistic in vivo conditions in normal and altered skin. The investigators examined a test hydrophobic formulation containing coated 20-nm TiO2 nanoparticles and two commercially available sunscreen formulations containing TiO2 alone or in combination with ZnO, with respect to how consumers actually used sunscreens compared with the recommended standard condition for the sun protection factor test. They found that traces of the physical blockers could be detected only at the skin surface and uppermost area of the stratum corneum in normal human skin after a 2-hour exposure. After 48 hours of exposure, layers deeper than the stratum corneum contained no detectable TiO2 or ZnO nanoparticles. While preferential deposition of the nanoparticles in the openings of pilosebaceous follicles was noted, no penetration into viable skin tissue was observed. The investigators concluded that significant penetration of TiO2 or ZnO nanoparticles into keratinocytes is improbable (Skin Pharmacol. Physiol. 2009;22:266-75).
The weight of evidence
Current evidence suggests minimal risks to human health from the use of TiO2 or ZnO nanoparticles at concentrations up to 25% in cosmetic preparations or sunscreens, according to Schilling et al., regardless of coating or crystalline structure. In a safety review of these ingredients, they noted that nanoparticles formulated in topical products occur as aggregates of primary particles 30-150 nm in size, bonded in such a way that they are impervious to the force of product application; thus their structure remains unaffected, and no primary particles are released. The authors also noted that these nanoparticles are equivalent to larger particles in their distribution and persistence and, therefore, in their recognition and elimination from the body (Photochem. Photobiol. Sci. 2010;9:495-509).
But in 2011, Tran and Salmon, in light of findings that nanoparticles may penetrate the stratum corneum under certain conditions, considered the possible photocarcinogenic effects of nanoparticle sunscreens. They noted, though, that most such findings were obtained using animal skin models, not investigations with human skin (Australas. J. Dermatol. 2011;52:1-6). To this point, the weight of evidence appears to show that such TiO2 nanoparticles are safe when applied to intact human skin (Semin. Cutan. Med. Surg. 2011;30:210-13).
In response to the increased scrutiny and concern exhibited by the general public and government agencies regarding the safety of TiO2 and ZnO nanoparticles, Newman et al. reviewed the literature and position statements from 1980 to 2008 to ascertain and describe the use, safety, and regulatory status of such ingredients in sunscreens. They found no evidence that TiO2 or ZnO nanoparticles penetrate significantly deeper than the stratum corneum, but cautioned that additional studies simulating real-world conditions (e.g., sunburned skin and skin under UV exposure) are necessary (J. Am. Acad. Dermatol. 2009;61:685-92).
Conclusion
Titanium dioxide is a well-established, safe, and effective physical sunblock. Nanotechnology has introduced some cause for concern regarding its use in physical sunblocks. In particular, evidence suggesting that photoexcitation of TiO2 nanoparticles leads to the generation of reactive oxygen species that damage DNA, potentially launching a cascade of adverse events, has prompted investigations into the safety of TiO2 in nanoparticle form. However, to date, multiple studies suggest that TiO2 nanoparticles do not penetrate or are highly unlikely to penetrate beyond the stratum corneum.
Dr. Baumann is chief executive officer of the Baumann Cosmetic & Research Institute in Miami Beach. She founded the cosmetic dermatology center at the University of Miami in 1997. Dr. Baumann wrote the textbook "Cosmetic Dermatology: Principles and Practice" (McGraw-Hill, 2002), and a book for consumers, "The Skin Type Solution" (Bantam, 2006). Dr. Baumann has received funding for clinical grants from Allergan, Aveeno, Avon Products, Galderma, Mary Kay, Medicis Pharmaceuticals, Neutrogena, Philosophy, Stiefel, Topix Pharmaceuticals, and Unilever.
Hair washing – Too much or too little?
Many dermatologists continue to battle an overwashing epidemic. From bar soaps to antibacterial washes, we work to educate patients that the extensive lather, the alkaline pH, and the antibacterial components of our washing rituals can strip the natural oils from the skin and leave it dry, cracked, and damaged.
This phenomenon is well reported in the literature, and industry has taken notice by developing more "no-soap" soaps than ever before.
But does the same philosophy apply to hair care practices? Hair washing is more complicated, particularly in skin of color patients.
Overwashing the hair often leads to dry hair, split ends, and the need for compensatory conditioners to replace lost moisture. In African American hair, especially that of patients who use chemical or heat treatments, the lost oil and sebum from overwashing can cause even more damage.
Many skin of color patients wash their hair infrequently to protect it from breakage, and they may use topical oils to smooth and protect the fragile hair shaft.
However, can underwashing the scalp and hair cause problems? Yes, in some cases.
You might see African American patients in your practice who are suffering from scalp folliculitis, itchy scalp, seborrheic dermatitis, or alopecia that can be traced to infrequent hair washing. Infrequent washing and the application of oils do help the hair shaft, but the buildup of oils and sebum on the scalp itself can lead to scalp inflammation, follicular plugging, extensive seborrhea, acneiform eruptions, and folliculitis.
Depending on its degree, the inflammation can cause pruritus and burning of the scalp and can even lead to temporary or permanent hair loss. Although topical and oral antibiotics, topical steroids, and medicated shampoos do help, proper washing also plays an important preventive role.
For skin of color patients with some of the chronic scalp problems mentioned above, decreasing heat and chemical treatments, along with increasing hair washing to two or three times a week, can help prevent scalp dermatitides without compromising hair integrity. In addition, using a sulfate-free shampoo, applying shampoo to the scalp only (without lathering the ends of the hair), or using a dry shampoo between washes can help control oil and product buildup on the scalp itself.
Ultimately, it may take some trial and error to find the right hair washing regimen for skin of color patients. Determining how often to wash the scalp depends on many patient-specific factors including ethnicity, hair type, frequency of chemical and heat treatments, cost, and level of scalp inflammation. Experimenting with new hair care products and possibly a new hairstyle also may be part of a successful treatment plan.
Dr. Talakoub is in private practice at McLean (Va.) Dermatology Center. A graduate of Boston University School of Medicine, Dr. Talakoub did her residency in dermatology at the University of California, San Francisco. She is the author of multiple scholarly articles and a textbook chapter.
Which facts count?
Students who spend a month with me always want a session on topical steroids, that great undiscovered world they have to know but dread to explore. They’ve all seen those tables of steroid potency based on the rabbit-ear bioassay. These run to long columns (or several pages) of small print ordering the steroid universe from the aristocracy of Class 1 ("Supernovacort" 0.015%) down through the midrange ("Mediocricort" 0.026% ointment is Class 2, while Mediocricort 0.026% cream is only Class 3), down to the humble "Trivialicort" 32%, which on a good day is just a measly Class 6. All those multisyllabic names and numbers and classes bewilder and intimidate the poor kids. Even their earnest medical-student memorization skills leave them in despair of mastering all this stuff.
I ask them to ponder a mini-scenario: Your patient was given a topical steroid cream. He says it didn’t work. List all possible explanations.
The next day we discuss their answers. Most students manage to come up with several types of reasons. Maybe the steroid didn’t work because the diagnosis was wrong. (It was a fungus.) Perhaps the condition is inherently unresponsive (like knee psoriasis). Sometimes, the patient didn’t use the cream.
Then we break down that third category. Why would a patient not use the cream? Reasons include:
• The tube was too small (15 g for a full-body rash).
• The steroid did work, but the patient thought it didn’t because the eczema came back. (Eczema comes back.)
• The patient was afraid of steroids. ("I heard they thin your skin.")
I end our session by noting that this third group (the patient didn’t use the cream) is a) intellectually uninteresting; and b) the reason behind most cases where "the steroid didn’t work." By contrast, using the wrong steroid – as defined by the fine-grained distinctions on steroid potency tables – is rarely the difference between success and failure.
I give students a list of four generics, from weak to strong, and advise them not to clutter up their brains with any others. (Since most of them are headed for primary care, those four will be plenty, freeing brain space for board memorization.)
Ever since medical school, which is a rather long time ago by now, I’ve wondered why some things are taught and others left out. More particularly, why are some kinds of facts thought to be important (the ones you can quantify or put numbers next to, for instance) and others are too squishy to mention (such as knowing what the patient thinks about the treatment)?
After all, knowing what a patient thinks about what a treatment does – how it might harm them, and what a treatment "working" really means – has a lot to do with whether the treatment is used properly, or used at all. Why isn’t that important? Because you can’t put it into a table laced with decimal points and percentages?
The tendency to reduce everything to what you can measure has been around for a long time but seems to be getting worse. I read the other day about something called the Human Connectome Project, an effort to produce data to help answer the central question, "How do differences between you and me and how our brains are wired up, relate to differences in our behaviors, our thoughts, our emotions, our feelings, and our experiences?"
I am not the first to wonder whether functional MRIs, with those gaily colored snapshots of the brain in action, really tell us more about how the brain works than does talking with the people who own those brains. The assumption seems to be that pictures of brain circuits are "real," whereas mere talk is mush, not the stuff of science, whose fruits we physicians are supposed to apply. I am wired, therefore I am.
Suppose a patient thinks that topical steroids thin the skin? Suppose she expects your eczema cream to make the rash go away once and for all, and when it comes back, she takes that as proof that it "didn’t work" and stops using it because it’s clearly worthless? Would those opinions show up on a color photo of her amygdala?
Can my patients be the only ones whose opinions about health and disease matter more, and more often, than do the tabulated measures of clinical efficacy?
You know, the real stuff you have to memorize and document, to get in and to get by.
Dr. Rockoff practices dermatology in Brookline, Mass. He is on the clinical faculty at Tufts University School of Medicine, Boston, and has taught senior medical students and other trainees for 30 years. Dr. Rockoff has contributed to the Under My Skin column in Skin & Allergy News since 1997.
Rate of protein synthesis affects HSC function, study suggests
HSCs in the bone marrow
Hematopoietic stem cells (HSCs) require a highly regulated rate of protein synthesis to function properly, according to research published in Nature.
Experiments showed that a ribosomal mutation decreases protein synthesis in HSCs, and deletion of a tumor suppressor gene increases protein synthesis.
But both changes result in impaired HSC function.
In mouse models, the mutation counteracted the effects of the deletion, restoring normal HSC function and delaying leukemogenesis.
“We unveiled new areas of cellular biology that no one has seen before,” said study author Sean Morrison, PhD, of the University of Texas Southwestern Medical Center in Dallas.
“This finding not only tells us something new about stem cell regulation but opens up the ability to study differences in protein synthesis between many kinds of cells in the body. We believe there is an undiscovered world of biology that allows different kinds of cells to synthesize protein at different rates and in different ways, and that those differences are important for cellular survival.”
In a previous study, researchers discovered that, by modifying the antibiotic puromycin, they could measure protein synthesis in rare cells in vivo.
Dr Morrison and his colleagues realized they could adapt this reagent to measure protein synthesis in HSCs and other cells in the hematopoietic system.
Their analyses showed that different types of blood cells produced vastly different amounts of protein per hour. And HSCs, in particular, synthesized much less protein than other hematopoietic progenitors.
“This result suggests that blood-forming stem cells require a lower rate of protein synthesis as compared to other blood-forming cells,” Dr Morrison said.
He and his colleagues then generated mice with a mutation in a component of the ribosome (Rpl24Bst/+ mice). HSCs in these mice had a 30% lower rate of protein production than controls.
The researchers observed the opposite effect when they deleted the tumor suppressor gene Pten in mouse HSCs. HSCs in these mice showed a roughly 30% increase in protein production relative to controls.
However, as in the Rpl24Bst/+ mice, HSC function was noticeably impaired in these animals.
Together, these observations suggest that HSCs require a highly regulated rate of protein synthesis, such that increases or decreases in that rate impair HSC function.
“Amazingly, when the ribosomal mutant mice and the Pten mutant mice were bred together, stem cell function returned to normal, and we greatly delayed, and in some instances entirely blocked, the development of leukemia,” Dr Morrison said.
“All of this happened because protein production in stem cells was returned to normal. It was as if two wrongs made a right.”
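As a back-of-envelope illustration of why the two opposing changes can roughly cancel, suppose (purely as a toy assumption; the paper reports measurements rather than this model) that the ~30% decrease from the ribosomal mutation and the ~30% increase from Pten deletion combine multiplicatively:

```python
# Toy illustration only; the factors come from the reported ~30% shifts,
# and the multiplicative combination is an assumption, not the study's model.
RPL24_FACTOR = 0.7   # Rpl24Bst/+ HSCs: ~30% lower protein synthesis
PTEN_FACTOR = 1.3    # Pten-deleted HSCs: ~30% higher protein synthesis

rates = {
    "wild-type": 1.0,
    "Rpl24Bst/+": RPL24_FACTOR,
    "Pten-deleted": PTEN_FACTOR,
    "compound mutant": RPL24_FACTOR * PTEN_FACTOR,  # ~0.91, close to normal
}

for genotype, rate in rates.items():
    print(f"{genotype}: {rate:.2f}x wild-type protein synthesis")
```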
Score can predict survival in GVHD patients
Credit: CDC
A scoring system that rates symptoms of pulmonary dysfunction can help predict survival in patients with chronic graft-vs-host disease (GVHD), new research suggests.
The National Institutes of Health (NIH) devised a scoring system whereby patients can rate their breathing difficulties on a scale of 0 to 3.
In a study of nearly 500 patients with chronic GVHD, this system proved more effective in predicting survival than other measures of pulmonary dysfunction.
A patient’s score was significantly associated with both overall survival (OS) and non-relapse mortality (NRM).
Stephanie Lee, MD, MPH, of the Fred Hutchinson Cancer Research Center in Seattle, and her colleagues reported these findings in Biology of Blood and Marrow Transplantation.
The researchers evaluated the utility of pulmonary function tests (PFTs) and symptom assessment in predicting the outcomes of 496 patients with chronic GVHD. The team looked at results of PFTs and the NIH lung scoring system, which has 2 parts.
One part is the NIH symptom-based lung score, which assigns the following numbers to breathing difficulties: 0 for no symptoms, 1 for shortness of breath climbing stairs, 2 for shortness of breath on flat ground, and 3 for shortness of breath at rest or requiring oxygen.
The second part of the system is the NIH PFT-based lung score, a lung function score calculated from a patient’s forced expiratory volume in 1 second (FEV1) and diffusing capacity of the lung for carbon monoxide (DLCO), corrected for hemoglobin but not alveolar volume.
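To make the scoring concrete, here is a minimal sketch of the symptom-based component (and the worsening measure used later in the analysis); the article does not spell out the point assignments for the PFT-based component, so it is omitted. Names are illustrative, not from the study.

```python
def nih_symptom_lung_score(dyspnea: str) -> int:
    """Map reported breathing difficulty to the 0-3 NIH symptom-based lung score."""
    scores = {
        "none": 0,              # no symptoms
        "stairs": 1,            # shortness of breath climbing stairs
        "flat_ground": 2,       # shortness of breath on flat ground
        "rest_or_oxygen": 3,    # shortness of breath at rest, or requiring oxygen
    }
    return scores[dyspnea]


def score_worsened(first_score: int, current_score: int) -> bool:
    """Worsening by 1 point or more compared with the first recorded score."""
    return current_score - first_score >= 1


# Example: symptoms progress from climbing stairs to shortness of breath at rest.
assert nih_symptom_lung_score("stairs") == 1
assert score_worsened(nih_symptom_lung_score("stairs"),
                      nih_symptom_lung_score("rest_or_oxygen"))
```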
The researchers focused on a set of hypothesized associations between pulmonary measures and NRM, OS, patient-reported outcomes, and functional status.
The 7 measures of interest were:
- Obstructive lung disease based on PFTs
- Restrictive lung disease based on PFTs
- NIH PFT-based lung score
- NIH symptom-based lung score
- Clinical diagnosis of bronchiolitis obliterans syndrome
- Decrease in FEV1 or forced vital capacity (FVC) compared to enrollment
- Worsening of the NIH symptom-based lung score by 1 point or more compared with the first recorded score
The researchers found that only the NIH symptom-based lung score was significantly associated with NRM (P=0.02), OS (P=0.02), patient-reported symptoms (P<0.001), and functional status (P<0.001).
In addition, worsening of the NIH symptom-based lung score over time was associated with higher NRM and lower OS.
None of the other measures studied were significantly associated with OS or NRM, although some were associated with patient-reported symptoms.
“The [NIH symptom-based lung score] turned out to be the most predictive,” Dr Lee said. “It’s just a question [and], therefore, easy to do and cost-effective. No special equipment is involved.”
This suggests there’s a simple way for physicians to detect pulmonary dysfunction earlier, she added. A patient’s doctor could follow up on a poor score with tests to determine the cause of the problem and identify the appropriate treatment.
Mutant B-cell progenitor causes leukemia, group finds
Credit: Aaron Logan
Researchers have identified a cell that appears to be responsible for a particularly aggressive type of leukemia in mice.
The cell is a renin-expressing B-cell progenitor found in the bone marrow.
Renin cells, which are also present in the kidney, have traditionally been associated with the control of blood pressure and fluid balance in the body.
But investigators discovered renin progenitors in the bone marrow of mice with aggressive B-cell leukemia.
And they found evidence to suggest the leukemia originated from a mutation in these renin progenitors—specifically, deletion of RBP-J.
“We would now like to see if this is a relevant model of human disease,” said study author Brian C. Belyea, MD, of the University of Virginia (UVA) School of Medicine in Charlottesville.
“Our long-term goal is to identify cells at increased risk for leukemia in humans and, ultimately, develop strategies to monitor and eliminate these cells.”
Dr Belyea and his colleagues described their initial steps toward this goal in Nature Communications.
In a previous study, the researchers were investigating the effects of RBP-J deletion in mice. And they were surprised to find that, as the mice aged beyond 6 months, they developed signs of an aggressive form of precursor B-lymphoblastic leukemia.
So with the current study, the team wanted to characterize this leukemia. They set out to identify which cells in the bone marrow are capable of producing renin under normal circumstances and whether those cells might be the origin of the leukemia.
The investigators found that renin is expressed by a subset of B-cell progenitors in the mouse bone marrow, and these cells need RBP-J to differentiate.
Deleting RBP-J restrains lymphocyte differentiation, and the mutant cells undergo neoplastic transformation. The mice develop a B-cell leukemia characterized by multi-organ infiltration and resulting in early death.
Experiments showed the leukemia to be particularly hardy. The researchers placed the leukemic cells in culture and found they continued to survive, and even thrive, without the growth factors that leukemia cells typically require.
“People have been trying to grow leukemia cells in culture, even from patients, and they require other factors to survive, but not these,” said study author Maria Luisa S. Sequeira-Lopez, MD, of UVA.
“These are extremely aggressive in that they have developed a system to grow and survive no matter what,” added author Ariel Gomez, MD, also of UVA. “They have immortalized themselves.”
The researchers now want to determine if these findings will translate to humans. They believe it’s possible, as they were able to identify RBP-J mutations in 10 patients (of 44 screened) with hematologic malignancies. In fact, 5 of the patients had the same frameshift deletion.
Why genetic screening isn’t preventing SCD
A sickled red blood cell and a normal one
Credit: Betty Pace
There may be a simple reason why genetic screening has failed to fulfill the promise of preventing sickle cell disease (SCD).
According to an article published in JAMA, it’s a lack of communication.
We’ve long had the technical capacity to screen individuals for the sickle cell trait (SCT). Yet few individuals of childbearing age who were born in the US actually know their SCT status.
So they aren’t aware that they might pass SCT or SCD down to their children. (SCD is recessive: when both parents carry SCT, each child has a 1 in 4 chance of being born with SCD.)
And this may boil down to a lack of communication among healthcare professionals, patients, and family members.
“[P]arents are routinely notified by NBS [newborn screening] programs if their child has SCD, but only 37% are notified if their child has SCT,” said author Barry Zuckerman, MD, of Boston Medical Center in Massachusetts.
Even if parents do receive SCT screening results, we don’t know whether they understand the implications or share them with their child. And counseling or referrals to genetic counselors are not provided in a standard fashion.
Furthermore, although NBS programs notify primary care physicians of screening results at the time of birth, results may not be readily available during routine clinic visits, and patients may not have the same physician throughout their childhood.
The lack of knowledge regarding SCT status represents a missed opportunity to provide appropriate health and prenatal counseling and testing, according to Dr Zuckerman and his colleagues.
They said that timely knowledge of genetic vulnerability and genetic counseling are necessary for informed decision-making with regard to reproduction. It is important to increase the number of adolescents and young adults who know their SCT status to decrease the number of individuals inheriting SCD.
To increase awareness of SCT status and facilitate informed decision-making about reproductive options, we must do 2 things, according to the authors.
First, the results of positive screens for SCT must be communicated to primary care clinicians, recorded in the patient’s medical record as part of a problem list, and shared with parents and the individual.
And second, we must provide effective communication and information through genetic counseling on reproductive options for those with SCT.
The authors also stressed that schools and community organizations have potentially important roles in communicating the importance of SCT status to adolescents and young adults. And by working together, the healthcare system, schools, and community organizations may be able to improve SCT knowledge and awareness.
Warmer temperatures push malaria to higher elevations
Credit: Asnakew Yeshiwondim
Researchers say they have the first hard evidence that malaria creeps to higher elevations during warmer years and retreats to lower altitudes when temperatures cool.
The evidence comes from an analysis of highland regions in Ethiopia and Colombia.
It suggests that future climate warming will prompt a rise in malaria incidence in densely populated regions of Africa and South America, unless efforts to monitor and control malaria are increased.
“We saw an upward expansion of malaria cases to higher altitudes in warmer years, which is a clear signal of a response by highland malaria to changes in climate,” said study author Mercedes Pascual, PhD, of the University of Michigan in Ann Arbor.
“This is indisputable evidence of a climate effect. The main implication is that, with warmer temperatures, we expect to see a higher number of people exposed to the risk of malaria in tropical highland areas like these.”
Dr Pascual and her colleagues reported these findings in Science.
It was more than 20 years ago that malaria was first identified as a disease that might be especially sensitive to climate change, because both the Plasmodium parasites that cause it and the Anopheles mosquitoes that spread it thrive as temperatures warm.
Some early studies concluded that climate change would lead to an increase in malaria cases as the disease expanded its range into higher elevations. But some of the assumptions behind those predictions were later criticized.
More recently, researchers have argued that improved socioeconomic conditions and more aggressive mosquito-control efforts will likely exert a far greater influence than climatic factors over the extent and intensity of malaria worldwide.
What’s been missing in this debate is an analysis of regional records with sufficient resolution to determine how the spatial distribution of malaria cases has changed in response to year-to-year temperature variations, especially in densely populated highlands that have historically provided havens from the disease.
So Dr Pascual and her colleagues looked for evidence of a changing spatial distribution of malaria with varying temperature in the highlands of Ethiopia and Colombia. They examined malaria case records from the Antioquia region of western Colombia from 1990 to 2005 and from the Debre Zeit area of central Ethiopia from 1993 to 2005.
By focusing solely on the altitudinal response to year-to-year temperature changes, the researchers were able to exclude other variables that can influence malaria case numbers, such as mosquito-control programs, resistance to antimalarial drugs, and fluctuations in rainfall amounts.
The team found that the median altitude of malaria cases shifted to higher elevations in warmer years and back to lower elevations in cooler years. This relatively simple analysis yielded a clear signal that can only be explained by temperature changes, the group said.
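For readers who want a feel for the method, here is a minimal sketch of this kind of analysis, assuming a hypothetical case-level table (year and altitude of each case) and a table of mean annual temperatures; the column names and numbers are illustrative, not the study’s data:

```python
import pandas as pd

# Hypothetical inputs; values are illustrative only.
cases = pd.DataFrame({
    "year":       [1994, 1994, 1995, 1995, 1995, 1996],
    "altitude_m": [1700, 1850, 1900, 2050, 2100, 1750],
})
temps = pd.DataFrame({
    "year":      [1994, 1995, 1996],
    "mean_temp": [18.2, 19.1, 17.8],
})

# Median altitude of cases in each year, merged with that year's temperature.
median_alt = (cases.groupby("year", as_index=False)["altitude_m"]
                   .median()
                   .rename(columns={"altitude_m": "median_altitude_m"}))
merged = temps.merge(median_alt, on="year")

print(merged)
print("temp vs median case altitude correlation:",
      round(merged["mean_temp"].corr(merged["median_altitude_m"]), 2))
```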
“Our latest research suggests that, with progressive global warming, malaria will creep up the mountains and spread to new high-altitude areas,” said study author Menno Bouma, MD, of the London School of Hygiene & Tropical Medicine in the UK.
“And because these populations lack protective immunity, they will be particularly vulnerable to severe morbidity and mortality.”
In addition, the study results suggest that climate change can explain malaria trends in both the highland regions in recent decades.
In Ethiopia, home to the Debre Zeit study area, about 37 million people (roughly 43% of the country’s population) live in rural areas at elevations between 5280 and 7920 feet, where the risk of malaria exposure would rise under a warming climate.
In a previous study, researchers estimated that a 1°C increase in temperature could result in an additional 3 million malaria cases annually in Ethiopia in the under-15 population, unless control efforts are strengthened.
“Our findings here underscore the size of the problem,” Dr Pascual said, “and emphasize the need for sustained intervention efforts in these regions, especially in Africa.”
Credit: Asnakew Yeshiwondim
Researchers say they have the first hard evidence that malaria creeps to higher elevations during warmer years and retreats to lower altitudes when temperatures cool.
The evidence comes from an analysis of highland regions in Ethiopia and Colombia.
It suggests that future climate warming will prompt a rise in malaria incidence in densely populated regions of Africa and
South America, unless efforts to monitor and control malaria are increased.
“We saw an upward expansion of malaria cases to higher altitudes in warmer years, which is a clear signal of a response by highland malaria to changes in climate,” said study author Mercedes Pascual, PhD, of the University of Michigan in Ann Arbor.
“This is indisputable evidence of a climate effect. The main implication is that, with warmer temperatures, we expect to see a higher number of people exposed to the risk of malaria in tropical highland areas like these.”
Dr Pascual and her colleagues reported these findings in Science.
It was more than 20 years ago that malaria was first identified as a disease that might be especially sensitive to climate change, because both the Plasmodium parasites that cause it and the Anopheles mosquitoes that spread it thrive as temperatures warm.
Some early studies concluded that climate change would lead to an increase in malaria cases as the disease expanded its range into higher elevations. But some of the assumptions behind those predictions were later criticized.
More recently, researchers have argued that improved socioeconomic conditions and more aggressive mosquito-control efforts will likely exert a far greater influence than climatic factors over the extent and intensity of malaria worldwide.
What’s been missing in this debate is an analysis of regional records with sufficient resolution to determine how the spatial distribution of malaria cases has changed in response to year-to-year temperature variations, especially in densely populated highlands that have historically provided havens from the disease.
So Dr Pascual and her colleagues looked for evidence of a changing spatial distribution of malaria with varying temperature in the highlands of Ethiopia and Colombia. They examined malaria case records from the Antioquia region of western Colombia from 1990 to 2005 and from the Debre Zeit area of central Ethiopia from 1993 to 2005.
By focusing solely on the altitudinal response to year-to-year temperature changes, the researchers were able to exclude other variables that can influence malaria case numbers, such as mosquito-control programs, resistance to antimalarial drugs, and fluctuations in rainfall amounts.
The team found that the median altitude of malaria cases shifted to higher elevations in warmer years and back to lower elevations in cooler years. This relatively simple analysis yielded a clear signal that can only be explained by temperature changes, the group said.
“Our latest research suggests that, with progressive global warming, malaria will creep up the mountains and spread to new high-altitude areas,” said study author Menno Bouma, MD, of the London School of Hygiene & Tropical Medicine in the UK.
“And because these populations lack protective immunity, they will be particularly vulnerable to severe morbidity and mortality.”
In addition, the study results suggest that climate change can explain malaria trends in both the highland regions in recent decades.
In the Debre Zeit region of Ethiopia, at an elevation range of between 5280 feet and 7920 feet, about 37 million people (roughly 43% of the country’s population) live in rural areas at risk of higher malaria exposure under a warming climate.
In a previous study, researchers estimated that a 1-degree temperature increase could result in an additional 3 million malaria cases annually in Ethiopia in the under-15 population, unless control efforts are strengthened.
“Our findings here underscore the size of the problem,” Dr Pascual said, “and emphasize the need for sustained intervention efforts in these regions, especially in Africa.”
Credit: Asnakew Yeshiwondim
Researchers say they have the first hard evidence that malaria creeps to higher elevations during warmer years and retreats to lower altitudes when temperatures cool.
The evidence comes from an analysis of highland regions in Ethiopia and Colombia.
It suggests that future climate warming will prompt a rise in malaria incidence in densely populated regions of Africa and
South America, unless efforts to monitor and control malaria are increased.
“We saw an upward expansion of malaria cases to higher altitudes in warmer years, which is a clear signal of a response by highland malaria to changes in climate,” said study author Mercedes Pascual, PhD, of the University of Michigan in Ann Arbor.
“This is indisputable evidence of a climate effect. The main implication is that, with warmer temperatures, we expect to see a higher number of people exposed to the risk of malaria in tropical highland areas like these.”
Dr Pascual and her colleagues reported these findings in Science.
It was more than 20 years ago that malaria was first identified as a disease that might be especially sensitive to climate change, because both the Plasmodium parasites that cause it and the Anopheles mosquitoes that spread it thrive as temperatures warm.
Some early studies concluded that climate change would lead to an increase in malaria cases as the disease expanded its range into higher elevations. But some of the assumptions behind those predictions were later criticized.
More recently, researchers have argued that improved socioeconomic conditions and more aggressive mosquito-control efforts will likely exert a far greater influence than climatic factors over the extent and intensity of malaria worldwide.
What’s been missing in this debate is an analysis of regional records with sufficient resolution to determine how the spatial distribution of malaria cases has changed in response to year-to-year temperature variations, especially in densely populated highlands that have historically provided havens from the disease.
So Dr Pascual and her colleagues looked for evidence of a changing spatial distribution of malaria with varying temperature in the highlands of Ethiopia and Colombia. They examined malaria case records from the Antioquia region of western Colombia from 1990 to 2005 and from the Debre Zeit area of central Ethiopia from 1993 to 2005.
By focusing solely on the altitudinal response to year-to-year temperature changes, the researchers were able to exclude other variables that can influence malaria case numbers, such as mosquito-control programs, resistance to antimalarial drugs, and fluctuations in rainfall amounts.
The team found that the median altitude of malaria cases shifted to higher elevations in warmer years and back to lower elevations in cooler years. This relatively simple analysis yielded a clear signal that can only be explained by temperature changes, the group said.
“Our latest research suggests that, with progressive global warming, malaria will creep up the mountains and spread to new high-altitude areas,” said study author Menno Bouma, MD, of the London School of Hygiene & Tropical Medicine in the UK.
“And because these populations lack protective immunity, they will be particularly vulnerable to severe morbidity and mortality.”
In addition, the study results suggest that climate change can explain malaria trends in both the highland regions in recent decades.
In the Debre Zeit region of Ethiopia, at an elevation range of between 5280 feet and 7920 feet, about 37 million people (roughly 43% of the country’s population) live in rural areas at risk of higher malaria exposure under a warming climate.
In a previous study, researchers estimated that a 1°C temperature increase could result in an additional 3 million malaria cases annually in Ethiopia in the under-15 population, unless control efforts are strengthened.
“Our findings here underscore the size of the problem,” Dr Pascual said, “and emphasize the need for sustained intervention efforts in these regions, especially in Africa.”
Pathway may drive treatment resistance in T-ALL
Experiments in zebrafish have revealed a mechanism that may drive relapse in human T-cell acute lymphoblastic leukemia (T-ALL).
Researchers identified a subset of T-ALL cells that spontaneously acquired activation of the Akt pathway.
This increased the frequency of leukemia-propagating cells (LPCs) and mediated resistance to treatment with dexamethasone. However, adding an Akt inhibitor to treatment overcame this resistance.
“The Akt pathway appears to be a major driver of treatment resistance,” said study author David Langenau, PhD, of the Harvard Stem Cell Institute in Boston.
“We also show that this same pathway increases overall growth of leukemic cells and increases the fraction of cells capable of driving relapse.”
Dr Langenau and his colleagues described these findings in Cancer Cell.
Previous research had shown that, if LPCs are retained following treatment, they will initiate disease relapse. And LPC frequency can increase over time. However, it was not clear if this was the result of continued clonal evolution or if a clone with high LPC frequency out-competed other cells.
So Dr Langenau and his colleagues used zebrafish models to study T-ALL clones. The team observed functional variation within individual clones and identified clones whose growth rate and leukemia-propagating potential increased over time.
A subset of these clones acquired Akt pathway activation, which increased the number of LPCs by activating mTORC1. The cells also exhibited an elevated growth rate, which may have resulted from stabilizing the Myc protein.
Furthermore, the LPCs proved resistant to treatment with dexamethasone. But the researchers were able to reverse this resistance by combining dexamethasone with the Akt inhibitor MK2206. This approach proved effective both in zebrafish models and in human T-ALL cells.
“Our work will likely help in identifying patients that are prone to relapse and would benefit from co-treatment with inhibitors of the Akt pathway and typical front-line cancer therapy,” said Jessica Blackburn, PhD, a member of Dr Langenau’s lab.
She and her colleagues are now hoping to identify other mutations that lead to relapse, thereby pinpointing potential drug targets for patients with aggressive leukemia.