SAN FRANCISCO – The use of three different screening instruments to gauge behavioral development in children up to 5 years of age has yielded results that vary within a single practice and between different practices. This heterogeneity complicates the accurate and early identification of developmental disorders in children in the primary care setting.
“The burden of diagnostic services that go along with developmental screening depends on the number of positive screens and the referral completion rate. These rates may vary markedly across practices that from the outside seem relatively homogeneous. This differential burden may help explain the variation between practices that has been observed,” said Radley Sheldrick, PhD, of Boston University School of Public Health.
The American Academy of Pediatrics has recommended the use of developmental screening instruments that, in prior studies, have demonstrated a sensitivity and specificity of at least 70% each. Children who score positive can receive further services. The aim is laudable, Dr. Sheldrick said, but little is known about how different screens compare with one another in the results obtained, or about the consistency of their performance in different practice settings.
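To make the AAP's 70% benchmark concrete, here is a minimal sketch of how sensitivity and specificity are computed from a screening table. The counts below are hypothetical and are not taken from the study; they simply illustrate a screen that would meet the bar.

```python
# Illustration of the AAP's 70% benchmark for developmental screens,
# using made-up counts (not data from the SESAW study).

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of children with a disorder whom the screen flags positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of children without a disorder whom the screen clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: 80 of 100 affected children screen positive,
# and 750 of 1,000 unaffected children screen negative.
sens = sensitivity(true_pos=80, false_neg=20)    # 0.80 -> meets the 70% bar
spec = specificity(true_neg=750, false_pos=250)  # 0.75 -> meets the 70% bar
print(sens, spec)
```

As the quoted remarks later in the article note, these figures are estimated in one population and may not carry over to another.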
A few years ago, Dr. Sheldrick and his colleagues at Tufts Medical Center, Boston, initiated the Screen Early, Screen Accurately for Child Well-Being (SESAW) head-to-head comparison of the effectiveness of three sets of developmental behavioral screening instruments used in the pediatric primary care setting: the Ages and Stages Questionnaire, 2nd edition (ASQ-2), Parent’s Evaluation of Developmental Status (PEDS), and the Survey of Well-Being of Young Children (SWYC).
The ASQ-2 and PEDS instruments have been in use for some time. Differences in their sensitivity and specificity for detecting developmental concerns have been noted, although both can be used at the discretion of the physician. SWYC is a more recent instrument, developed at Tufts Medical Center, that was designed to be easy to read and quickly completed.
In the study, 1,000 parents of children aged 9 months to 5.5 years were enrolled at six pediatric practices in Massachusetts. About 50% of the children were boys, 10% were Hispanic, and 10% were African American. About one-quarter of the parents were receiving some form of public assistance. The parents completed the three screens. Children scoring positive on any screen were assessed further.
The researchers were especially interested in the agreement between the three screens and the variance across the six practices in the performance of the screens and the proportion of children who tested positive and actually received referral care.
Overall, about 44% of the children scored positive on at least one screen. Of these, 72% were assessed more comprehensively. A closer look at those who were assessed revealed agreement between all three screens in only 16% of the children.
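A quick back-of-the-envelope check reconciles these rounded percentages with the cohort of 1,000: about 440 children screened positive on at least one instrument, roughly 317 of those were assessed further, and all three screens agreed in only about 51 of the assessed children.

```python
# Back-of-the-envelope reconciliation of the study's rounded percentages.
enrolled = 1000
positive_any = round(enrolled * 0.44)       # ~44% positive on at least one screen
assessed = round(positive_any * 0.72)       # ~72% of positives assessed further
agree_all_three = round(assessed * 0.16)    # all three screens agreed in ~16%
print(positive_any, assessed, agree_all_three)  # roughly 440, 317, 51
```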
The performance of the three screens was not consistent from practice to practice. Variation was evident both for each screen across the different practices and between the three screens within individual practices. Within individual practices, the differences between the screens did not reach statistical significance. Between practices, however, considerable differences were noted; at the extreme, results in one practice were 70% higher than in another.
Referral completion rates also varied between practices, although the differences were not statistically significant. Still, in the extreme case, the completion rate in one practice was 30% higher than in another.
“As I’ve gotten further into this research, I’ve become struck by the number of things we don’t know about developmental screens [compared to] what we do know. Whether, for example, the sensitivity and specificity of a screen in one population carries over to other populations is an assumption we have made, but which we don’t really know,” said Dr. Sheldrick.
Another unknown is whether a developmental disorder identified by a screen at one age can be identified at a later age in someone who has not received specialized care.
Finally, the issue of false positive results is vexing. Even when a false positive is suspected, doing nothing sends the wrong message.
“What to do when there is a problem between a clinical result and a screening result is one of the most important clinical questions we have right now. Clinicians have to make up their minds on this issue every day, and there is not a lot of research on it. The results need to be evaluated while recognizing that there are still some uncertainties with screening results, and recognizing other forms of information, such as parent reporting and observations of the child, that can be informative,” explained Dr. Sheldrick.
Tufts Medical Center sponsored the study, which was funded by the National Institutes of Health. Dr. Sheldrick reported having no relevant financial disclosures.
AT PAS 17
Key clinical point: There were significant differences in the results of different developmental screening instruments within and between practices in a comparative study.
Major finding: The PEDS, ASQ-2, and SWYC developmental screens agreed in only 16% of the 317 assessed children aged 9 months to 5.5 years, with significant differences in the results of each developmental screen between practices.
Data source: The Screen Early, Screen Accurately for Child Well-Being (SESAW) head-to-head comparative effectiveness trial of three sets of developmental behavioral screening instruments used in pediatric primary care.
Disclosures: Tufts Medical Center sponsored the study, which was funded by the National Institutes of Health. Dr. Sheldrick reported having no relevant financial disclosures.