Researchers at the Massachusetts Institute of Technology (MIT) have developed a computer system that can automatically screen young children for speech and language disorders and potentially provide specific diagnoses.
According to the study, early intervention for children with speech and language disorders can make a difference in their later academic and social success.
To build the new system, the researchers used machine learning, in which a computer looks for patterns in large sets of training data in order to diagnose speech and language disorders.
The system analyses audio recordings of children taking a standardised storytelling test, in which they are presented with a series of images and an accompanying narrative, and are then asked to retell the story in their own words.
"The really exciting idea here is to be able to do screening in a fully automated way using very simplistic tools. You could imagine the storytelling task being totally done with a tablet or a phone. I think this opens up the possibility of low-cost screening for large numbers of children," said John Guttag, former Professor at the MIT.
The researchers evaluated the system's performance using a standard measure called the 'area under the curve', which describes the trade-off between the system's ability to correctly identify people who have a particular disorder and the rate at which it produces false positives.
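By way of illustration, the short Python sketch below shows how such an area-under-the-curve figure is typically computed with scikit-learn; the labels and scores are invented for the example and do not come from the study.

```python
# Illustrative only: computing the area under the ROC curve for a screening model.
# The labels and scores below are made up for this example.
from sklearn.metrics import roc_auc_score

# 1 = child has the disorder, 0 = typically developing (hypothetical labels)
true_labels = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

# Probability-like scores produced by a hypothetical screening classifier
predicted_scores = [0.92, 0.30, 0.71, 0.65, 0.45, 0.12, 0.88, 0.52, 0.08, 0.60]

# An AUC near 1.0 means the system ranks affected children above unaffected ones;
# 0.5 is no better than chance.
auc = roc_auc_score(true_labels, predicted_scores)
print(f"Area under the curve: {auc:.2f}")
```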
"Assessing children's speech is particularly challenging because of high levels of variation even among typically developing children. You get five clinicians in the room and you might get five different answers," Guttag added.
Unlike speech impediments, speech and language disorders both have neurological bases. But, the investigators explain, they affect different neural pathways: speech disorders affect the motor pathways, while language disorders affect the cognitive and linguistic pathways.
The researchers had hypothesised that pauses in children's speech, as they struggled to either find a word or string together the motor controls required to produce it, were a source of useful diagnostic data.
They identified a set of 13 acoustic features of children's speech that their machine-learning system could search, seeking patterns that correlated with particular diagnoses. These were things like the number of short and long pauses, the average length of the pauses, the variability of their length, and similar statistics on uninterrupted utterances.
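To make the pause-based features concrete, here is a minimal Python sketch of the kind of statistics described above, computed from a list of pause durations. The input format and the one-second cut-off between 'short' and 'long' pauses are assumptions made for illustration, not values taken from the study.

```python
# Sketch of pause-based feature extraction. The 1.0-second threshold separating
# short from long pauses is an assumed value used only for this example.
import statistics

def pause_features(pause_durations, long_pause_threshold=1.0):
    """Summarise the pauses in one child's retelling as simple statistics."""
    short_pauses = [p for p in pause_durations if p < long_pause_threshold]
    long_pauses = [p for p in pause_durations if p >= long_pause_threshold]
    return {
        "num_short_pauses": len(short_pauses),
        "num_long_pauses": len(long_pauses),
        "mean_pause_length": statistics.mean(pause_durations) if pause_durations else 0.0,
        "pause_length_stdev": statistics.stdev(pause_durations) if len(pause_durations) > 1 else 0.0,
    }

# Example: pause durations (in seconds) measured from one hypothetical recording
print(pause_features([0.4, 1.8, 0.6, 2.3, 0.5]))
```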
The machine-learning system was trained on three different tasks: identifying any impairment, whether of speech or language; identifying language impairments specifically; and identifying speech impairments specifically.
Once the diagnosis is complete, each child whose storytelling performance was recorded in the data set is classified as 'typically developing', as having a language impairment, or as having a speech impairment.
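One way to picture the three tasks is as three separate binary classifiers trained on the same acoustic features, as in the Python sketch below. The choice of logistic regression and the randomly generated feature matrix are assumptions made purely for illustration; they are not the models or data used in the study.

```python
# Sketch of the three screening tasks as separate binary classifiers.
# Logistic regression and the toy data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((60, 13))                      # 13 acoustic features per child (toy data)
labels = rng.choice(["typical", "language", "speech"], size=60)

tasks = {
    "any_impairment": np.isin(labels, ["language", "speech"]),
    "language_impairment": labels == "language",
    "speech_impairment": labels == "speech",
}

classifiers = {}
for task_name, y in tasks.items():
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    classifiers[task_name] = clf
    print(task_name, "training accuracy:", clf.score(X, y))
```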