Russian scientists to give AI systems human hearing

IANS Moscow
Last Updated: Dec 20 2018 | 5:50 PM IST

Russian scientists have come closer to creating a digital system that can process speech in real-life sound environments, for example when several people talk at the same time during a conversation.

A team from the Peter the Great St. Petersburg Polytechnic University (SPbPU) simulated the sensory coding of sounds by modelling the mammalian auditory periphery.

In the study, the team developed methods for acoustic signal recognition based on peripheral coding.

They partially reproduced the processes the nervous system performs while handling information and integrated them into a decision-making module, which determines the type of the incoming signal.

"The main goal is to give the machine human-like hearing to achieve the corresponding level of machine perception of acoustic signals in the real-life environment," said lead author Anton Yakovenko from SPbPU.

According to Yakovenko, the source dataset consists of examples of responses to vowel phonemes produced by the auditory nerve model the scientists created.
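The article does not include the researchers' model code, but a rough sense of what such peripheral coding of a vowel might look like can be sketched in a few lines: a synthetic two-formant vowel is passed through a log-spaced bandpass filterbank and half-wave rectified, a crude stand-in for the auditory periphery. All function names, filter settings and signal parameters below are illustrative assumptions, not the actual model.

```python
# Illustrative sketch only: a crude stand-in for peripheral (auditory nerve) coding.
# A synthetic two-formant "vowel" is passed through a bank of bandpass filters and
# half-wave rectified; the resulting per-channel activity is the kind of pattern a
# far more detailed auditory-periphery model would produce.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000                       # sample rate, Hz
t = np.arange(0, 0.2, 1 / FS)    # 200 ms frame

def synthetic_vowel(f0=120, formants=(700, 1200)):
    """Harmonic source shaped by two formant peaks (very rough /a/-like vowel)."""
    harmonics = np.arange(f0, 4000, f0)
    weights = sum(np.exp(-((harmonics - f) / 150.0) ** 2) for f in formants)
    return sum(w * np.sin(2 * np.pi * h * t) for h, w in zip(harmonics, weights))

def peripheral_response(signal, n_channels=16):
    """Bandpass filterbank + half-wave rectification: one activity value per channel."""
    edges = np.geomspace(100, 4000, n_channels + 1)   # log-spaced channels, cochlea-like
    activity = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        rectified = np.maximum(lfilter(b, a, signal), 0.0)   # crude hair-cell rectification
        activity.append(rectified.mean())
    return np.array(activity)

pattern = peripheral_response(synthetic_vowel())
print(np.round(pattern / pattern.max(), 2))   # normalised channel activity "pattern"
```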

Data processing was carried out by an algorithm that performed structural analysis to identify the neural activity patterns the model used to recognise each phoneme. The proposed approach combines self-organising neural networks and graph theory.
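The article gives no implementation details beyond that combination. One plausible reading, sketched below purely as an assumption, is to train a small self-organising map on the simulated nerve-response patterns and then link neighbouring map nodes whose prototypes are similar, so that connected components of the resulting graph act as candidate phoneme clusters. Grid size, thresholds and the toy data are all illustrative.

```python
# Illustrative sketch only: self-organising map (SOM) clustering of simulated
# auditory-nerve activity patterns, followed by a simple graph over the map nodes.
# Connected components of that graph act as candidate phoneme clusters. Everything
# here (sizes, thresholds, the toy data) is an assumption, not the authors' code.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy "nerve response" patterns: 3 phoneme-like clusters in a 16-dim feature space.
centres = rng.random((3, 16))
data = np.vstack([c + 0.05 * rng.standard_normal((50, 16)) for c in centres])

# --- Minimal SOM training ----------------------------------------------------
grid = np.array([(i, j) for i in range(6) for j in range(6)])   # 6x6 node grid
weights = rng.random((len(grid), 16))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)             # decaying learning rate
    radius = 3.0 * (1 - epoch / 30) + 0.5   # decaying neighbourhood radius
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))    # best-matching unit
        dist = np.linalg.norm(grid - grid[bmu], axis=1)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))            # neighbourhood function
        weights += lr * h[:, None] * (x - weights)

# --- Graph over SOM nodes ----------------------------------------------------
# Link grid-adjacent nodes whose prototypes are close; connected components then
# group nodes into clusters (a very loose sense of "structural analysis").
G = nx.Graph()
G.add_nodes_from(range(len(grid)))
for i in range(len(grid)):
    for j in range(i + 1, len(grid)):
        if np.linalg.norm(grid[i] - grid[j]) <= 1.0:             # neighbours on the grid
            if np.linalg.norm(weights[i] - weights[j]) < 0.25:   # similar prototypes
                G.add_edge(i, j)

clusters = list(nx.connected_components(G))
print(f"{len(clusters)} clusters found on the 6x6 map")
```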

According to the scientists, analysing the reactions of the auditory nerve fibres made it possible to identify vowel phonemes correctly under significant noise and outperformed the most common methods for parameterising acoustic signals.
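The article does not say how the noise conditions were set up; robustness claims of this kind are usually tested by mixing the clean signal with noise at a chosen signal-to-noise ratio, SNR(dB) = 10 log10(P_signal / P_noise). The helper below is a minimal, hypothetical illustration of that step, not the authors' evaluation code.

```python
# Illustrative sketch only: mix a clean signal with white noise at a target SNR,
# the usual setup for the kind of noise-robustness test described above.
import numpy as np

def add_noise(signal, snr_db, seed=0):
    """Return signal plus white noise scaled so that SNR(dB) = 10*log10(Ps/Pn)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(signal))
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

t = np.arange(0, 0.1, 1 / 16000)
noisy = add_noise(np.sin(2 * np.pi * 440 * t), snr_db=0)   # 0 dB: noise as strong as signal
```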

The researchers believe that the methods developed should help create a new generation of neurocomputer interfaces, as well as "provide better human-machine interaction".

The study therefore has great potential for practical application: cochlear implantation (the surgical restoration of hearing), separation of sound sources, and new bio-inspired approaches to speech processing, recognition and computational auditory scene analysis based on machine hearing principles.

"The algorithms for processing and analysing big data implemented within the research framework are universal and can be implemented to solve the tasks that are not related to acoustic signal processing," Yakovenko said.

--IANS

First Published: Dec 20 2018 | 5:40 PM IST