In a major breakthrough in speech recognition, researchers at Microsoft claim to have developed the first technology that recognises the words in a conversation as well as humans do.
A team of researchers and engineers in Microsoft Artificial Intelligence and Research created a speech recognition system that makes the same or fewer errors than professional transcriptionists.
They reported a word error rate (WER) of 5.9 per cent, down from the 6.3 per cent WER the team reported just last month.
The 5.9 per cent error rate is about equal to that of people who were asked to transcribe the same conversation, and it is the lowest ever recorded against the industry-standard Switchboard speech recognition task.
"We've reached human parity. This is a historic achievement," Xuedong Huang, the company's chief speech scientist said in a blog post.
The milestone means that, for the first time, a computer can recognise the words in a conversation as well as a person would.
In doing so, the team beat a goal they set less than a year ago - and greatly exceeded everyone else's expectations as well.
The research milestone comes after decades of research in speech recognition, beginning in the early 1970s with DARPA, the US agency tasked with making technology breakthroughs.
Over the decades, most major technology companies and many research organisations joined in the pursuit.
"This accomplishment is the culmination of over twenty years of effort," said Geoffrey Zweig, who manages the Speech and Dialogue research group.
The milestone will have broad implications for consumer and business products that can be significantly augmented by speech recognition.
That includes consumer entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription and personal digital assistants such as Cortana.
"This will make Cortana more powerful, making a truly intelligent assistant possible," Shum said.
The research milestone does not mean the computer recognised every word perfectly. In fact, humans do not do that, either.
Instead, it means that the error rate - the rate at which the computer mishears a word, such as "have" for "is" or "a" for "the" - is the same as you would expect from a person hearing the same conversation.
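For readers curious about the arithmetic behind that figure, below is a minimal Python sketch of the standard word error rate calculation, which counts substitutions, deletions and insertions against a reference transcript; the two short transcripts are hypothetical examples, not Microsoft's test data.

```python
# Minimal sketch of the standard word error rate (WER) calculation.
# The transcripts below are made-up examples, not Microsoft's data.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / words in reference,
    computed with an ordinary edit-distance dynamic programme."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,               # deletion
                           dp[i][j - 1] + 1,               # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

# Two of the four reference words are misheard, so the WER is 0.5.
print(word_error_rate("i have a cat", "i is the cat"))
```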
Zweig attributed the accomplishment to the systematic use of the latest neural network technology in all aspects of the system.
The push that got the researchers over the top was the use of neural language models in which words are represented as continuous vectors in space, and words like "fast" and "quick" are close together.
"This lets the models generalise very well from word to word," Zweig said.