Researchers at MIT and the University of Waterloo have developed a machine that can understand and even imitate human speech, with the goal of making AI as intelligent as a person.
The AI is built on the principles of deep learning, which allows computers to learn patterns from large amounts of example data.
The new AI, which can both understand human speech and act on it, is part of ongoing work at MIT to develop deep learning systems that could be deployed in the near future.
“Our goal is to make artificial intelligence as capable with natural language as a person,” said Jari Kallio, a graduate student in MIT’s Department of Electrical Engineering and Computer Science.
Kallio was part of a group of researchers working with a machine dubbed Deep Speech, which is able both to understand natural language and act in accordance with it.
To make the AI as powerful as it is, Kallio said, he followed a standard machine learning approach, training it on data from thousands of videos recorded by humans.
Deep Speech is able to read the context of human speech using deep learning algorithms, and then combine that data with data from other videos recorded in the same way.
It then learns from both sources to build a more complete understanding of how people speak, the researchers said in a statement.
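The learning process described above can be illustrated with a small sketch. The names and data here are entirely hypothetical: real systems use deep neural networks over audio spectrograms, while this toy stands in for the idea with a simple logistic-regression model trained on synthetic "recordings", purely to show a model improving as it sees more labeled examples.

```python
# Hypothetical sketch of learning from many recorded examples.
# A real speech system would use deep networks over spectrograms;
# here, each "recording" is a fake feature sequence and the model
# is a one-weight logistic regression, just to illustrate training.
import math
import random

random.seed(0)

def make_recording(label, n_frames=20):
    """Synthesize a fake feature sequence; the label shifts the mean."""
    mean = 1.0 if label == 1 else -1.0
    return [mean + random.gauss(0, 0.5) for _ in range(n_frames)]

def average(frames):
    """Collapse a frame sequence into a single summary feature."""
    return sum(frames) / len(frames)

def predict(w, b, x):
    """Sigmoid output: probability that the recording has label 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Training corpus: many recordings with known labels.
data = [(make_recording(y), y) for y in [0, 1] * 50]

w, b, lr = 0.0, 0.0, 0.1

def mean_loss():
    """Average cross-entropy loss over the whole corpus."""
    total = 0.0
    for frames, y in data:
        p = predict(w, b, average(frames))
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(data)

initial_loss = mean_loss()
for _ in range(100):            # repeated passes over the examples
    for frames, y in data:
        x = average(frames)
        p = predict(w, b, x)
        w -= lr * (p - y) * x   # gradient step on the weight
        b -= lr * (p - y)       # gradient step on the bias
final_loss = mean_loss()
```

After training, the loss over the corpus drops substantially, and the model assigns high probability to recordings drawn from the positive class, mirroring in miniature how exposure to many examples sharpens a learned model.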
“When you listen to a human voice, you make nuanced judgments about the speaker almost instantly. The fact that a machine can reach a comparably nuanced understanding of a human is quite impressive,” Kallio said.
The machine uses the same deep learning techniques found in other state-of-the-art systems, and is able “to learn from a huge amount of data”, he said.
Kallio said the work is a first step toward the goal of creating an AI capable of speaking as well as understanding natural language.
“When we think of artificial intelligence or artificial intelligence systems, we think about machines that are capable of thinking,” Kallio explained.
“But in the real world, we often don’t think about systems that can read human speech or understand what a person wants to say.
We think about computers that can solve complex problems.”
Deep Speech has been tested on a number of videos, including one featuring the rapper Eminem.
The researchers said they hope to make their AI even smarter.
“We are still working on making Deep Speech more intelligent and capable of reading human speech.
The goal is for the machine to learn from a wide range of sources so that it understands more and more of the language spoken by humans,” Kallio said.