Artificial intelligence has entered a new era, with research teams developing and optimizing algorithms that make computers more intelligent, more capable, and more flexible.
The strategy is taking hold: researchers are increasingly using AI to improve and optimize the software that runs the machines themselves, from the systems powering smartphones, televisions, and smartwatches to those in cars.
In an article recently published in the Journal of Artificial Intelligence Research, researchers from the Massachusetts Institute of Technology and the University of Michigan presented what they describe as the first paper in this emerging field.
Artificial intelligence is reaching a point where the computational tools and algorithms needed to improve a computer’s own systems are themselves advancing rapidly, said Steven Schulte, a professor of computer science and engineering at MIT.
The challenge is to understand how that process occurs and how to use that understanding to optimize the systems running on those computers.
In the future, AI systems will be more capable, and able to take on more complex tasks, than the machines they replace, Schulte said.
This will have a profound impact on how humans interact with computers.
“The goal of this paper is to build a model that helps us understand how this process is happening,” Schulte said.
“This is the first step toward understanding how to get computers to be more intelligent.”
The study involved creating a computer model to simulate the way computers operate and what it would take to improve a computer’s performance.
To do that, the researchers created two artificial intelligence models that were optimized for the task of making intelligent computers.
One model ran on IBM’s Watson supercomputer, while the other simulated a version running on a much smaller system, the Epson xD.
The computer model simulated the process of carrying out a large number of numerical calculations.
“When the computer is working on a task, the processor performs one calculation at a time but can do many of them quickly, and that allows it to make lots of decisions in a short time,” said study co-author Matthew Gellis, an assistant professor of information systems at MIT’s Sloan School of Management.
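The paper’s simulation is not reproduced here, but the general idea can be sketched in a few lines of Python. The toy model below steps through a batch of calculations one at a time and tallies how long a processor would take under two different configurations; the rates, stall probability, and workload size are all invented for illustration.

```python
# Toy model of a processor working through a batch of calculations one at
# a time. All rates and probabilities below are invented for illustration;
# this is not the simulation from the paper.
import random

def simulate(num_calcs, calcs_per_second, stall_prob=0.05, stall_seconds=1e-6, seed=0):
    """Return the simulated wall-clock time (seconds) to finish the batch.

    Each calculation takes 1/calcs_per_second seconds; with probability
    stall_prob it also incurs an extra memory stall of stall_seconds.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_calcs):
        total += 1.0 / calcs_per_second
        if rng.random() < stall_prob:
            total += stall_seconds
    return total

if __name__ == "__main__":
    workload = 1_000_000  # hypothetical number of calculations in the task
    for name, rate in [("large system", 5e6), ("small system", 5e5)]:
        print(f"{name}: {simulate(workload, rate):.3f} s for {workload:,} calculations")
```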
The paper first focused on optimizing the performance of the Watson model using a technique called “supervised learning,” which learns from labeled examples to automatically assign values to parameters in a computer program.
This technique was used to improve some of Watson’s performance at finding information in a database.
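The study’s actual training data and parameters are not described in this article. As a rough sketch of how supervised learning can tune a program’s parameters, the Python example below fits a simple regression to hypothetical measurements of query latency at different cache sizes and then picks the setting the model predicts will be fastest; the “cache size” feature and all numbers are assumptions made for illustration.

```python
# Sketch of supervised learning used to tune one program parameter.
# The "cache size" feature and all measurements are hypothetical; the
# paper's actual features and data are not reproduced here.
import numpy as np

# Training data: cache sizes (MB) that were tried, and the measured
# average query latency (ms) observed at each setting.
cache_mb   = np.array([8, 16, 32, 64, 128, 256], dtype=float)
latency_ms = np.array([41.0, 30.5, 24.2, 21.8, 21.5, 23.9])

# Supervised step: fit a simple quadratic model, latency ~ f(cache size),
# from the labeled examples above.
model = np.poly1d(np.polyfit(cache_mb, latency_ms, deg=2))

# Use the learned model to predict latency for candidate settings and
# pick the one expected to be fastest.
candidates = np.linspace(8, 256, 249)
best = candidates[np.argmin(model(candidates))]
print(f"predicted best cache size: {best:.0f} MB "
      f"({model(best):.1f} ms predicted latency)")
```

In practice the same loop of measuring, fitting, and reconfiguring can be repeated as new measurements arrive.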
The researchers then compared the Watson program’s performance with that of a version running on another IBM supercomputer using an open-source version of the same code.
“We have been very impressed with Watson’s results, and we think they are very promising,” said Eileen J. Soh, a research scientist in IBM’s Computer Science and Artificial Intelligence Group.
“It is possible to get results from this process, and it is really promising that we can apply it to machine learning.”
In those experiments, the two AI models achieved significantly better performance than the version running on the open-source code.
This improved performance, the authors say, could have applications in computer vision, speech recognition, and machine learning, as well as in medical diagnostics and medical devices.
Schulte and Soh believe the researchers have identified two key characteristics of a machine that can be optimized for artificial intelligence.
“These two characteristics are what are necessary for a system to perform well,” Schulte said.
The authors used the new technique to predict which computer vision tasks a human would perform best.
They were able to predict, for example, how well a human could perform at finding a photo on a smartphone or at identifying a photo of a cat.
The second key characteristic is a method for making computer programs that perform certain tasks more efficiently.
This type of optimization has been used for more than a decade to create computer programs for video game consoles and computer games.
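The paper’s optimization method is not detailed here. As a generic illustration of making a program perform a repeated task more efficiently, the sketch below caches the results of an expensive lookup so that repeated requests are answered almost instantly; the function names and the 10-millisecond “lookup” are invented for the example.

```python
# Generic illustration of speeding up a repeated task by caching results;
# this is not the optimization technique described in the paper.
from functools import lru_cache
import time

def slow_lookup(x):
    """Pretend this is an expensive operation, e.g. a database query."""
    time.sleep(0.01)
    return x * x

@lru_cache(maxsize=None)
def cached_lookup(x):
    # Reuse previously computed answers instead of recomputing them.
    return slow_lookup(x)

def timed(fn, queries):
    """Time how long it takes to answer all queries with the given function."""
    start = time.perf_counter()
    for q in queries:
        fn(q)
    return time.perf_counter() - start

queries = [1, 2, 3] * 50  # the same few requests made over and over
print(f"uncached: {timed(slow_lookup, queries):.2f} s")
print(f"cached:   {timed(cached_lookup, queries):.2f} s")
```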
However, the new technique could also be applied to other tasks, such as medical diagnostic and medical device development.
The study was supported by the National Science Foundation.
The paper is titled “A computational framework for optimal learning of reinforcement learning.”
It can be accessed at: https://doi.org/10.1126/sci.aai.246961.
The work was funded by the Advanced Research Projects Agency-Energy (ARPA-E).
The MIT Media Lab is part of the MIT Media Network, the Harvard John A. Paulson School of Engineering and Applied Sciences, and the Sloan School.