With AlphaGo, DeepMind demonstrated that artificial intelligence research has progressed further than many expected within our lifetimes. The Alphabet division is now tackling imagination — “a distinctly human ability” — to create AIs that better handle the complexity and unpredictability of the real world.
The London-based research group calls imagination a “powerful tool of human cognition” that allows for the visualization of consequences. In one example, DeepMind describes the human ability to realize the danger of placing a glass on the edge of a table:
When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall. On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking.
DeepMind argues that AIs need to be able to imagine and reason about the future in order to develop “sophisticated behaviors.” AlphaGo, for instance, used an “internal model” to “analyse how actions lead to future outcomes in order to reason and plan.”
However, such models excelled at Go because the game follows clearly defined rules that can be programmed and accurately predicted. Reality, in comparison, is vastly different:
But the real world is complex, rules are not so clearly defined and unpredictable problems often arise. Even for the most intelligent agents, imagining in these complex environments is a long and costly process.
A neural network known as an “imagination encoder” extracts information that will be useful for future decisions and ignores what is irrelevant. This makes imagination-augmented agents efficient: they can learn different strategies for constructing a plan from their imagined futures.
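To make the idea concrete, here is a minimal toy sketch of imagination-augmented decision-making, using DeepMind's own glass-on-a-table example. Every name here (`ToyModel`, `imagine_rollout`, `encode`, `choose_action`) is illustrative, not DeepMind's actual I2A implementation; the real encoder is a learned neural network rather than a hand-written summary.

```python
class ToyModel:
    """Learned environment model (stand-in): predicts the next state
    that results from taking an action in a state."""
    def predict(self, position, action):
        # position: distance of the glass from the table edge (0 = edge)
        # action: how far we nudge the glass toward the table centre
        return position + action

def imagine_rollout(model, position, action, depth=3):
    """Roll the model forward for `depth` imagined steps, repeating the
    action, and return the imagined trajectory of states."""
    trajectory = []
    for _ in range(depth):
        position = model.predict(position, action)
        trajectory.append(position)
    return trajectory

def encode(trajectory):
    """Toy 'imagination encoder': compress a trajectory into a single
    decision-relevant score — here, the closest the glass ever gets to
    the edge (the worst imagined moment)."""
    return min(trajectory)

def choose_action(model, position, candidate_actions):
    """Score each candidate action by its imagined consequences and
    pick the one whose worst imagined state is safest."""
    scores = {a: encode(imagine_rollout(model, position, a))
              for a in candidate_actions}
    return max(scores, key=scores.get)

model = ToyModel()
# The glass sits 1 unit from the edge; candidates include a risky push
# toward the edge (-1.0) and safer nudges toward the centre.
best = choose_action(model, position=1.0, candidate_actions=[-1.0, 0.0, 0.5])
print(best)  # → 0.5
```

The agent never actually knocks the glass off: the risky action is rejected purely on the basis of its imagined consequence, which is the point of the architecture.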
DeepMind again used games that require forward planning and reasoning to test these new architectures. The puzzle game Sokoban features irreversible moves, while a spaceship navigation game — described as a “highly nonlinear complex continuous control task” — has the AI stabilize a craft with as few thruster firings as possible while accounting for gravitational pull.

In these tests, the AI can try each level only once, encouraging it to imagine different strategies before applying them.
The results are promising: imagination-augmented agents outperform standard AIs while learning from less experience and working more efficiently. Adding a “manager” component that constructs plans led to further efficiencies. However, we are still some way from the sci-fi concept of AI:
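The role of a “manager” can be sketched as deciding how much imagination to spend before acting. The sketch below is an assumption-laden illustration (the function `manage_planning`, the budget, and the threshold are all invented for this example, not DeepMind's design): the manager keeps evaluating imagined options until one is good enough or the imagination budget runs out, trading plan quality against computation.

```python
def manage_planning(score_fn, candidates, budget=4, good_enough=1.0):
    """Toy 'manager': evaluate candidate actions one imagination step at
    a time, stopping early once a sufficiently good action is found or
    the imagination budget is exhausted."""
    best_action, best_score = None, float("-inf")
    for steps, action in enumerate(candidates, start=1):
        if steps > budget:
            break  # further imagining costs more than it is worth
        score = score_fn(action)  # score_fn stands in for an imagined rollout
        if score > best_score:
            best_action, best_score = action, score
        if best_score >= good_enough:
            break  # the plan is already good enough; act now
    return best_action

# Toy scoring: actions that move the glass further from the edge score higher.
best = manage_planning(lambda a: a, candidates=[-1.0, 0.0, 0.5, 2.0], budget=4)
print(best)  # → 2.0 (stops as soon as a score reaches the threshold)
```

Capping imagination this way is why the manager yields efficiency gains: the agent imagines only as much as the situation demands.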
[F]urther analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about – and plan – for the future.
Note: We are not the writers of this article; the original source is mentioned below.