Reinforcement learning (RL) is a technique in which an AI agent interacts with an environment and learns a policy based on the rewards it receives during that interaction. Progress in RL has been dramatically demonstrated by human-level performance on Atari video games. The key to this progress was generating large amounts of data using game simulators.
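To make that interaction loop concrete, here is a minimal sketch written against the Gymnasium API, with a random policy and a simple simulated environment (CartPole) standing in for a trained agent and an Atari game:

```python
# Minimal agent-environment loop: the agent acts, the simulator responds with
# an observation and a reward, and the reward signal is what drives learning.
# A random policy stands in for a learned one here.
import gymnasium as gym

env = gym.make("CartPole-v1")               # simple stand-in for a game simulator
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()      # a trained policy would choose here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"episode return: {total_reward}")
env.close()
```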
There are two hurdles in translating this progress into real-world applications such as assembly-line robots or robots that help the elderly in their homes. First, robots are complex and fragile; learning by taking random actions could damage the robot or its surroundings.
Second, the environment in which a robot operates is often different from the one it was trained for. A self-driving car, for instance, might have to work in a different part of the city from the one in which it was trained. How can we build learning machines that can handle new scenarios?
In a paper that we will present at the International Conference on Learning Representations, we describe a new reinforcement learning algorithm named MQL (for meta-Q-learning) that enables an AI agent to quickly adapt to new variations of familiar tasks.
Learning to learn
With MQL, as with other “meta-learning” algorithms, an agent is trained on a large number of related tasks — e.g., how to pick up objects of different shapes — and then tested on how well it learns new variations of those tasks.
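As a toy illustration of that protocol (the train-on-many-tasks, adapt-to-a-new-variation setup, not MQL itself), the sketch below uses hypothetical one-dimensional tasks that differ only in a target value:

```python
# Toy meta-learning protocol: experience many related tasks during training,
# then adapt to an unseen variation with only a few interactions.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is defined by a target; the reward is negative distance to it."""
    return rng.uniform(-1.0, 1.0)

def reward(action, target):
    return -abs(action - target)

# "Meta-training": see many tasks and keep a good default action.
train_targets = [sample_task() for _ in range(1000)]
default_action = float(np.mean(train_targets))

# "Meta-test": a new variation of the task, adapted to with a handful of updates.
# (A real agent would have to infer the target from rewards; the toy skips that.)
test_target = sample_task()
action = default_action
for _ in range(5):
    action += 0.5 * (test_target - action)

print(f"return before adaptation: {reward(default_action, test_target):.3f}")
print(f"return after adaptation:  {reward(action, test_target):.3f}")
```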
MQL differs from other meta-learning algorithms in two key ways. The first is that during training, the agent learns to compute a context variable specific to each task. This enables it to learn different models for different tasks: picking up a coffee cup, for instance, is very different from picking up a soccer ball.
Second, during testing, MQL uses a statistical technique called propensity estimation to search its training data for past interactions that look similar to those from the new task it’s learning. This allows MQL to adapt to the new task with minimal interactions.
Consider a robot that is learning to pick up objects. In the RL framework, the robot would try to pick up each object; it would receive a reward every time it successfully picked one up and a penalty every time it dropped one.
Over repeated trials, the robot learns a policy that enables it to pick up all the objects in the training set. It is likely to do better, however, if that policy includes different interaction models for different objects.
This is the first key idea behind MQL: the robot learns a context that differentiates the model for the coffee cup from the model for the soccer ball. MQL uses a gated-recurrent-unit (GRU) neural network to create a representation of the task, and the system as a whole is conditioned on that representation.
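The sketch below shows one way this idea could look in PyTorch; the transition encoding, dimensions, and network sizes are illustrative assumptions, not the architecture from the paper:

```python
# A GRU summarizes the recent trajectory into a context vector, and the
# Q-function is conditioned on that context in addition to state and action.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, transition_dim, context_dim):
        super().__init__()
        self.gru = nn.GRU(transition_dim, context_dim, batch_first=True)

    def forward(self, trajectory):           # (batch, time, transition_dim)
        _, h = self.gru(trajectory)          # final hidden state summarizes the task
        return h.squeeze(0)                  # (batch, context_dim)

class ContextConditionedQ(nn.Module):
    def __init__(self, obs_dim, act_dim, context_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act, context):
        return self.net(torch.cat([obs, act, context], dim=-1))

# Usage: encode a short trajectory from the current task, then query Q-values.
encoder = ContextEncoder(transition_dim=10, context_dim=16)
q_fn = ContextConditionedQ(obs_dim=6, act_dim=2, context_dim=16)
trajectory = torch.randn(1, 20, 10)          # 20 recent (state, action, reward) tuples
context = encoder(trajectory)
q_value = q_fn(torch.randn(1, 6), torch.randn(1, 2), context)
```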
Reusing data
The context helps the system predict a model for handling a new task — say, picking up a bottle of water. Adapting that model, however, can still require a large number of training samples. This brings in the second key component of MQL: its use of propensity estimation.
A propensity score estimates how likely it is that a given sample was drawn from one of two distributions rather than the other. MQL uses propensity estimation to decide which parts of the training data are close to the data from the test task: picking up a bottle, for instance, is closer to picking up a coffee cup than to picking up a soccer ball. The model can then sample from the relevant training data, augmenting the data from the new task in order to adapt more efficiently.
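The sketch below shows one simple way to put this into practice, with a logistic-regression classifier as the propensity model; the placeholder feature arrays and the choice to keep only the highest-scoring transitions are assumptions for illustration, not the paper's exact procedure:

```python
# Propensity estimation for data reuse: a classifier learns to tell old
# (training-task) transitions from new-task transitions, and its predicted
# probabilities score how relevant each old transition is to the new task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
replay_data = rng.normal(0.0, 1.0, size=(5000, 8))    # transitions from training tasks
new_task_data = rng.normal(0.5, 1.0, size=(200, 8))   # a few transitions from the new task

# Label which data set each sample came from and fit the propensity model.
X = np.vstack([replay_data, new_task_data])
y = np.concatenate([np.zeros(len(replay_data)), np.ones(len(new_task_data))])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Keep the old transitions that look most like the new task.
scores = clf.predict_proba(replay_data)[:, 1]
keep = scores > np.quantile(scores, 0.9)               # top 10% most relevant
relevant_replay = replay_data[keep]
print(f"reusing {len(relevant_replay)} of {len(replay_data)} old transitions")
```

Weighting old transitions by their estimated propensity, rather than hard thresholding, is a natural variant of the same idea.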
We also used propensity estimation in our paper “P3O: Policy-on Policy-off Policy Optimization”, which we presented at the Conference on Uncertainty in Artificial Intelligence (UAI) in July 2019. There, too, the technique helped reduce the number of samples required to train reinforcement learning algorithms.
As AI systems tackle larger and larger sets of applications, the amount of data available for training begins to feel small. Techniques like MQL are a way to bootstrap the learning of new tasks from existing data and dramatically reduce the data requirements for training AI systems.