Retrieval-Augmented Reinforcement Learning
Article Status
Published
Authors/contributors
- Goyal, Anirudh (Author)
- Friesen, Abram L. (Author)
- Banino, Andrea (Author)
- Weber, Theophane (Author)
- Ke, Nan Rosemary (Author)
- Badia, Adria Puigdomenech (Author)
- Guez, Arthur (Author)
- Mirza, Mehdi (Author)
- Humphreys, Peter C. (Author)
- Konyushkova, Ksenia (Author)
- Sifre, Laurent (Author)
- Valko, Michal (Author)
- Osindero, Simon (Author)
- Lillicrap, Timothy (Author)
- Heess, Nicolas (Author)
- Blundell, Charles (Author)
Title
Retrieval-Augmented Reinforcement Learning
Abstract
Most deep reinforcement learning (RL) algorithms distill experience into parametric behavior policies or value functions via gradient updates. While effective, this approach has several disadvantages: (1) it is computationally expensive, (2) it can take many updates to integrate experiences into the parametric model, (3) experiences that are not fully integrated do not appropriately influence the agent's behavior, and (4) behavior is limited by the capacity of the model. In this paper we explore an alternative paradigm in which we train a network to map a dataset of past experiences to optimal behavior. Specifically, we augment an RL agent with a retrieval process (parameterized as a neural network) that has direct access to a dataset of experiences. This dataset can come from the agent's past experiences, expert demonstrations, or any other relevant source. The retrieval process is trained to retrieve information from the dataset that may be useful in the current context, helping the agent achieve its goal faster and more efficiently. The proposed method enables learning agents that, at test time, can condition their behavior on the entire dataset, not only on the current state or trajectory. We integrate our method into two different RL agents: an offline DQN agent and an online R2D2 agent. In offline multi-task problems, we show that the retrieval-augmented DQN agent avoids task interference and learns faster than the baseline DQN agent. On Atari, we show that retrieval-augmented R2D2 learns significantly faster than the baseline R2D2 agent and achieves higher scores. We run extensive ablations to measure the contributions of the components of our proposed method.
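As a concrete illustration of the mechanism the abstract describes, here is a minimal Python/NumPy sketch of an agent whose action selection is conditioned on experiences retrieved from an external dataset. All names are hypothetical, and retrieval is approximated here by a fixed cosine-similarity lookup; in the paper the retrieval process itself is a neural network trained jointly with the agent.

```python
# Hypothetical sketch: an agent that retrieves related past experiences
# at decision time and conditions its action scores on them. The paper's
# retrieval process is learned end-to-end; this fixed similarity lookup
# is only an illustrative stand-in.
import numpy as np

class RetrievalAugmentedAgent:
    def __init__(self, encoder, policy_head, experience_dataset, k=5):
        self.encoder = encoder            # maps an observation to an embedding vector
        self.policy_head = policy_head    # maps [embedding; retrieved summary] to action scores
        self.k = k
        # Precompute keys (embedded observations) and values (experience summaries).
        self.keys = np.stack([encoder(e["obs"]) for e in experience_dataset])
        self.values = np.stack([e["summary"] for e in experience_dataset])

    def retrieve(self, query):
        # Cosine similarity between the query embedding and all stored keys.
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-8
        )
        top = np.argsort(sims)[-self.k:]          # indices of the k most similar experiences
        weights = np.exp(sims[top])
        weights /= weights.sum()                   # softmax over the top-k similarities
        return weights @ self.values[top]          # weighted summary of retrieved experiences

    def act(self, obs):
        query = self.encoder(obs)
        retrieved = self.retrieve(query)
        # Behavior is conditioned on the retrieved information, not only the current state.
        scores = self.policy_head(np.concatenate([query, retrieved]))
        return int(np.argmax(scores))
```

Unlike this fixed lookup, the learned retrieval process in the paper is trained so that what gets retrieved is whatever most helps the agent in its current context, which is what lets the method scale beyond the capacity of a purely parametric policy.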
Repository
arXiv
Archive ID
arXiv:2202.08417
Date
2022-05-24
Accessed
13/12/2023, 21:19
Extra
arXiv:2202.08417 [cs]
Citation
Goyal, A., Friesen, A. L., Banino, A., Weber, T., Ke, N. R., Badia, A. P., Guez, A., Mirza, M., Humphreys, P. C., Konyushkova, K., Sifre, L., Valko, M., Osindero, S., Lillicrap, T., Heess, N., & Blundell, C. (2022). Retrieval-Augmented Reinforcement Learning (arXiv:2202.08417). arXiv. http://arxiv.org/abs/2202.08417