Measuring the Impact of Memory Replay in Training Pacman Agents using Reinforcement Learning
Saved in:
Bibliographic Details
Authors: Fallas Moya, Fabián; Duncan, Jeremiah; Samuel, Tabitha; Sadovnik, Amir
Format: conference paper
Publication Date: 2021
Abstract: Reinforcement Learning has been widely applied to classic games, where agents learn the rules by playing the game themselves. Recent work in general Reinforcement Learning uses improvements such as memory replay to boost results and shorten training time, but we have not found research that focuses on the impact of memory replay in agents that play simple classic video games. In this research, we present an analysis of the impact of three different memory replay techniques on the performance of a Deep Q-Learning model across different difficulty levels of the Pacman video game. We also propose a multi-channel image, a novel way to create input tensors for training the model, inspired by one-hot encoding, and we show in the experiments section that this idea improves performance. We find that our model learns faster than previous work and is even able to learn how to consistently win on the mediumClassic board after only 3,000 training episodes, previously thought to take much longer.
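The "multi-channel image" input described in the abstract can be illustrated with a minimal sketch. Note that the board characters, channel assignments, and function name below are illustrative assumptions, not taken from the paper: each entity type (wall, food, Pacman, ghost) is given its own binary channel, analogous to one-hot encoding.

```python
import numpy as np

# Hypothetical channel layout; the paper's actual encoding may differ.
ENTITY_CHANNELS = {"%": 0, ".": 1, "P": 2, "G": 3}  # wall, food, Pacman, ghost

def board_to_tensor(board_rows):
    """Convert a list of equal-length board strings into a
    (channels, height, width) binary tensor, one channel per entity."""
    h, w = len(board_rows), len(board_rows[0])
    tensor = np.zeros((len(ENTITY_CHANNELS), h, w), dtype=np.float32)
    for y, row in enumerate(board_rows):
        for x, ch in enumerate(row):
            idx = ENTITY_CHANNELS.get(ch)
            if idx is not None:
                tensor[idx, y, x] = 1.0
    return tensor

board = ["%%%%%",
         "%P.G%",
         "%%%%%"]
state = board_to_tensor(board)  # shape (4, 3, 5)
```

A tensor like `state` would then be fed to the Deep Q-Learning model's convolutional layers in place of a raw pixel image.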
Country: Kérwá
Institution: Universidad de Costa Rica
Repository: Kérwá
Language: English
OAI Identifier:oai:kerwa.ucr.ac.cr:10669/102295
Online Access: https://hdl.handle.net/10669/102295
https://doi.org/10.1109/CLEI53233.2021.9640031
Keywords: reinforcement learning
deep learning
memory replay
Q-Learning