GRI: General Reinforced Imitation and Its Application to Vision-Based Autonomous Driving
Abstract
Deep reinforcement learning (DRL) has been demonstrated to be effective for several complex decision-making applications, such as autonomous driving and robotics. However, DRL is notoriously limited by its high sample complexity and its lack of stability. Prior knowledge, e.g., in the form of expert demonstrations, is often available but challenging to leverage to mitigate these issues. In this paper, we propose General Reinforced Imitation (GRI), a novel method which combines benefits from exploration and expert data and is straightforward to implement over any off-policy RL algorithm. We make one simplifying hypothesis: expert demonstrations can be seen as perfect data whose underlying policy gets a constant high reward. Based on this assumption, GRI introduces the notion of an offline demonstration agent. This agent provides expert data that are processed concurrently with, and indistinguishably from, the experiences coming from the online RL exploration agent. We show that our approach enables major improvements on camera-based autonomous driving in urban environments. We further validate the GRI method on MuJoCo continuous control tasks with different off-policy RL algorithms. Our method ranked first on the CARLA Leaderboard and outperforms World on Rails, the previous state-of-the-art method, by 17%.
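The mechanism described in the abstract, expert transitions relabeled with a constant high reward and mixed into the same replay buffer as exploration data, could be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation; the class name `GRIReplayBuffer`, the `demo_reward` value, and uniform sampling are assumptions made for the sketch.

```python
import random
from collections import deque


class GRIReplayBuffer:
    """Sketch of a replay buffer mixing exploration and demonstration data.

    Assumption: expert transitions carry a constant high reward and are
    sampled indistinguishably from exploration transitions, as the
    abstract describes.
    """

    def __init__(self, capacity=100_000, demo_reward=1.0):
        self.buffer = deque(maxlen=capacity)
        self.demo_reward = demo_reward  # assumed constant high reward for expert data

    def add_exploration(self, state, action, reward, next_state, done):
        # Transition collected by the online RL exploration agent,
        # stored with its environment reward.
        self.buffer.append((state, action, reward, next_state, done))

    def add_demonstration(self, state, action, next_state, done):
        # Expert transition from the offline demonstration agent:
        # its reward is replaced by the constant high reward,
        # following the paper's simplifying hypothesis.
        self.buffer.append((state, action, self.demo_reward, next_state, done))

    def sample(self, batch_size):
        # Both kinds of transitions are sampled uniformly, so the
        # off-policy learner cannot tell demonstration data from
        # exploration data.
        return random.sample(self.buffer, batch_size)
```

Any off-policy RL algorithm that learns from such a buffer would then consume the mixed batches without modification, which is consistent with the claim that GRI is straightforward to implement on top of existing off-policy methods.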