Deep Reinforcement Learning Autonomous Driving Using Lidar in the Physical World

abstract

Autonomous cars have been in the making for over 15 years. Skepticism has replaced the initial hype and enthusiasm. The dream of fully autonomous cars has been delayed because current self-driving systems, like those from major players such as Tesla and Waymo, rely on supervised learning, which requires vast amounts of labeled data and humans in the loop during the training phase. In this paper, as in our previous work, we consider robot-drivers as teen-drivers eager to learn how to drive but prone to mistakes in the beginning. The question we investigate is: "what if we allow autonomous cars to make mistakes like young human drivers do?"

The deep reinforcement learning approach opens new possibilities for solving complex control tasks. It lets the agent learn by interacting with the environment and from its own mistakes. Unfortunately, transferring learning from simulations to the real world is a hard problem. In this paper, we explore the use of lidar data as the input to a Deep Q-Network on a realistic 1/10-scale car prototype capable of training in real time. The robot-driver learns to drive around a circuit by exploiting experience gained in the real world, through a reward mechanism designed to help our robot-teen acquire its driving skills quickly.
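To make the lidar-to-reward pipeline concrete, the following is a minimal sketch of how a raw lidar scan might be reduced to a compact state for a Q-network and how a shaped reward could favor forward progress while penalizing proximity to obstacles. The sector count, distance thresholds, and penalty weights here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Illustrative constants (assumed, not taken from the paper)
N_SECTORS = 8      # coarse angular bins fed to the Q-network
CRASH_DIST = 0.2   # metres; below this we treat the step as a collision
SAFE_DIST = 0.6    # metres; distances above this incur no proximity penalty

def preprocess_lidar(scan: np.ndarray, n_sectors: int = N_SECTORS) -> np.ndarray:
    """Reduce a raw 1D lidar scan to the minimum distance per angular sector."""
    sectors = np.array_split(scan, n_sectors)
    return np.array([s.min() for s in sectors])

def reward(state: np.ndarray, speed: float) -> float:
    """Reward forward speed; penalize nearness to obstacles; punish crashes."""
    min_dist = float(state.min())
    if min_dist < CRASH_DIST:
        return -10.0  # large terminal penalty on collision
    # Linear penalty that grows as the closest obstacle approaches
    proximity_penalty = max(0.0, (SAFE_DIST - min_dist) / SAFE_DIST)
    return speed - proximity_penalty
```

In this sketch the Q-network would consume the `N_SECTORS`-dimensional state vector rather than the full scan, keeping the input small enough for real-time training on embedded hardware.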

outcomes