Reinforcement Learning based Control of Imitative Policies for Near‑Accident Driving
Automated Driving
Machine Learning

TRI Authors: Allan Raventos, Adrien Gaidon, Guy Rosman

All Authors: Zhangjie Cao, Erdem Biyik, Woodrow Wang, Allan Raventos, Adrien Gaidon, Guy Rosman, and Dorsa Sadigh

Autonomous driving has achieved significant progress in recent years, but autonomous cars are still unable to tackle high-risk situations where an accident is likely. In such near-accident scenarios, even a minor change in the vehicle's actions may result in drastically different outcomes. Avoiding unsafe actions in these scenarios requires fully exploring the environment; however, reinforcement learning (RL) and imitation learning (IL), two widely used policy learning methods, cannot model rapid phase transitions and do not scale to cover all states. To address driving in near-accident scenarios, we propose a hierarchical reinforcement and imitation learning (H-ReIL) approach that consists of low-level policies learned by IL for discrete driving modes, and a high-level policy learned by RL that switches between those modes. Our approach exploits the advantages of both IL and RL by integrating them into a unified learning framework. Experimental results and user studies suggest our approach achieves higher efficiency and safety than other methods. Analyses of the learned policies demonstrate that the high-level policy switches appropriately between low-level policies in near-accident driving situations.
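
For intuition, the sketch below shows how such a hierarchy could be wired together: a high-level policy selects one of a small set of driving modes, and the low-level policy for that mode produces the continuous control. This is a minimal illustration, not the paper's implementation; the mode count, the toy tabular Q-learner, and all identifiers here are assumptions.

```python
import numpy as np

# Hypothetical H-ReIL-style hierarchy with two driving modes
# (e.g., cautious vs. aggressive); everything below is illustrative.

class ILPolicy:
    """Low-level policy; stands in for a network trained with imitation learning."""
    def __init__(self, weights):
        self.weights = weights

    def act(self, observation):
        # Map an observation vector to a continuous control, e.g., acceleration.
        return float(np.tanh(self.weights @ observation))


class ModeSwitcher:
    """High-level policy: a toy tabular Q-learner over discrete driving modes."""
    def __init__(self, n_states, n_modes, epsilon=0.1, alpha=0.5, gamma=0.99):
        self.q = np.zeros((n_states, n_modes))
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def select_mode(self, state):
        # Epsilon-greedy exploration over the small discrete mode space.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def update(self, state, mode, reward, next_state):
        # One-step Q-learning update on the mode-selection decision process.
        target = reward + self.gamma * self.q[next_state].max()
        self.q[state, mode] += self.alpha * (target - self.q[state, mode])


# Control loop: the RL policy picks the mode, the chosen IL policy acts.
rng = np.random.default_rng(0)
low_level = [ILPolicy(rng.normal(size=4)) for _ in range(2)]
high_level = ModeSwitcher(n_states=10, n_modes=2)

state, observation = 0, rng.normal(size=4)
mode = high_level.select_mode(state)
action = low_level[mode].act(observation)
print(f"mode={mode}, action={action:.3f}")
```

The design point this sketch tries to convey is that the high-level action space is tiny (a handful of modes), which keeps RL exploration tractable even though the underlying control space is continuous, while each IL policy only has to imitate consistent behavior within a single mode.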

Citation: Cao, Zhangjie, Erdem Biyik, Woodrow Wang, Allan Raventos, Adrien Gaidon, Guy Rosman, and Dorsa Sadigh, "Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving," Robotics: Science and Systems (RSS) (2020).