End To End Autonomous Driving

A collection of recent resources on End-to-End Autonomous Driving [survey accepted in IEEE TIV]


End-to-End Autonomous Driving

End-to-end autonomous driving is a promising paradigm because it circumvents drawbacks associated with modular systems, such as their overwhelming complexity and propensity for error propagation. By proactively recognizing critical events in advance, such systems aim to ensure passengers' safety and provide them with comfortable transportation, particularly in highly stochastic and variable traffic settings.


Recent Advancements in End-to-End Autonomous Driving using Deep Learning: A Survey


Authors: Pranav Singh Chib, Pravendra Singh

Modular architecture is a widely used approach in autonomous driving systems, which divides the driving pipeline into discrete sub-tasks. This architecture relies on individual sensors and algorithms to process data and generate control outputs. In contrast, the End-to-End autonomous driving approach streamlines the system, improving efficiency and robustness by directly mapping sensory input to control outputs. The benefits of End-to-End autonomous driving have garnered significant attention in the research community.

This repo contains a curated list of resources on End-to-End Autonomous Driving, arranged chronologically. We regularly update it with the latest papers and their corresponding open-source implementations.

Table of Contents


LEARNING APPROACHES

The following are the different learning approaches used in End-to-End Driving.

Imitation learning

Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving. [CVPR2023]
Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li
GitHub

Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling [ICLR2023]
Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao

GitHub

Hidden Biases of End-to-End Driving Models [ICCV2023]
Bernhard Jaeger, Kashyap Chitta, Andreas Geiger
GitHub

Scaling Vision-based End-to-End Autonomous Driving with Multi-View Attention Learning [IROS2023]
Yi Xiao, Felipe Codevilla, Diego Porres, Antonio M. Lopez

Learning from All Vehicles [CVPR2022]
Dian Chen, Philipp Krähenbühl
GitHub

PlanT: Explainable Planning Transformers via Object-Level Representations [CoRL2022]
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger
GitHub

Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [CVPR2021]
Aditya Prakash, Kashyap Chitta, Andreas Geiger
GitHub

Learning by Watching [CVPR2021]
Jimuyang Zhang, Eshed Ohn-Bar

End-to-End Urban Driving by Imitating a Reinforcement Learning Coach [ICCV2021]
Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool
GitHub

Learning by Cheating [CoRL2020]
Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl
GitHub

SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning [CoRL2020]
Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto
GitHub

Urban Driving with Conditional Imitation Learning [ICRA2020]
Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall

Multimodal End-to-End Autonomous Driving [TITS2020]
Yi Xiao, Felipe Codevilla, Akhil Gurram, Onay Urfalioglu, Antonio M. López

Learning to Drive from Simulation without Real World Labels [ICRA2019]
Alex Bewley, Jessica Rigley, Yuxuan Liu, Jeffrey Hawke, Richard Shen, Vinh-Dieu Lam, Alex Kendall
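Several of the papers above (notably the conditional imitation learning line of work) share the same skeleton: a policy maps observations to control commands, with one output branch per high-level navigation command, and is trained to match expert demonstrations. A minimal sketch of that setup, using toy linear branches and synthetic data (all models, shapes, and numbers here are illustrative, not from any listed paper):

```python
import numpy as np

# Toy conditional imitation learning: one linear policy branch per high-level
# navigation command (e.g. follow / turn left / turn right), selected at run
# time by the planner's command. Expert data is synthetic.
rng = np.random.default_rng(1)
n_features, n_actions, n_commands = 4, 2, 3

# Hidden "expert" with one branch per command, used to generate demonstrations.
W_expert = rng.normal(size=(n_commands, n_actions, n_features))
feats = rng.normal(size=(1000, n_features))          # stand-in sensor features
cmds = rng.integers(0, n_commands, size=1000)        # navigation commands
acts = np.einsum("caf,nf->nca", W_expert, feats)[np.arange(1000), cmds]

# Fit each command branch by least squares on its share of the demonstrations
# (the imitation objective: minimize the gap to expert actions).
W_learned = np.zeros_like(W_expert)
for c in range(n_commands):
    mask = cmds == c
    sol, *_ = np.linalg.lstsq(feats[mask], acts[mask], rcond=None)
    W_learned[c] = sol.T

def drive(features, command):
    """Select the branch matching the current navigation command."""
    return W_learned[command] @ features

print(np.allclose(W_learned, W_expert, atol=1e-6))   # clone recovers the expert
```

Real systems replace the linear branches with deep networks and the least-squares fit with gradient descent, but the command-conditioned branching structure is the same.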

Behavioural cloning

TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving [TPAMI2022]
Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger
GitHub

Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline [NeurIPS2022]
Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao
GitHub

KING: Generating Safety-Critical Driving Scenarios for Robust Imitation via Kinematics Gradients [ECCV2022]
Niklas Hanselmann, Katrin Renz, Kashyap Chitta, Apratim Bhattacharyya, Andreas Geiger
GitHub

Learning to Drive by Watching YouTube Videos: Action-Conditioned Contrastive Policy Pretraining [ECCV2022]
Qihang Zhang, Zhenghao Peng, Bolei Zhou
GitHub

NEAT: Neural Attention Fields for End-to-End Autonomous Driving [ICCV2021]
Kashyap Chitta, Aditya Prakash, Andreas Geiger
GitHub

Learning Situational Driving [CVPR2020]
Eshed Ohn-Bar, Aditya Prakash, Aseem Behl, Kashyap Chitta, Andreas Geiger

Exploring the Limitations of Behavior Cloning for Autonomous Driving [ICCV2019]
Felipe Codevilla, Eder Santana, Antonio M. López, Adrien Gaidon
GitHub

Reinforcement learning

Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization [ICLR2022]
Quanyi Li, Zhenghao Peng, Bolei Zhou
GitHub

End-to-End Urban Driving by Imitating a Reinforcement Learning Coach [ICCV2021]
Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool
GitHub

Learning To Drive From a World on Rails [ICCV2021]
Dian Chen, Vladlen Koltun, Philipp Krähenbühl
GitHub

End-to-End Model-Free Reinforcement Learning for Urban Driving Using Implicit Affordances [CVPR2020]
Marin Toromanoff, Emilie Wirbel, Fabien Moutarde

Learning to drive in a day [ICRA2019]
Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah
GitHub
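In contrast to imitation, a reinforcement-learning agent learns from reward rather than demonstrations. A deliberately tiny tabular Q-learning sketch on a toy road (purely illustrative of the paradigm; none of the listed papers use this toy environment):

```python
import numpy as np

# Toy tabular Q-learning: a car advances along 5 road cells toward a goal.
# Driving "fast" through cell 2 hits a pedestrian (large penalty), so the
# agent must learn to slow down before the crossing.
rng = np.random.default_rng(0)
GOAL, PED = 4, 2
Q = np.zeros((5, 2))                       # states 0..4; actions: 0=slow, 1=fast
alpha, gamma, eps = 0.3, 0.95, 0.2

def step(s, a):
    nxt = s + (1 if a == 0 else 2)
    if a == 1 and nxt == PED:
        return nxt, -10.0, True            # hit the pedestrian at speed
    if nxt >= GOAL:
        return GOAL, 1.0, True             # reached the goal
    return nxt, -0.1 if a == 0 else -0.05, False

for _ in range(2000):                      # episodes of epsilon-greedy learning
    s, done = 0, False
    while not done:
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, a)
        target = r if done else r + gamma * np.max(Q[nxt])
        Q[s, a] += alpha * (target - Q[s, a])
        s = nxt

print(int(np.argmax(Q[0])))                # 0: learned to go slow before the crossing
```

The papers in this section scale the same reward-driven loop to deep networks, continuous controls, and simulated urban driving.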

Multi-task learning

Planning-oriented Autonomous Driving :trophy: Best Paper [CVPR2023]
Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li
GitHub

ReasonNet: End-to-End Driving with Temporal and Global Reasoning [CVPR2023]
Hao Shao, Letian Wang, Ruobing Chen, Steven L. Waslander, Hongsheng Li, Yu Liu

Coaching a Teachable Student [CVPR2023]
Jimuyang Zhang, Zanming Huang, Eshed Ohn-Bar

Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving. [CVPR2023]
Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li
GitHub

Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer [CoRL2022]
Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu
GitHub

SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning [CoRL2020]
Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto
GitHub

Urban Driving with Conditional Imitation Learning [ICRA2020]
Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall

Knowledge Distillation

Learning from All Vehicles [CVPR2022]
Dian Chen, Philipp Krähenbühl
GitHub

End-to-End Urban Driving by Imitating a Reinforcement Learning Coach [ICCV2021]
Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool
GitHub

Learning To Drive From a World on Rails [ICCV2021]
Dian Chen, Vladlen Koltun, Philipp Krähenbühl
GitHub

Learning by Cheating [CoRL2020]
Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl
GitHub

SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning [CoRL2020]
Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto
GitHub
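The recurring pattern in this section is a privileged teacher supervising a sensor-limited student, as popularized by Learning by Cheating. A toy sketch under that assumption, with linear maps standing in for the teacher and student networks (all values synthetic):

```python
import numpy as np

# Toy knowledge distillation: a privileged "teacher" policy acts on ground-truth
# world state; the "student" sees only a noisy observation of that state and is
# fit to imitate the teacher's outputs.
rng = np.random.default_rng(0)
W_teacher = np.array([[0.8, -0.3], [0.2, 0.5]])        # privileged teacher policy

state = rng.normal(size=(2000, 2))                     # ground-truth world state
obs = state + rng.normal(0, 0.05, size=state.shape)    # student's noisy sensors
teacher_actions = state @ W_teacher.T                  # distillation targets

# Student fit by least squares on (its observation, teacher's action) pairs.
W_student, *_ = np.linalg.lstsq(obs, teacher_actions, rcond=None)
print(np.allclose(W_student.T, W_teacher, atol=0.1))   # student ≈ teacher
```

The appeal of the pattern is that the teacher's supervision is dense and easy to obtain in simulation, while the deployed student needs only real sensor inputs.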

Other Learning

ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning [ECCV2022]
Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng Tao
GitHub

🔼 Back to top


EXPLAINABILITY

Post-hoc saliency methods

Attention

Planning-oriented Autonomous Driving :trophy: Best Paper [CVPR2023]
Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li
GitHub

Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling [ICLR2023]
Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao

Scaling Vision-based End-to-End Autonomous Driving with Multi-View Attention Learning [IROS2023]
Yi Xiao, Felipe Codevilla, Diego Porres, Antonio M. Lopez

TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving [TPAMI2022]
Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger
GitHub

PlanT: Explainable Planning Transformers via Object-Level Representations [CoRL2022]
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger
GitHub

Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [CVPR2021]
Aditya Prakash, Kashyap Chitta, Andreas Geiger
GitHub

NEAT: Neural Attention Fields for End-to-End Autonomous Driving [ICCV2021]
Kashyap Chitta, Aditya Prakash, Andreas Geiger
GitHub

Semantic representation and Auxiliary output

Learning from All Vehicles [CVPR2022]
Dian Chen, Philipp Krähenbühl
GitHub

TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving [TPAMI2022]
Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger
GitHub

ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning [ECCV2022]
Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng Tao
GitHub

Counterfactual explanation

Attention

Planning-oriented Autonomous Driving :trophy: Best Paper [CVPR2023]
Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li
GitHub

Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer [CoRL2022]
Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu
GitHub

PlanT: Explainable Planning Transformers via Object-Level Representations [CoRL2022]
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger
GitHub

NEAT: Neural Attention Fields for End-to-End Autonomous Driving [ICCV2021]
Kashyap Chitta, Aditya Prakash, Andreas Geiger
GitHub

Semantic representation and Auxiliary output

Hidden Biases of End-to-End Driving Models [ICCV2023]
Bernhard Jaeger, Kashyap Chitta, Andreas Geiger
GitHub

TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving [TPAMI2022]
Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger
GitHub

Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer [CoRL2022]
Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu
GitHub

Learning Situational Driving [CVPR2020]
Eshed Ohn-Bar, Aditya Prakash, Aseem Behl, Kashyap Chitta, Andreas Geiger

🔼 Back to top


EVALUATION

Open Loop

Closed Loop


CARLA LEADERBOARD 1.0 UNTIL AUGUST 2023

| Rank | Submission | DS (%) | RC (%) | IP [0,1] | CP | CV | CL | RLI | SSI | OI | RD | AB | Type (E/M) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ReasonNet: End-to-End Driving with Temporal and Global Reasoning | 79.95 | 89.89 | 0.89 | 0.02 | 0.13 | 0.01 | 0.08 | 0.00 | 0.04 | 0.00 | 0.33 | E |
| 2 | Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer | 76.18 | 88.23 | 0.84 | 0.04 | 0.37 | 0.14 | 0.22 | 0.00 | 0.13 | 0.00 | 0.43 | E |
| 3 | Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline | 75.14 | 85.63 | 0.87 | 0.00 | 0.32 | 0.00 | 0.09 | 0.00 | 0.04 | 0.00 | 0.54 | E |
| 4 | Hidden Biases of End-to-End Driving Models | 66.32 | 78.57 | 0.84 | 0.00 | 0.50 | 0.00 | 0.01 | 0.00 | 0.12 | 0.00 | 0.71 | E |
| 5 | Learning from All Vehicles | 61.85 | 94.46 | 0.64 | 0.04 | 0.70 | 0.02 | 0.17 | 0.00 | 0.25 | 0.09 | 0.10 | E |
| 6 | TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving | 61.18 | 86.69 | 0.71 | 0.04 | 0.81 | 0.01 | 0.05 | 0.00 | 0.23 | 0.00 | 0.43 | E |
| 7 | Latent TransFuser | 45.20 | 66.31 | 0.72 | 0.02 | 1.11 | 0.02 | 0.05 | 0.00 | 0.16 | 0.00 | 1.82 | E |
| 8 | GRIAD | 36.79 | 61.85 | 0.60 | 0.00 | 2.77 | 0.41 | 0.48 | 0.00 | 1.39 | 1.11 | 0.84 | E |
| 9 | TransFuser+ | 34.58 | 69.84 | 0.56 | 0.04 | 0.70 | 0.03 | 0.75 | 0.00 | 0.18 | 0.00 | 2.41 | E |
| 10 | Learning To Drive From a World on Rails | 31.37 | 57.65 | 0.56 | 0.61 | 1.35 | 1.02 | 0.79 | 0.00 | 0.96 | 1.69 | 0.47 | E |
| 11 | End-to-End Model-Free Reinforcement Learning for Urban Driving Using Implicit Affordances | 24.98 | 46.97 | 0.52 | 0.00 | 2.33 | 2.47 | 0.55 | 0.00 | 1.82 | 1.44 | 0.94 | E |
| 12 | NEAT: Neural Attention Fields for End-to-End Autonomous Driving | 21.83 | 41.71 | 0.65 | 0.04 | 0.74 | 0.62 | 0.70 | 0.00 | 2.68 | 0.00 | 5.22 | E |

DS = Driving Score, RC = Route Completion, IP = Infraction Penalty. The infraction columns (CP through AB) are measured in infractions/km; Type is End-to-end (E) or Modular (M).
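As a reading aid for the table: the leaderboard's headline Driving Score is Route Completion weighted by the Infraction Penalty multiplier, computed per route and then averaged. The product of the aggregated RC and IP columns therefore approximates, but does not exactly equal, the reported DS:

```python
# Sanity check on the top row of the table above. DS is RC weighted by IP,
# averaged per route, so the product of the aggregated columns only
# approximates the reported score.
rc, ip = 89.89, 0.89           # ReasonNet row: Route Completion, Infraction Penalty
ds = rc * ip
print(round(ds, 2))            # 80.0, close to the reported 79.95
```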

SAFETY

Training on Critical Scenarios

Training on critical scenarios such as unprotected turns at intersections, pedestrians emerging from occluded regions, aggressive lane changes, and other safety-critical situations.

KING: Generating Safety-Critical Driving Scenarios for Robust Imitation via Kinematics Gradients [ECCV2022]
Niklas Hanselmann, Katrin Renz, Kashyap Chitta, Apratim Bhattacharyya, Andreas Geiger
GitHub

Learning from All Vehicles [CVPR2022]
Dian Chen, Philipp Krähenbühl
GitHub

Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [CVPR2021]
Aditya Prakash, Kashyap Chitta, Andreas Geiger
GitHub

Safety Constraints Integration

Integrating safety constraints into learning, e.g. via a safety cost function, avoidance of unsafe maneuvers, and collision-avoidance strategies.

Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving. [CVPR2023]
Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li
GitHub

Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling [ICLR2023]
Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao

TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving [TPAMI2022]
Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger
GitHub

Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization [ICLR2022]
Quanyi Li, Zhenghao Peng, Bolei Zhou
GitHub

Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer [CoRL2022]
Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu
GitHub

ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning [ECCV2022]
Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng Tao
GitHub

Learning To Drive From a World on Rails [ICCV2021]
Dian Chen, Vladlen Koltun, Philipp Krähenbühl
GitHub

SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning [CoRL2020]
Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto
GitHub
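A common way to realize the safety-cost idea above is to add a collision-proximity penalty to the planner's trajectory-scoring objective, so that unsafe candidate trajectories are priced out. A toy sketch (the cost terms, weights, and trajectories are illustrative, not taken from any listed paper):

```python
import numpy as np

# Toy safety cost: score candidate trajectories with a progress term plus a
# collision-proximity penalty that grows sharply for gaps under 1 m.
def trajectory_cost(traj, obstacle, goal, w_safe=10.0):
    progress_cost = np.linalg.norm(traj[-1] - goal)           # distance left to goal
    d_min = np.min(np.linalg.norm(traj - obstacle, axis=1))   # closest approach
    safety_cost = np.maximum(0.0, 1.0 - d_min) ** 2           # penalize gaps < 1 m
    return progress_cost + w_safe * safety_cost

goal, obstacle = np.array([10.0, 0.0]), np.array([5.0, 0.2])
straight = np.stack([np.linspace(0, 10, 11), np.zeros(11)], axis=1)
swerve = straight + np.array([0.0, 1.5])                      # same path, offset laterally

# The straight path passes 0.2 m from the obstacle; the swerve keeps clear,
# so it wins despite ending slightly farther from the goal.
print(trajectory_cost(straight, obstacle, goal) >
      trajectory_cost(swerve, obstacle, goal))
```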

Additional Safety Modules

Preventing deviations from safe operation.

Planning-oriented Autonomous Driving :trophy: Best Paper [CVPR2023]
Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li
GitHub

PlanT: Explainable Planning Transformers via Object-Level Representations [CoRL2022]
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger
GitHub

Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline [NeurIPS2022]
Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao
GitHub
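An additional safety module is typically a rule-based layer downstream of the learned planner that can override unsafe commands. A toy sketch of such a filter (the braking-distance rule and all parameters are illustrative, not from any listed paper):

```python
# Toy safety filter: override the learned planner's throttle with full braking
# when the predicted gap to a lead vehicle falls below a simple
# braking-distance threshold.
def safety_filter(action, ego_speed, gap, max_decel=6.0, margin=2.0):
    """action = (steer, throttle); speeds in m/s, distances in m."""
    braking_distance = ego_speed ** 2 / (2 * max_decel) + margin
    if gap < braking_distance:
        return (action[0], -1.0)      # full brake overrides the planner
    return action

print(safety_filter((0.1, 0.8), ego_speed=10.0, gap=5.0))   # gap too small: (0.1, -1.0)
print(safety_filter((0.1, 0.8), ego_speed=2.0, gap=20.0))   # safe: (0.1, 0.8) passes through
```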

Large Language Models in autonomous driving

Dataset

| Dataset | Reasoning | Outlook | Size |
|---|---|---|---|
| BDD-X (2018) | Description | Planning Description & Justification | 8M frames, 20k text strings |
| HAD HRI Advice (2019) | Advice | Goal-oriented & stimulus-driven advice | 5,675 video clips, 45k text strings |
| Talk2Car (2019) | Description | Goal Point Description | 30k frames, 10k text strings |
| DRAMA (2022) | Description | QA + Captions | 18k frames, 100k text strings |
| nuScenes-QA (2023) | QA | Perception Result | 30k frames, 460k QA pairs |
| DriveLM (2023) | QA + Scene Description | Perception, Prediction and Planning with Logic | 30k frames, 600k QA pairs |

Citation

If you find the listing and survey useful for your work, please cite the paper:


@ARTICLE{10258330,
  author={Chib, Pranav Singh and Singh, Pravendra},
  journal={IEEE Transactions on Intelligent Vehicles}, 
  title={Recent Advancements in End-to-End Autonomous Driving using Deep Learning: A Survey}, 
  year={2023},
  volume={},
  number={},
  pages={1-18},
  doi={10.1109/TIV.2023.3318070}}

🔼 Back to top

Open Source Agenda is not affiliated with "End To End Autonomous Driving" Project. README Source: Pranav-chib/End-to-End-Autonomous-Driving