A collection of papers on World Models for Autonomous Driving.
If you notice any papers we have missed, feel free to open a pull request or an issue, or email me. Contributions in any form that make this list more comprehensive are welcome. 📣📣📣
If you find this repository useful, please consider citing it and giving us a star 🌟.
Feel free to share this list with others! 🥳🥳🥳
CVPR 2024 Workshop & Challenge | OpenDriveLab
Track #4: Predictive World Model.
Serving as an abstract spatio-temporal representation of reality, a world model predicts future states from the current state. Learning a world model has the potential to elevate a pre-trained foundation model to the next level. Given vision-only inputs, the neural network must output future point clouds, demonstrating its ability to predict how the world will evolve.
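The task above (current vision features in, future point clouds out) can be sketched as an encode → step → decode loop. This is a toy illustration only, not the challenge's reference implementation: the random linear maps below stand in for learned encoder, dynamics, and decoder networks, and all sizes are made-up placeholders.

```python
import numpy as np

# Hypothetical sketch of a predictive world model: encode vision features
# into a latent state, roll a learned dynamics function forward, and
# decode each future latent into an (N, 3) point cloud.
rng = np.random.default_rng(0)
LATENT, FEAT, POINTS = 16, 32, 128          # placeholder dimensions

W_enc = rng.standard_normal((LATENT, FEAT)) * 0.1       # features -> latent
W_dyn = rng.standard_normal((LATENT, LATENT)) * 0.1     # latent -> next latent
W_dec = rng.standard_normal((POINTS * 3, LATENT)) * 0.1  # latent -> xyz points

def rollout(features: np.ndarray, horizon: int) -> list:
    """Predict `horizon` future point clouds from current vision features."""
    z = np.tanh(W_enc @ features)            # encode the current state
    clouds = []
    for _ in range(horizon):
        z = np.tanh(W_dyn @ z)               # step the world model forward
        clouds.append((W_dec @ z).reshape(POINTS, 3))  # decode to xyz
    return clouds

future = rollout(rng.standard_normal(FEAT), horizon=3)
print(len(future), future[0].shape)          # 3 (128, 3)
```

In the real challenge the encoder would consume multi-camera images and the rollout would be scored against future LiDAR sweeps; the structure of the loop is the same.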
CVPR 2023 Workshop on Autonomous Driving
CHALLENGE 3: ARGOVERSE CHALLENGES, 3D Occupancy Forecasting using the Argoverse 2 Sensor Dataset. Predict the spacetime occupancy of the world for the next 3 seconds.
Yann LeCun: A Path Towards Autonomous Machine Intelligence [Paper] [Video]
CVPR'23 WAD
Keynote - Ashok Elluswamy, Tesla [Video]
Wayve
Introducing GAIA-1: A Cutting-Edge Generative AI Model for Autonomy [blog]
World models are the basis for the ability to predict what might happen next, which is fundamentally important for autonomous driving. They can act as a learned simulator, or a mental “what if” thought experiment for model-based reinforcement learning (RL) or planning. By incorporating world models into our driving models, we can enable them to understand human decisions better and ultimately generalise to more real-world situations.
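The "learned simulator" use named above can be made concrete with a minimal random-shooting planner: sample candidate action sequences, imagine each outcome inside the world model, and execute the best first action. This is a generic sketch, not Wayve's or GAIA-1's method; the linear dynamics and quadratic cost are stand-ins for learned networks.

```python
import numpy as np

# Toy model-based planning with a world model as a learned simulator.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])  # placeholder "learned" transition
B = np.array([[0.0], [0.1]])            # placeholder effect of the action

def imagine(state: np.ndarray, actions: np.ndarray) -> float:
    """Roll the world model forward over an action sequence, return cost."""
    cost = 0.0
    for a in actions:
        state = A @ state + B @ a        # imagined next state
        cost += float(state @ state)     # penalty for drifting off target
    return cost

def plan(state: np.ndarray, horizon: int = 5, n_samples: int = 64) -> np.ndarray:
    """Random shooting: pick the first action of the cheapest imagined rollout."""
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, 1))
    costs = [imagine(state, seq) for seq in seqs]
    return seqs[int(np.argmin(costs))][0]

first_action = plan(np.array([1.0, 0.0]))
print(first_action.shape)                # (1,)
```

Replanning at every step from the newly observed state turns this into model-predictive control, which is one common way world models are used for driving.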
WACVW 2024
[Paper] [Code], 2024.3, arXiv
[Paper], CVPR 2024
[Paper] [Code], CVPR 2024
[Paper] [Data], CVPR 2024
[Paper] [Code], CVPR 2024
[Paper] [Code], CVPR 2024
[Code], CVPR 2024
[Paper] [Code], ICLR 2024
[Paper] [Code], ICLR 2024
[Paper], ICLR 2024
[Paper] [Code], 2024.4, arXiv
[Paper] [Code], 2024.3, arXiv
[Paper], 2024.3, arXiv
[Paper] [Code], 2024.2, arXiv
[Paper], ICRA 2023
[Paper] [Code], 2023.12, arXiv
[Paper] [Code], 2023.11, arXiv
[Paper], 2023.11, arXiv
[Paper] [Code], 2023.11, arXiv
[Paper], 2023.10, arXiv
[Paper] [Code], 2023.9, arXiv
[Paper], 2023.9, arXiv
[Paper], 2023.9, arXiv
[Paper] [Code], 2023.8, arXiv
[Paper] [Code], NeurIPS 2022
[Paper] [Code], ICRA 2022
[Paper], IROS 2022
[Paper], DeepMind
[Paper] [Blog], OpenAI
[Technical report], Meta AI
[Paper], Meta AI
[Blog] [Paper] [Code], ICLR 2024
[Paper] [Code], 2024.4, arXiv
[Paper] [Code], 2024.3, arXiv
[Paper] [Code], 2024.3, arXiv
[Paper] [Code], 2024.2, arXiv
[Paper] [Code], OpenReview
[Paper], 2024.1, arXiv
[Paper] [Code]