Code release for the ICLR 2023 paper SlotFormer, an object-centric dynamics model.
SlotFormer: Unsupervised Visual Dynamics Simulation with Object-Centric Models
Ziyi Wu, Nikita Dvornik, Klaus Greff, Thomas Kipf, Animesh Garg
ICLR'23 | GitHub | arXiv | Project page
[Qualitative results: Ground-Truth vs. Our Prediction, side by side; see the project page for the animations.]
This is the official PyTorch implementation of the paper SlotFormer: Unsupervised Visual Dynamics Simulation with Object-Centric Models, accepted at ICLR 2023.
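At its core, SlotFormer rolls out future object slots autoregressively with a Transformer. The sketch below is only a conceptual illustration of that idea, not the repo's actual implementation: the class name `SlotRollout`, the layer sizes, and the omission of the paper's (time, slot) positional encodings are all simplifications.

```python
import torch
import torch.nn as nn

class SlotRollout(nn.Module):
    """Toy autoregressive dynamics model over object slots (illustrative only)."""

    def __init__(self, slot_dim=32, num_heads=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=slot_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(slot_dim, slot_dim)

    def forward(self, slots, rollout_len):
        # slots: [B, T, N, D] -- N object slots over T burn-in frames
        B, T, N, D = slots.shape
        preds = []
        for _ in range(rollout_len):
            x = self.transformer(slots.flatten(1, 2))  # attend over all T*N tokens
            nxt = self.head(x[:, -N:])                 # next-step slots: [B, N, D]
            preds.append(nxt)
            # feed the prediction back in, dropping the oldest frame
            slots = torch.cat([slots[:, 1:], nxt.unsqueeze(1)], dim=1)
        return torch.stack(preds, dim=1)               # [B, rollout_len, N, D]

model = SlotRollout(slot_dim=32)
burn_in = torch.randn(2, 6, 5, 32)     # 2 videos, 6 frames, 5 slots of dim 32
future = model(burn_in, rollout_len=4) # predicted slots: [2, 4, 5, 32]
```

The predicted slots can then be decoded back to frames with a pretrained slot decoder (e.g. from SAVi), which is how SlotFormer produces the video predictions shown above.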
Please refer to install.md for step-by-step guidance on how to install the packages.
This codebase is tailored to Slurm GPU clusters with a preemption mechanism. Our configs mainly target RTX6000 GPUs with 24GB of memory, though many experiments require less. If you use a different hardware setup, please modify scripts/train.py and change the fields marked by TODO.
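To show what "preemption mechanism" implies for training, here is a minimal save/resume pattern: jobs periodically checkpoint, and on requeue they pick up from the last saved epoch. This is a generic sketch, not code from scripts/train.py; the path and helper names are hypothetical.

```python
import os
import torch

CKPT_PATH = "slotformer_latest.pth"  # hypothetical path, not the repo's

def save_ckpt(model, optimizer, epoch):
    # Write to a temp file, then rename atomically, so a preemption
    # in the middle of saving cannot corrupt the checkpoint.
    tmp = CKPT_PATH + ".tmp"
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "epoch": epoch}, tmp)
    os.replace(tmp, CKPT_PATH)

def load_ckpt(model, optimizer):
    # On requeue after preemption, resume from the epoch after the last saved one.
    if not os.path.exists(CKPT_PATH):
        return 0
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"] + 1

# Minimal usage with a toy model.
model = torch.nn.Linear(4, 4)
opt = torch.optim.Adam(model.parameters())
start_epoch = load_ckpt(model, opt)  # 0 on a fresh launch
for epoch in range(start_epoch, 3):
    save_ckpt(model, opt, epoch)     # after preemption, training resumes here
resumed = load_ckpt(model, opt)      # 3: the epoch after the last saved one
```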
Please refer to data.md for steps to download and pre-process each dataset.
Please see benchmark.md for detailed instructions on how to reproduce our results in the paper.
Please cite our paper if you find it useful in your research:
@article{wu2022slotformer,
title={SlotFormer: Unsupervised Visual Dynamics Simulation with Object-Centric Models},
author={Wu, Ziyi and Dvornik, Nikita and Greff, Klaus and Kipf, Thomas and Garg, Animesh},
journal={arXiv preprint arXiv:2210.05861},
year={2022}
}
We thank the authors of Slot-Attention, slot_attention.pytorch, SAVi, RPIN and Aloe for open-sourcing their wonderful works.
SlotFormer is released under the MIT License. See the LICENSE file for more details.
If you have any questions about the code, please contact Ziyi Wu at [email protected]