🤖 Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation
Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/
The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide IsaacGym- and OmniIsaacGym-based robot simulators for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the pipeline of demonstration data collection, processing, learning, and deployment on robots.
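The full-process idea can be made concrete with a toy sketch: record a noisy demonstration, smooth it, distill it into a reference policy, and track that policy with a controller. Every name below (`collect_demonstration`, `preprocess`, `learn_trajectory`, `track`) is invented for this illustration and is NOT Rofunc's API; see the documentation for the real interfaces.

```python
import numpy as np

# Toy end-to-end sketch of the collect -> pre-process -> learn -> control
# pipeline. All function names are illustrative stand-ins, not Rofunc's API.

def collect_demonstration(T=200, noise=0.05, seed=0):
    """Stand-in for sensor recording: a noisy 1-D reaching trajectory."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, T)
    clean = 3 * t**2 - 2 * t**3          # minimum-jerk-like profile, 0 -> 1
    return t, clean + noise * rng.standard_normal(T)

def preprocess(y, k=9):
    """Moving-average smoothing as a placeholder for real filtering."""
    pad = np.pad(y, k // 2, mode="edge")
    return np.convolve(pad, np.ones(k) / k, mode="valid")

def learn_trajectory(t, y, degree=5):
    """'Learning' reduced to polynomial regression: returns a policy t -> y."""
    coeffs = np.polyfit(t, y, degree)
    return lambda tq: np.polyval(coeffs, tq)

def track(reference, T=200, dt=1/200, kp=50.0):
    """Proportional controller driving a 1-D point along the reference."""
    y = reference(0.0)
    out = []
    for i in range(T):
        target = reference(i * dt)
        y += kp * (target - y) * dt      # first-order tracking dynamics
        out.append(y)
    return np.array(out)

t, raw = collect_demonstration()
smooth = preprocess(raw)
policy = learn_trajectory(t, smooth)
executed = track(policy)                 # robot execution of the learned skill
```

In the real package, each of these stages is replaced by a full module (e.g. device recording, LfD models, planners, and controllers from the table below).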
RofuncRL: a newly released, modular, easy-to-use Reinforcement Learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAIGym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as differentiable simulators like PlasticineLab and DiffCloth.
Please refer to the installation guide.

To give you a quick overview of the rofunc pipeline, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.
Note: ✅ Achieved 🔶 Reformatting ⛔ TODO
| Data | | Learning | | P&C | | Tools | | Simulator | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| xsens.record | ✅ | DMP | ✅ | LQT | ✅ | config | ✅ | Franka | ✅ |
| xsens.export | ✅ | GMR | ✅ | LQTBi | ✅ | logger | ✅ | CURI | ✅ |
| xsens.visual | ✅ | TPGMM | ✅ | LQTFb | ✅ | datalab | ✅ | CURIMini | 🔶 |
| opti.record | ✅ | TPGMMBi | ✅ | LQTCP | ✅ | robolab.coord | ✅ | CURISoftHand | ✅ |
| opti.export | ✅ | TPGMM_RPCtl | ✅ | LQTCPDMP | ✅ | robolab.fk | ✅ | Walker | ✅ |
| opti.visual | ✅ | TPGMM_RPRepr | ✅ | LQR | ✅ | robolab.ik | ✅ | Gluon | 🔶 |
| zed.record | ✅ | TPGMR | ✅ | PoGLQRBi | ✅ | robolab.fd | ✅ | Baxter | 🔶 |
| zed.export | ✅ | TPGMRBi | ✅ | iLQR | 🔶 | robolab.id | ✅ | Sawyer | 🔶 |
| zed.visual | ✅ | TPHSMM | ✅ | iLQRBi | 🔶 | visualab.dist | ✅ | Humanoid | ✅ |
| emg.record | ✅ | RLBaseLine(SKRL) | ✅ | iLQRFb | 🔶 | visualab.ellip | ✅ | Multi-Robot | ✅ |
| emg.export | ✅ | RLBaseLine(RLlib) | ✅ | iLQRCP | 🔶 | visualab.traj | ✅ | | |
| mmodal.record | ✅ | RLBaseLine(ElegRL) | ✅ | iLQRDyna | 🔶 | oslab.dir_proc | ✅ | | |
| mmodal.sync | ✅ | BCO(RofuncIL) | 🔶 | iLQRObs | 🔶 | oslab.file_proc | ✅ | | |
| | | BC-Z(RofuncIL) | ✅ | MPC | ✅ | oslab.internet | ✅ | | |
| | | STrans(RofuncIL) | ✅ | RMP | ✅ | oslab.path | ✅ | | |
| | | RT-1(RofuncIL) | ✅ | | | | | | |
| | | A2C(RofuncRL) | ✅ | | | | | | |
| | | PPO(RofuncRL) | ✅ | | | | | | |
| | | SAC(RofuncRL) | ✅ | | | | | | |
| | | TD3(RofuncRL) | ✅ | | | | | | |
| | | CQL(RofuncRL) | ✅ | | | | | | |
| | | TD3BC(RofuncRL) | ✅ | | | | | | |
| | | DTrans(RofuncRL) | ✅ | | | | | | |
| | | EDAC(RofuncRL) | ✅ | | | | | | |
| | | AMP(RofuncRL) | ✅ | | | | | | |
| | | ASE(RofuncRL) | ✅ | | | | | | |
| | | ODTrans(RofuncRL) | ✅ | | | | | | |
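To give the Learning column a concrete flavor, here is a minimal 1-D Dynamic Movement Primitive (the DMP entry above) written directly from the standard DMP equations: a spring-damper transformation system shaped by a learned forcing term that is gated by an exponentially decaying canonical state. This is an independent illustrative sketch, not Rofunc's DMP module; the class name `DMP1D` and the parameter defaults are assumptions for this example.

```python
import numpy as np

class DMP1D:
    """Minimal 1-D discrete Dynamic Movement Primitive (illustrative only)."""

    def __init__(self, n_basis=30, alpha_z=25.0, alpha_x=3.0):
        self.az, self.bz, self.ax = alpha_z, alpha_z / 4.0, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # basis centers
        self.h = 1.0 / (np.gradient(self.c) ** 2 + 1e-8)         # basis widths
        self.w = np.zeros(n_basis)

    def _features(self, x):
        """Normalized radial basis functions gated by the canonical state x."""
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return psi * x / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstration by least squares."""
        T = len(y_demo)
        self.y0, self.g, self.tau = y_demo[0], y_demo[-1], (T - 1) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.ax * np.linspace(0, 1, T))              # canonical x(t)
        # invert the transformation system to get the forcing the demo implies
        f_target = (self.tau ** 2 * ydd
                    - self.az * (self.bz * (self.g - y_demo) - self.tau * yd))
        Phi = np.stack([self._features(xi) for xi in x]) * (self.g - self.y0)
        self.w = np.linalg.lstsq(Phi, f_target, rcond=None)[0]

    def rollout(self, dt):
        """Integrate the fitted DMP; it converges to the demonstrated goal g."""
        steps = int(round(self.tau / dt))
        y, z, x = self.y0, 0.0, 1.0
        out = [y]
        for _ in range(steps):
            f = self._features(x) @ self.w * (self.g - self.y0)
            z += (self.az * (self.bz * (self.g - y) - z) + f) / self.tau * dt
            y += z / self.tau * dt
            x += -self.ax * x / self.tau * dt
            out.append(y)
        return np.array(out)

t = np.linspace(0, 1, 101)
demo = 3 * t ** 2 - 2 * t ** 3       # minimum-jerk-like reach from 0 to 1
dmp = DMP1D()
dmp.fit(demo, dt=0.01)
reproduced = dmp.rollout(dt=0.01)    # ends near the demonstrated goal y = 1
```

Because the goal `g` enters the spring term explicitly, the same fitted weights generalize the demonstrated shape to new start and goal positions, which is why DMPs are a common LfD building block.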
RofuncRL is one of the most important sub-packages of Rofunc. It is a modular, easy-to-use Reinforcement Learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAIGym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as differentiable simulators like PlasticineLab and DiffCloth. Here is a list of robot tasks trained by RofuncRL:
Note
You can customize your own project based on RofuncRL by following the RofuncRL customization tutorial. We also provide a RofuncRL-based repository template that generates your own repository following the RofuncRL structure with one click.
For more details, please check the documentation for RofuncRL.
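All of the trainers listed in the table above share the same agent-environment loop. As a structural illustration only, that loop can be sketched with tabular Q-learning on a toy chain world; this is deliberately NOT RofuncRL code (RofuncRL's trainers are modular actor-critic implementations built for the simulators above), and the environment, `step` function, and hyperparameters here are invented for the sketch.

```python
import numpy as np

# Generic agent-environment training loop, shown with tabular Q-learning on a
# 5-state chain world. A didactic stand-in for the structure RofuncRL trainers
# share: act, step the simulator, update the policy from the transition.

N_STATES, GOAL = 5, 4                # walk right from state 0 to state 4

def step(state, action):
    """Actions: 0 = left, 1 = right. Reward 1 only upon reaching the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == GOAL
    return nxt, (1.0 if done else 0.0), done

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, 2))          # state-action value table
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    s, done = 0, False
    while not done:
        if rng.random() < eps:                       # epsilon-greedy action
            a = int(rng.integers(2))
        else:
            best = np.flatnonzero(q[s] == q[s].max())
            a = int(rng.choice(best))                # break ties randomly
        s2, r, done = step(s, a)
        # one-step temporal-difference update of the value table
        q[s, a] += alpha * (r + gamma * (0.0 if done else q[s2].max()) - q[s, a])
        s = s2

greedy_policy = [int(q[s].argmax()) for s in range(N_STATES - 1)]
```

After training, the greedy policy moves right in every state, i.e. `greedy_policy` is all ones; in RofuncRL the value table becomes a neural critic, the epsilon-greedy rule becomes a stochastic actor, and `step` becomes an IsaacGym/OmniIsaacGym task.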
| Tasks | Animation | Performance | ModelZoo |
| --- | --- | --- | --- |
| Ant | | | ✅ |
| Cartpole | | | |
| Franka Cabinet | | | ✅ |
| Franka CubeStack | | | |
| CURI Cabinet | | | ✅ |
| CURI CabinetImage | | | |
| CURI CabinetBimanual | | | |
| CURIQbSoftHand SynergyGrasp | | | ✅ |
| Humanoid | | | ✅ |
| HumanoidAMP Backflip | | | ✅ |
| HumanoidAMP Walk | | | ✅ |
| HumanoidAMP Run | | | ✅ |
| HumanoidAMP Dance | | | ✅ |
| HumanoidAMP Hop | | | ✅ |
| HumanoidASE GetupSwordShield | | | ✅ |
| HumanoidASE PerturbSwordShield | | | ✅ |
| HumanoidASE HeadingSwordShield | | | ✅ |
| HumanoidASE LocationSwordShield | | | ✅ |
| HumanoidASE ReachSwordShield | | | ✅ |
| HumanoidASE StrikeSwordShield | | | ✅ |
| BiShadowHand BlockStack | | | ✅ |
| BiShadowHand BottleCap | | | ✅ |
| BiShadowHand CatchAbreast | | | ✅ |
| BiShadowHand CatchOver2Underarm | | | ✅ |
| BiShadowHand CatchUnderarm | | | ✅ |
| BiShadowHand DoorOpenInward | | | ✅ |
| BiShadowHand DoorOpenOutward | | | ✅ |
| BiShadowHand DoorCloseInward | | | ✅ |
| BiShadowHand DoorCloseOutward | | | ✅ |
| BiShadowHand GraspAndPlace | | | ✅ |
| BiShadowHand LiftUnderarm | | | ✅ |
| BiShadowHand HandOver | | | ✅ |
| BiShadowHand Pen | | | ✅ |
| BiShadowHand PointCloud | | | |
| BiShadowHand PushBlock | | | ✅ |
| BiShadowHand ReOrientation | | | ✅ |
| BiShadowHand Scissors | | | ✅ |
| BiShadowHand SwingCup | | | ✅ |
| BiShadowHand Switch | | | ✅ |
| BiShadowHand TwoCatchUnderarm | | | ✅ |
If you use Rofunc in a scientific publication, we would appreciate citations to the following papers:
```bibtex
@software{liu2023rofunc,
  title     = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
  author    = {Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
  year      = {2023},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.10016946},
  url       = {https://doi.org/10.5281/zenodo.10016946}
}

@article{liu2022robot,
  title     = {Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
  author    = {Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
  journal   = {IEEE Robotics and Automation Letters},
  volume    = {7},
  number    = {2},
  pages     = {5159--5166},
  year      = {2022},
  publisher = {IEEE}
}

@inproceedings{liu2023softgpt,
  title        = {SoftGPT: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
  author       = {Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
  booktitle    = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages        = {4920--4925},
  year         = {2023},
  organization = {IEEE}
}

@article{liu2023birp,
  title   = {BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration},
  author  = {Liu, Junjia and Sim, Hengyi and Li, Chenzui and Chen, Fei},
  journal = {arXiv preprint arXiv:2307.05933},
  year    = {2023}
}
```
Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.
We would like to acknowledge the following projects: