# Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"

**Wenlong Huang<sup>1</sup>, Pieter Abbeel<sup>1</sup>, Deepak Pathak\*<sup>2</sup>, Igor Mordatch\*<sup>3</sup>** (\*equal advising)

<sup>1</sup>University of California, Berkeley, <sup>2</sup>Carnegie Mellon University, <sup>3</sup>Google Brain
This is the official demo code for our Language Models as Zero-Shot Planners paper. The code demonstrates how Large Language Models, such as GPT-3 and Codex, can generate action plans for complex human activities (e.g. "make breakfast"), even without any further training. The code can be used with any available language models from OpenAI API and Huggingface Transformers with a common interface.
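To make the idea concrete, the core of the approach can be sketched as a few-shot prompt: one example task with its step-by-step plan, followed by the query task, which any autoregressive language model then completes. The `build_prompt` helper and the example plan below are hypothetical illustrations, not the repository's actual code (which draws its examples from `available_examples.json`):

```python
def build_prompt(query_task):
    """Assemble a simple few-shot planning prompt for a language model.

    NOTE: the example task/plan pair below is illustrative; the actual
    repository selects a curated example from available_examples.json.
    """
    example = (
        "Task: Brush teeth\n"
        "Step 1: Walk to bathroom\n"
        "Step 2: Pick up toothbrush\n"
        "Step 3: Put toothpaste on toothbrush\n"
        "Step 4: Brush teeth\n"
    )
    # The model is asked to continue the pattern for the new task,
    # starting from "Step 1:".
    return f"{example}\nTask: {query_task}\nStep 1:"

prompt = build_prompt("Make breakfast")
# This string would then be sent to any causal language model, e.g.
# via the OpenAI API or a Hugging Face text-generation pipeline.
```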
If you find this work useful in your research, please cite it using the following BibTeX:

```bibtex
@article{huang2022language,
  title={Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents},
  author={Huang, Wenlong and Abbeel, Pieter and Pathak, Deepak and Mordatch, Igor},
  journal={arXiv preprint arXiv:2201.07207},
  year={2022}
}
```
## Setup

Clone the repository and set up a conda environment:

```shell
git clone https://github.com/huangwl18/language-planner.git
cd language-planner/
conda create --name language-planner-env python=3.6.13
conda activate language-planner-env
pip install --upgrade pip
pip install -r requirements.txt
```
## Running the Demo

See `demo.ipynb` for a complete walk-through of our method. Feel free to experiment with any household tasks that you come up with (or any tasks beyond the household domain, if you provide the necessary actions in `available_actions.json`)!
Note:
- The available actions are stored in `available_actions.json`. The actions should support a large variety of household tasks. However, you may modify or replace this file if you're interested in a different set of actions or a different domain of tasks (beyond the household domain).
- The prompting examples are stored in `available_examples.json`. Feel free to change this file for a different set of available examples.
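The paper additionally translates each free-form step produced by the model into the closest admissible action from this file. As a rough, simplified stand-in for that matching step (the actual code scores similarity with sentence embeddings, not character overlap), the idea can be sketched with stdlib string similarity; the `closest_action` helper and action list below are hypothetical:

```python
import difflib

def closest_action(generated_step, available_actions):
    """Map a free-form model output to the most similar admissible action.

    NOTE: difflib's character-level ratio is a simplified stand-in here;
    the actual repository uses sentence-embedding similarity instead.
    """
    return max(
        available_actions,
        key=lambda action: difflib.SequenceMatcher(
            None, generated_step.lower(), action.lower()
        ).ratio(),
    )

actions = ["walk to kitchen", "open fridge", "grab egg"]
closest_action("Go to the kitchen", actions)  # → "walk to kitchen"
```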