# GPTSwarm: LLM Agents as (Optimizable) Graphs
GPTSwarm is a graph-based framework for LLM-based agents, providing two high-level features: it lets you build LLM agents from graphs, and it enables the customized and automatic self-organization of agent swarms with self-improvement capabilities.
- 🔥 [05/01] GPTSwarm has been accepted at ICML 2024.
- 🔥 [03/01] GPTSwarm can now be installed via pip: `pip install gptswarm`.
- 🔥 [02/27] Our academic paper *Language Agents as Optimizable Graphs* is released.
The edge optimization process updates edge probabilities to improve the benchmark score. Note that within an agent, the edges are fixed, whereas the inter-agent connections are optimized toward either edge pruning (probability 0, blue) or edge creation (probability 1, red).
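The idea can be illustrated with a minimal, self-contained sketch (not the library's actual implementation): each inter-agent edge carries a probability, a connectivity mask is sampled, a score is computed, and a REINFORCE-style update pushes each probability toward 0 (pruning) or 1 (creation). The function name `optimize_edges` and the toy score function are hypothetical.

```python
import math
import random


def sigmoid(x):
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)


def optimize_edges(num_edges, score_fn, steps=3000, lr=0.2, seed=0):
    """REINFORCE-style optimization of independent edge probabilities.

    score_fn(mask) -> float rates a sampled connectivity mask
    (a list of 0/1 values, one per inter-agent edge).
    """
    rng = random.Random(seed)
    logits = [0.0] * num_edges          # p = 0.5 for every edge at the start
    baseline = 0.0                      # moving-average reward baseline
    for _ in range(steps):
        probs = [sigmoid(t) for t in logits]
        mask = [1 if rng.random() < p else 0 for p in probs]
        reward = score_fn(mask)
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for i in range(num_edges):
            # grad of log Bernoulli(mask_i; sigmoid(logit_i)) w.r.t. logit_i
            logits[i] += lr * advantage * (mask[i] - probs[i])
    return [sigmoid(t) for t in logits]


# Toy benchmark: edge 0 helps the score, edge 1 hurts it.
probs = optimize_edges(2, lambda m: m[0] - m[1])
```

Under this toy score, the probability of the helpful edge is driven toward 1 and that of the harmful edge toward 0, mirroring the creation/pruning behavior described above.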
At a granular level, GPTSwarm is a library that includes the following components:
| Module | Description |
|---|---|
| `swarm.environment` | Domain-specific operations, agents, tools, and tasks |
| `swarm.graph` | Graph-related functions for creating and executing agent graphs and swarm composite graphs |
| `swarm.llm` | Interface for selecting LLM backends and calculating their operational costs |
| `swarm.memory` | Index-based memory |
| `swarm.optimizer` | Optimization algorithms designed to enhance agent performance and overall swarm efficiency |
Clone the repo:

```shell
git clone https://github.com/metauto-ai/GPTSwarm.git
cd GPTSwarm/
```
Install packages:

```shell
conda create -n swarm python=3.10
conda activate swarm
pip install poetry
poetry install
```
Add your API keys to `.env.template` and rename it to `.env`:

```shell
OPENAI_API_KEY="" # for the OpenAI LLM backend
SEARCHAPI_API_KEY="" # for web search
```
Getting started with GPTSwarm is easy. Quickly run a predefined swarm:

```python
import asyncio
from swarm.graph.swarm import Swarm

swarm = Swarm(["IO", "IO", "IO"], "gaia")
task = "What is the capital of Jordan?"
inputs = {"task": task}
answer = asyncio.run(swarm.arun(inputs))
```
or make use of tools, such as the file analyzer:

```python
from swarm.graph.swarm import Swarm

swarm = Swarm(["IO", "TOT"], "gaia")
task = "Tell me more about this image and summarize it in 3 sentences."
files = ["./datasets/demos/js.png"]
inputs = {"task": task, "files": files}
answer = swarm.run(inputs)
```
Check out the minimal Swarm example in Colab, and see how to create a custom Agent and run a Swarm with it. There is also a YouTube video on how to run the demo notebooks.
🔥🔥🔥 See our experiments for more advanced uses of the framework.
We support local LM inference via LM Studio. Download their desktop app for Mac or Windows, choose a model from the Hugging Face repository, and start the server. Use `model_name='lmstudio'` in GPTSwarm code to run with the local LLM.
Please read our developer document if you are interested in contributing.
Please cite our paper if you find the library useful or interesting.
```bibtex
@article{zhuge2024language,
  title={Language Agents as Optimizable Graphs},
  author={Zhuge, Mingchen and Wang, Wenyi and Kirsch, Louis and Faccio, Francesco and Khizbullin, Dmitrii and Schmidhuber, J{\"u}rgen},
  journal={arXiv preprint arXiv:2402.16823},
  year={2024}
}
```