Create LLM agents with long-term memory and custom tools 📚🦙
## 0.3.14: 🐜 Bug-fix release

- `~/.memgpt/config` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1337
- `0.3.14` + strip version from server yaml by @cpacker in https://github.com/cpacker/MemGPT/pull/1334

Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.13...0.3.14
## 0.3.13

Please note the dev portal is in alpha and this is not an official release!

This release adds support for viewing the dev portal when the MemGPT service is running. You can view the dev portal at `memgpt.localhost` (if running with docker) or `localhost:8283` (if running with `memgpt server`).

Make sure you install MemGPT with `pip install pymemgpt` and run `memgpt quickstart [--backend openai]` or `memgpt configure` before running the server.

There are two options to deploy the server:

Option 1: Run with docker compose

1. `git clone git@github.com:cpacker/MemGPT.git`
2. `docker compose up`
3. Open `memgpt.localhost` in the browser to view the developer portal

Option 2: Run with the CLI

1. `memgpt server`
2. Open `localhost:8283` in the browser to view the developer portal

- `config/server_config.yaml` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1292
- `autoflake` and `isort` by @cpacker in https://github.com/cpacker/MemGPT/pull/1300
- `embedding_model` null issue in tests by @cpacker in https://github.com/cpacker/MemGPT/pull/1305
- `create(..)` call to LLMs to not require `AgentState` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1307
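The `create(..)` change decouples raw LLM calls from agent state. A minimal sketch of the idea, with illustrative names that are not MemGPT's actual API:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """Illustrative stand-in for an agent-state object."""
    name: str
    model: str
    context_window: int

# Before: the LLM call needs a whole AgentState, even though it
# only reads the model name.
def create_from_state(agent_state: AgentState, messages: list) -> dict:
    return {"model": agent_state.model, "messages": messages}

# After: the call takes only what it uses, so it can be invoked
# without constructing an agent (e.g. from tests or scripts).
def create(model: str, messages: list) -> dict:
    return {"model": model, "messages": messages}

state = AgentState(name="agent", model="gpt-4", context_window=8192)
request = create(model=state.model, messages=[{"role": "user", "content": "hi"}])
```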
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.12...0.3.13
## 0.3.12

🐳 Cleaned up workflow for creating a MemGPT service with `docker compose up`:

- `http://memgpt.localhost`
- `docker compose -f dev-compose.yaml up --build` (built from local code)
- `pgdata` folder (`compose.yaml`)

🪲 Bugfixes for Groq API and server

- `/completions` by @cpacker in https://github.com/cpacker/MemGPT/pull/1288

Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.11...0.3.12
## 0.3.11

🚰 We now support streaming in the CLI when using OpenAI (+ OpenAI proxy) endpoints! You can turn on streaming mode with `memgpt run --stream`.
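Under the hood, streaming means rendering each partial chunk as it arrives rather than waiting for the full completion. A minimal sketch of the consumption pattern, with a plain generator standing in for the streaming API response (not MemGPT's actual internals):

```python
def fake_stream():
    # Stands in for an OpenAI-style streaming response, which yields
    # small content deltas as they are generated.
    for delta in ["Mem", "GPT ", "streams ", "tokens."]:
        yield delta

def consume(stream) -> str:
    # Print each chunk immediately for incremental display,
    # while accumulating the full text for the message history.
    parts = []
    for chunk in stream:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)

full_text = consume(fake_stream())
```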
- `memgpt configure` and add functionality for modifying humans/presets more clearly by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1253
- `ChatCompletionResponse` to make `model` field optional by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1258
- `memgpt/memgpt-server:latest` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1267
- `-d` flag to `docker compose up` for tests by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1268
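Making the `model` field optional lets responses from endpoints that omit it pass validation. A dependency-free sketch using a dataclass (the field set is illustrative, not MemGPT's actual model definition):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChatCompletionResponse:
    # Illustrative subset of fields; `model` defaults to None so a
    # response that omits it no longer fails to parse.
    id: str
    choices: list = field(default_factory=list)
    model: Optional[str] = None

# A proxy response without a `model` field is now accepted:
resp = ChatCompletionResponse(id="chatcmpl-123")
```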
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.10...0.3.11
## 0.3.10

We added support for Anthropic, Cohere, and Groq!

- `Message` objects to dicts for local LLMs by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1251
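Local LLM backends generally consume plain role/content dicts rather than rich message objects. A hedged sketch of the conversion (the `Message` class here is an illustrative stand-in, not MemGPT's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    """Illustrative stand-in for a rich message object."""
    role: str
    content: str
    name: Optional[str] = None

def to_openai_dict(msg: Message) -> dict:
    # Local LLM backends expect plain role/content dicts;
    # optional fields are only included when actually set.
    d = {"role": msg.role, "content": msg.content}
    if msg.name is not None:
        d["name"] = msg.name
    return d

dicts = [to_openai_dict(m) for m in [Message("user", "hello"), Message("assistant", "hi!")]]
```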
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.9...0.3.10
## 0.3.9

This release adds Google AI Gemini Pro support for MemGPT, as well as Python 3.12 support.

Setting up Gemini with `memgpt configure`:

```
> memgpt configure
Loading config from /Users/loaner/.memgpt/config
? Select LLM inference provider: google_ai
? Enter your Google AI (Gemini) API key (see https://aistudio.google.com/app/apikey): *********
? Enter your Google AI (Gemini) service endpoint (see https://ai.google.dev/api/rest): generativelanguage
? Select default model: gemini-pro
Got context window 30720 for model gemini-pro (from Google API)
? Select your model's context window (see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning#gemini-model-versions): 30720
? Select embedding provider: openai
? Select default preset: memgpt_chat
? Select default persona: sam_pov
? Select default human: basic
? Select storage backend for archival data: chroma
? Select chroma backend: persistent
? Select storage backend for recall data: sqlite
📖 Saving config to /Users/loaner/.memgpt/config
```

- `llama-index-embeddings-huggingface` package and fix bug with local embeddings by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1222
- `/rethink` by @cpacker in https://github.com/cpacker/MemGPT/pull/1227
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.8...0.3.9
## 0.3.8

This release introduces initial support for running a MemGPT server with Docker Compose, and bugfixes for storing embeddings and message timestamps.

- `datetime.now()` to `datetime.now(UTC)` by @cpacker in https://github.com/cpacker/MemGPT/pull/1176
- `num_passages` in `Source.metadata_` from REST list sources endpoint by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1178
- `docker compose up` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1183
- `docker compose` server by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1189
- `tests.yaml` if env variable is set by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1196
- `compose.yaml` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1204
- `MemGPTConfig` URI for postgres if no environment variables by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1216
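The `datetime.now()` to `datetime.now(UTC)` fix matters because a naive timestamp carries no timezone, so its meaning depends on the host machine; an aware UTC timestamp compares and sorts consistently everywhere. The difference in Python (`timezone.utc` is used here; `datetime.UTC` is an alias available on Python 3.11+):

```python
from datetime import datetime, timezone

# Naive: no tzinfo, interpreted in whatever the server's local zone is.
naive = datetime.now()

# Aware: pinned to UTC, safe to store and compare across machines.
aware = datetime.now(timezone.utc)

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```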
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.7...0.3.8
## 0.3.7: 🦂 Bugfix release

- `GET /api/agents/{a_id}/messages` by @cpacker in https://github.com/cpacker/MemGPT/pull/1135
- `Admin` routes in client and add tests by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1157
- (1) `Agent.step()` to fix out-of-order timestamps, (2) bug fixes with usage of `preset/human` vs filename values by @cpacker in https://github.com/cpacker/MemGPT/pull/1145
- `send_message` POST by @cpacker in https://github.com/cpacker/MemGPT/pull/1161
- `list sources` route by @cpacker in https://github.com/cpacker/MemGPT/pull/1164
- `Preset` routes to API + patch for `tool_call_id` max length OpenAI error by @cpacker in https://github.com/cpacker/MemGPT/pull/1165
- `source_id` to path variable by @cpacker in https://github.com/cpacker/MemGPT/pull/1171
- `0.3.7` by @cpacker in https://github.com/cpacker/MemGPT/pull/1173
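The `tool_call_id` patch responds to OpenAI rejecting overlong values for that field. A hedged sketch of the general workaround; the length cap below is illustrative, not a value confirmed by the OpenAI docs:

```python
import uuid

TOOL_CALL_ID_MAX_LEN = 29  # illustrative cap, not OpenAI's documented limit

def make_tool_call_id(max_len: int = TOOL_CALL_ID_MAX_LEN) -> str:
    # Generate a random id, then truncate so requests never exceed
    # the API's maximum field length.
    return str(uuid.uuid4())[:max_len]

tool_call_id = make_tool_call_id()
```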
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.6...0.3.7
## 0.3.6: 🐜 Bugfix release

- `send_message` by @cpacker in https://github.com/cpacker/MemGPT/pull/1120
- `last_run` field to the agent state model by @cpacker in https://github.com/cpacker/MemGPT/pull/1124
- `GET /api/agents` response by @cpacker in https://github.com/cpacker/MemGPT/pull/1125
- `persona/human_name` fields to `Preset` by @cpacker in https://github.com/cpacker/MemGPT/pull/1134
- `0.3.6` by @cpacker in https://github.com/cpacker/MemGPT/pull/1114
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.5...0.3.6
## 0.3.5: 🦗 Bugfix release

- `server.server_llm_config` information for REST endpoint by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1083
- `0.3.5` + add AutoGen integration tests by @cpacker in https://github.com/cpacker/MemGPT/pull/1081
- `test_metadata.py` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1082
- `autoflake` + add `autoflake` to dev extras by @cpacker in https://github.com/cpacker/MemGPT/pull/1097
- `TokenTextSplitter` for more reliable chunking by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1098
- `llama-index-embeddings-huggingface` for extras `local` by @sarahwooders in https://github.com/cpacker/MemGPT/pull/1099
- `0.3.5` by @cpacker in https://github.com/cpacker/MemGPT/pull/1091
- `GET` REST API route for listing tools by @cpacker in https://github.com/cpacker/MemGPT/pull/1100
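Token-based splitting (as with `TokenTextSplitter`) sizes chunks by token count so they fit an embedding model's budget, usually with some overlap between neighbors. A simplified sketch using whitespace tokens in place of a real tokenizer such as tiktoken:

```python
def split_by_tokens(text: str, chunk_size: int = 8, overlap: int = 2) -> list[str]:
    # Whitespace "tokens" stand in for a real tokenizer; consecutive
    # chunks share `overlap` tokens so context isn't cut mid-thought.
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunk = tokens[start:start + chunk_size]
        chunks.append(" ".join(chunk))
        if start + chunk_size >= len(tokens):
            break
    return chunks

chunks = split_by_tokens("one two three four five six seven eight nine ten",
                         chunk_size=4, overlap=1)
```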
Full Changelog: https://github.com/cpacker/MemGPT/compare/0.3.4...0.3.5