Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB).
:wrench: Test and experiment with prompts, LLMs, and vector databases. :hammer:
Welcome to `prompttools`, created by Hegel AI! This repo offers a set of open-source, self-hostable tools for experimenting with, testing, and evaluating LLMs, vector databases, and prompts. The core idea is to enable developers to evaluate using familiar interfaces like code, notebooks, and a local playground.
In just a few lines of code, you can test your prompts and parameters across different models (whether you are using OpenAI, Anthropic, or LLaMA models). You can even evaluate the retrieval accuracy of vector databases.
```python
from prompttools.experiment import OpenAIChatExperiment

messages = [
    [{"role": "user", "content": "Tell me a joke."}],
    [{"role": "user", "content": "Is 17077 a prime number?"}],
]
models = ["gpt-3.5-turbo", "gpt-4"]
temperatures = [0.0]

openai_experiment = OpenAIChatExperiment(models, messages, temperature=temperatures)
openai_experiment.run()
openai_experiment.visualize()
```
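Conceptually, an experiment like the one above sweeps every combination of its argument lists (here, 2 models × 2 message lists × 1 temperature = 4 runs). A minimal standard-library sketch of that expansion — this illustrates the idea only, not the actual prompttools internals:

```python
from itertools import product

# Same argument lists as the quick-start example above.
models = ["gpt-3.5-turbo", "gpt-4"]
messages = [
    [{"role": "user", "content": "Tell me a joke."}],
    [{"role": "user", "content": "Is 17077 a prime number?"}],
]
temperatures = [0.0]

# Every (model, message list, temperature) combination gets its own run.
combinations = list(product(models, messages, temperatures))
print(len(combinations))  # 2 models x 2 message lists x 1 temperature = 4
```

Each resulting combination corresponds to one row in the experiment's result table, which is what `visualize()` displays.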
To stay in touch with us about issues and future updates, join the Discord.
To install `prompttools`, you can use `pip`:

```shell
pip install prompttools
```
You can run a simple `prompttools` example notebook locally with the following commands:

```shell
git clone https://github.com/hegelai/prompttools.git
cd prompttools && jupyter notebook examples/notebooks/OpenAIChatExperiment.ipynb
```
You can also run the notebook in Google Colab.
If you want to interact with `prompttools` using our playground interface, you can launch it with the following commands.
First, install the dependencies:

```shell
pip install notebook  # if Jupyter Notebook is not already installed
pip install prompttools
```
Then, clone the git repo and launch the Streamlit app:

```shell
git clone https://github.com/hegelai/prompttools.git
cd prompttools && streamlit run prompttools/playground/playground.py
```
You can also access a hosted version of the playground on the Streamlit Community Cloud.
Note: the hosted version does not support LlamaCpp.
Our documentation website contains the full API reference and more detailed descriptions of individual components. Check it out!
Here is a list of API categories that we support with our experiments:

- LLMs
- Vector Databases and Data Utility
- Frameworks
- Computer Vision
If there is an API that you'd like to see supported, please open an issue or a PR to add it. Feel free to discuss it in our Discord channel as well.
Will this library forward my LLM calls to a server before sending them to OpenAI, Anthropic, etc.?
Does `prompttools` store my API keys or LLM inputs and outputs on a server?
How do I persist my results?

To persist the results of your tests and experiments, you can export your `Experiment` with the methods `to_csv`, `to_json`, `to_lora_json`, or `to_mongo_db`. We are building more persistence features, and we would be happy to discuss your use cases, pain points, and which export options would be useful for you.

Since our API is changing rapidly, you may run into errors caused by out-of-date documentation. To improve the user experience, we collect data from normal package usage that helps us understand the errors being raised. This data is collected and sent to Sentry, a third-party error-tracking service commonly used in open-source software. It only logs this library's own actions.
You can easily opt out by defining an environment variable called `SENTRY_OPT_OUT`.
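For example, in a POSIX shell you could set the variable before running anything that imports the library. Per the wording above, only the variable being defined matters; the specific value `1` here is our own choice:

```shell
# Define SENTRY_OPT_OUT before launching anything that imports prompttools.
export SENTRY_OPT_OUT=1
```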
We welcome PRs and suggestions! Don't hesitate to open a PR/issue or to reach out to us via email. Please have a look at our contribution guide and "Help Wanted" issues to get started!
We will be delighted to work with early adopters to shape our designs. Please reach out to us via email if you're interested in using this tooling for your project or have any feedback.
We will be gradually releasing more components to the open-source community. The current license can be found in the LICENSE file. If there is any concern, please contact us and we will be happy to work with you.