Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB).
We're excited to announce the addition of observability features on our hosted platform. They allow your team to monitor and evaluate your production usage of LLMs with just a one-line code change:
```python
import prompttools.logger
```
The new features are integrated with our open-source library as well as the PromptTools Playground. Our goal is to enable you to deploy LLM applications reliably and to observe any issues in real time.
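One way to picture how a single import can enable monitoring: the sketch below (illustrative only, using a hypothetical `instrument` helper and a stand-in `fake_completion` function, not prompttools' actual logger) wraps each model call so its latency is recorded as a side effect.

```python
import functools
import time

# Illustrative sketch only, not prompttools' implementation: calls are
# wrapped at definition/import time so that normal usage of the wrapped
# function is automatically logged.
call_log = []

def instrument(fn):
    """Wrap a function so each call's name and latency are recorded."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        call_log.append({"fn": fn.__name__, "latency_s": time.time() - start})
        return result
    return wrapper

@instrument
def fake_completion(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"echo: {prompt}"

out = fake_completion("hi")
```

After the call, `call_log` holds one entry describing the invocation; the caller's code is otherwise unchanged, which is the idea behind a one-line integration.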
If you are interested in trying out the platform, please reach out to us.
We remain committed to expanding this open-source library and look forward to building more development tools that enable you to iterate faster with AI models. Please have a look at our open issues to see what features are coming.
Support for `openai` version 1.0+.

If you have suggestions on the API or use cases you'd like us to cover, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.
Full Changelog: https://github.com/hegelai/prompttools/compare/v0.0.41...v0.0.45
We're excited to announce the private beta of PromptTools Playground! It is a hosted platform integrated with our open-source library. It persists your experiments with version control and provides collaboration features suited for teams.
If you are interested in trying out the platform, please reach out to us. We remain committed to expanding this open-source library and look forward to building more development tools that enable you to iterate faster with AI models.
- `run_one` and `run_partial` for `OpenAIChatExperiment`
- `save_experiment`
- `load_experiment`
- `chunk_text`
- `autoeval_with_documents`
- `structural_similarity`
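As a rough illustration of what a text-chunking utility in the spirit of `chunk_text` does, here is a naive word-based sketch (the `chunk_words` helper below is hypothetical; the library's actual signature and behavior may differ):

```python
# Hypothetical, naive word-based chunker for illustration only.
def chunk_words(text: str, max_words: int = 50) -> list[str]:
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

chunks = chunk_words("one two three four five", max_words=2)
# chunks == ["one two", "three four", "five"]
```

Chunking like this is typically a preprocessing step before embedding documents into a vector database.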
Shout out to @HashemAlsaket, @bweber-rebellion, @imalsky, @kacperlukawski for actively participating and contributing new features!
If you are interested in a hosted version of `prompttools` with more features for your team, please reach out.
`OpenAIChatExperiment` can now call functions.

There are also many fixes and improvements to different experiments. Notably, we refactored how `evaluate` works. In this version, the evaluation function passed into `experiment.evaluate()` should handle a row of data plus other optional keyword arguments. Please see our updated example notebooks as references.
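To make the new contract concrete, here is a minimal sketch of such an evaluation function. The row is modeled as a plain dict and `contains_keyword` is a hypothetical metric; consult the example notebooks for real usage.

```python
# Hypothetical metric following the refactored evaluate() contract:
# it receives one row of experiment results plus optional keyword
# arguments, and returns a score for that row.
def contains_keyword(row: dict, keyword: str = "hello") -> float:
    return 1.0 if keyword.lower() in row["response"].lower() else 0.0

# With prompttools, you would pass it in roughly like:
#   experiment.evaluate("contains_keyword", contains_keyword, keyword="hi")
row = {"prompt": "Say hi", "response": "Hello there!"}
score = contains_keyword(row, keyword="hello")
```

Because extra parameters arrive as keyword arguments, the same metric function can be reused across experiments with different settings.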
The playground now supports shareable links. You can use the `Share` button to create a link and share your experiment setup with your teammates.
Shout out to @HashemAlsaket, @AyushExel, @pramitbhatia25, and @mmmaia for actively participating and contributing new features!
Major features added recently:
If you would like to execute your experiments in a Streamlit UI rather than in a notebook, you can do that with:

```shell
pip install prompttools
git clone https://github.com/hegelai/prompttools.git
cd prompttools && streamlit run prompttools/playground/playground.py
```
Shout out to @HashemAlsaket for actively participating and contributing new features!