Build Production-Grade AI Applications
BentoML is a framework for building reliable, scalable, and cost-efficient AI applications. It comes with everything you need for model serving, application packaging, and production deployment.
👉 Join our Slack community!

pip install bentoml
This example demonstrates how to serve and deploy a simple text summarization application.
Install dependencies:
pip install torch transformers "bentoml>=1.2.0a0"
Define the serving logic of your model in a service.py file.
from __future__ import annotations

import bentoml
from transformers import pipeline


@bentoml.service(
    resources={"cpu": "2"},
    traffic={"timeout": 10},
)
class Summarization:
    def __init__(self) -> None:
        # Load model into pipeline
        self.pipeline = pipeline('summarization')

    @bentoml.api
    def summarize(self, text: str) -> str:
        result = self.pipeline(text)
        return result[0]['summary_text']
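Under @bentoml.api, the keys of the incoming JSON request body are mapped onto the method's keyword parameters. Conceptually, the mapping works like the following simplified sketch (this is illustrative only, not BentoML's actual dispatch code; the `summarize` stand-in here skips the real pipeline):

```python
import inspect
import json

def dispatch(handler, raw_body: str):
    """Map JSON body keys onto the handler's keyword parameters (simplified)."""
    kwargs = json.loads(raw_body)
    # Raises TypeError if the JSON keys don't match the handler's signature
    inspect.signature(handler).bind(**kwargs)
    return handler(**kwargs)

def summarize(text: str) -> str:
    # Stand-in for the real summarization pipeline
    return text.upper()

result = dispatch(summarize, '{"text": "hello world"}')
```

This is why the curl example below sends a JSON object with a `"text"` key: it matches the `text` parameter of `summarize`.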
Run this BentoML Service locally; it will be accessible at http://localhost:3000.
bentoml serve service:Summarization
Send a request to summarize a short news paragraph:
curl -X 'POST' \
'http://localhost:3000/summarize' \
-H 'accept: text/plain' \
-H 'Content-Type: application/json' \
-d '{
"text": "Breaking News: In an astonishing turn of events, the small town of Willow Creek has been taken by storm as local resident Jerry Thompson'\''s cat, Whiskers, performed what witnesses are calling a '\''miraculous and gravity-defying leap.'\'' Eyewitnesses report that Whiskers, an otherwise unremarkable tabby cat, jumped a record-breaking 20 feet into the air to catch a fly. The event, which took place in Thompson'\''s backyard, is now being investigated by scientists for potential breaches in the laws of physics. Local authorities are considering a town festival to celebrate what is being hailed as '\''The Leap of the Century."
}'
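The shell quoting in the curl body above is easy to get wrong; the same request can also be assembled from Python using only the standard library (a sketch; the URL assumes the Service above is running locally, and the text is a placeholder):

```python
import json
from urllib.request import Request

SUMMARIZE_URL = "http://localhost:3000/summarize"  # assumes a locally running Service

def build_request(text: str, url: str = SUMMARIZE_URL) -> Request:
    """Build the same POST request as the curl call above."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return Request(url, data=payload, headers={"Content-Type": "application/json"})

req = build_request("Breaking News: ...")
# To send it: urllib.request.urlopen(req).read().decode("utf-8")
```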
After your Service is ready, you can deploy it to BentoCloud or as a Docker image.
First, create a bentofile.yaml file for building a Bento.
service: "service:Summarization"
labels:
  owner: bentoml-team
  project: gallery
include:
  - "*.py"
python:
  packages:
    - torch
    - transformers
Then, choose one of the following ways to deploy it:
Make sure you have logged in to BentoCloud and then run the following command:
bentoml deploy .
Build a Bento to package necessary dependencies and components into a standard distribution format.
bentoml build
Containerize the Bento.
bentoml containerize summarization:latest
Run this image with Docker.
docker run --rm -p 3000:3000 summarization:latest
For detailed explanations, read Quickstart.
BentoML supports billions of model runs per day and is used by thousands of organizations around the globe.
Join our Community Slack 💬, where thousands of AI application developers contribute to the project and help each other.
To report a bug or suggest a feature request, use GitHub Issues.
There are many ways to contribute to the project. To get started, join the #bentoml-contributors channel in our Community Slack.

Thanks to all of our amazing contributors!
BentoML collects usage data that helps our team to improve the product. Only BentoML's internal API calls are being reported. We strip out as much potentially sensitive information as possible, and we will never collect user code, model data, model names, or stack traces. Here's the code for usage tracking. You can opt out of usage tracking with the --do-not-track CLI option:
bentoml [command] --do-not-track
Or by setting the environment variable BENTOML_DO_NOT_TRACK=True:
export BENTOML_DO_NOT_TRACK=True
If you use BentoML in your research, please cite it using the following BibTeX entry:
@software{Yang_BentoML_The_framework,
author = {Yang, Chaoyu and Sheng, Sean and Pham, Aaron and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
license = {Apache-2.0},
title = {{BentoML: The framework for building reliable, scalable and cost-efficient AI application}},
url = {https://github.com/bentoml/bentoml}
}