Basaran


Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.

The open source community will eventually witness the Stable Diffusion moment for large language models (LLMs), and Basaran allows you to replace OpenAI's service with the latest open-source model to power your application without modifying a single line of code.

The key features of Basaran are:

  • Streaming generation using various decoding strategies.
  • Support for both decoder-only and encoder-decoder models.
  • Detokenizer that handles surrogates and whitespace.
  • Multi-GPU support with optional quantization.
  • Real-time partial progress using server-sent events.
  • Compatibility with OpenAI API and client libraries.
  • Comes with a fancy web-based playground!

Quick Start

TL;DR

Replace user/repo with your selected model and X.Y.Z with the latest version, then run:

docker run -p 80:80 -e MODEL=user/repo hyperonym/basaran:X.Y.Z

And you're good to go! 🚀

Playground: http://127.0.0.1/
API:        http://127.0.0.1/v1/completions

Installation

Docker images are available on Docker Hub and GitHub Packages.

For GPU acceleration, you also need to install the NVIDIA Driver and NVIDIA Container Runtime. Basaran's image already comes with related libraries such as CUDA and cuDNN, so there is no need to install them manually.

Basaran's image can be used in three ways:

  • Run directly: By specifying the MODEL="user/repo" environment variable, the corresponding model can be downloaded from Hugging Face Hub during the first startup.
  • Bundling: Create a new Dockerfile to preload a public model or bundle a private model.
  • Bind mount: Mount a model from the local file system into the container and point the MODEL environment variable to the corresponding path.

For the above use cases, you can find sample Dockerfiles and docker-compose files in the deployments directory.
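For the bundling case, a minimal Dockerfile might look like the following. This is a sketch, not the official recipe from the deployments directory: the use of huggingface_hub to prefetch the model is an assumption, and user/repo and X.Y.Z are placeholders as above.

```dockerfile
# Hypothetical Dockerfile that preloads a public model at build time,
# so the container does not download it on first startup.
FROM hyperonym/basaran:X.Y.Z

# Prefetch the model into the Hugging Face cache inside the image.
RUN python -c "from huggingface_hub import snapshot_download; snapshot_download('user/repo')"

ENV MODEL=user/repo
```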

Using pip

Basaran is tested on Python 3.8+ and PyTorch 1.13+. You should create a virtual environment with the version of Python you want to use, and activate it before proceeding.

  1. Install with pip:

     pip install basaran

  2. Install dependencies required for GPU acceleration (optional):

     pip install accelerate bitsandbytes

  3. Replace user/repo with the selected model and run Basaran:

     MODEL=user/repo PORT=80 python -m basaran

For a complete list of environment variables, see __init__.py.

Running From Source

If you want access to the latest features or want to hack on it yourself, you can run from source using Git.

  1. Clone the repository:

     git clone https://github.com/hyperonym/basaran.git && cd basaran

  2. Install dependencies:

     pip install -r requirements.txt

  3. Replace user/repo with the selected model and run Basaran:

     MODEL=user/repo PORT=80 python -m basaran

Basic Usage

cURL

Basaran's HTTP request and response formats are consistent with the OpenAI API.

Taking text completion as an example:

curl http://127.0.0.1/v1/completions \
    -H 'Content-Type: application/json' \
    -d '{ "prompt": "once upon a time,", "echo": true }'
Example response
{
    "id": "cmpl-e08c701b4ba032c09ef080e1",
    "object": "text_completion",
    "created": 1678003509,
    "model": "bigscience/bloomz-560m",
    "choices": [
        {
            "text": "once upon a time, the human being faces a complicated situation and he needs to find a new life.",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 5,
        "completion_tokens": 21,
        "total_tokens": 26
    }
}
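The same request can be issued from Python using only the standard library. This is a sketch; the endpoint and field names follow the cURL example and sample response above.

```python
import json
import urllib.request

# Build the same completion request shown in the cURL example.
payload = {"prompt": "once upon a time,", "echo": True}
req = urllib.request.Request(
    "http://127.0.0.1/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

def first_text(response_body: bytes) -> str:
    """Extract the generated text of the first choice from a completion response."""
    completion = json.loads(response_body)
    return completion["choices"][0]["text"]

# With a running server: print(first_text(urllib.request.urlopen(req).read()))
```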

OpenAI Client Library

If your application uses client libraries provided by OpenAI, you only need to modify the OPENAI_API_BASE environment variable to match Basaran's endpoint:

OPENAI_API_BASE="http://127.0.0.1/v1" python your_app.py

The examples directory contains examples of using the OpenAI Python library.
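The same redirection can also be done from inside the application, before the OpenAI client reads its configuration. A sketch; the dummy key is an assumption — the official client insists on a key being set, while Basaran does not appear to validate it.

```python
import os

# Point any OpenAI client library at the local Basaran endpoint.
# These must be set before the client library reads its configuration.
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1/v1"
os.environ["OPENAI_API_KEY"] = "dummy"  # placeholder; assumed to be ignored by Basaran
```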

Using as a Python Library

Basaran is also available as a library on PyPI. It can be used directly in Python without the need to start a separate API server.

  1. Install with pip:

     pip install basaran

  2. Use the load_model function to load a model:

     from basaran.model import load_model

     model = load_model("user/repo")

  3. Generate streaming output by calling the model:

     for choice in model("once upon a time"):
         print(choice)

The examples directory contains examples of using Basaran as a library.
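Because the model call yields partial outputs as they are generated, a common pattern is to accumulate them into the final text. The sketch below uses a stand-in generator in place of a loaded model, and treats each streamed choice as a plain string as in the print example above (the exact shape of the choice objects is an assumption):

```python
def stream_to_text(choices) -> str:
    """Accumulate streamed partial outputs into one string."""
    parts = []
    for choice in choices:
        parts.append(str(choice))
    return "".join(parts)

# Stand-in for `model("once upon a time")`, which yields partial outputs.
fake_stream = iter(["once", " upon", " a", " time"])
print(stream_to_text(fake_stream))  # -> "once upon a time"
```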

Compatibility

Basaran's API format is consistent with OpenAI's, with differences in compatibility mainly in terms of parameter support and response fields. The following sections provide detailed information on the compatibility of each endpoint.

Models

Each Basaran process serves only one model, so the result will only contain that model.

Completions

Although Basaran does not support the model parameter, the OpenAI client library requires it to be present, so any placeholder model name can be supplied.

(● = supported, ○ = not supported)

Parameter           Basaran   OpenAI   Default Value   Maximum Value
model               ○         ●       -               -
prompt              ●         ●       ""              COMPLETION_MAX_PROMPT
suffix              ○         ●       -               -
min_tokens          ●         ○       0               COMPLETION_MAX_TOKENS
max_tokens          ●         ●       16              COMPLETION_MAX_TOKENS
temperature         ●         ●       1.0             -
top_p               ●         ●       1.0             -
n                   ●         ●       1               COMPLETION_MAX_N
stream              ●         ●       false           -
logprobs            ●         ●       0               COMPLETION_MAX_LOGPROBS
echo                ●         ●       false           -
stop                ○         ●       -               -
presence_penalty    ○         ●       -               -
frequency_penalty   ○         ●       -               -
best_of             ○         ●       -               -
logit_bias          ○         ●       -               -
user                ○         ●       -               -
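With stream set to true, partial progress arrives as server-sent events. Below is a minimal parser for that framing, as a sketch assuming the OpenAI-style `data: <json>` lines terminated by `data: [DONE]`:

```python
import json

def parse_sse_events(lines):
    """Yield decoded JSON payloads from OpenAI-style server-sent event lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Example with hand-written event lines:
sample = [
    'data: {"choices": [{"text": "once"}]}',
    "",
    'data: {"choices": [{"text": " upon"}]}',
    "data: [DONE]",
]
texts = [event["choices"][0]["text"] for event in parse_sse_events(sample)]
print("".join(texts))  # -> "once upon"
```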

Chat

Providing a unified chat API is currently difficult because each model has a different format for chat history.

Therefore, it is recommended to pre-format the chat history based on the requirements of the specific model and use it as the prompt for the completion API.

GPT-NeoXT-Chat-Base-20B

**Summarize a long document into a single sentence and ...**

<human>: Last year, the travel industry saw a big ...

<bot>: If you're traveling this spring break, ...

<human>: But ...

<bot>:
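Following the recommendation above, this format can be produced by a small helper. A sketch: the `<human>:`/`<bot>:` tags and blank-line separation come from the example, while the function name and history representation are assumptions.

```python
def format_neoxt_prompt(history, system=""):
    """Render chat history as a GPT-NeoXT-Chat-Base-20B style prompt.

    `history` is a list of (speaker, text) pairs, where speaker is
    "human" or "bot"; the trailing "<bot>:" invites the next reply.
    """
    lines = []
    if system:
        lines.append(system)
    for speaker, text in history:
        lines.append(f"<{speaker}>: {text}")
    lines.append("<bot>:")
    return "\n\n".join(lines)

prompt = format_neoxt_prompt(
    [("human", "Last year, the travel industry saw a big rebound."),
     ("bot", "If you're traveling this spring break, book early.")],
)
print(prompt)
```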

chatglm-6b

[Round 0]
问：你好
答：你好!有什么我可以帮助你的吗?
[Round 1]
问：你是谁？
答：

(问 = Q, 答 = A. The sample exchange reads: "Hello" / "Hello! Is there anything I can help you with?" / "Who are you?")

Roadmap

  • API
    • Models
      • List models
      • Retrieve model
    • Completions
      • Create completion
    • Chat
      • Create chat completion
  • Model
    • Architectures
      • Encoder-decoder
      • Decoder-only
    • Decoding strategies
      • Random sampling with temperature
      • Nucleus sampling (top-p)
      • Stop sequences
      • Presence and frequency penalties

See the open issues for a full list of proposed features.

Contributing

This project is open-source. If you have any ideas or questions, please feel free to reach out by creating an issue!

Contributions are greatly appreciated, please refer to CONTRIBUTING.md for more information.

License

Basaran is available under the MIT License.


ยฉ 2023 Hyperonym
