Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). With Inferflow, users can serve most common transformer models by simply modifying a few lines in the corresponding configuration files, without writing a single line of source code. Further details can be found in our technical report.
Below is a comparison between Inferflow and some other inference engines:
Inference Engine | New Model Support | Supported File Formats | Network Structures | Quantization Bits | Hybrid Parallelism for Multi-GPU Inference | Programming Languages |
---|---|---|---|---|---|---|
Huggingface Transformers | Adding/editing source codes | pickle (unsafe), safetensors | decoder-only, encoder-decoder, encoder-only | 4b, 8b | ✘ | Python |
vLLM | Adding/editing source codes | pickle (unsafe), safetensors | decoder-only | 4b, 8b | ✘ | Python |
TensorRT-LLM | Adding/editing source codes | | decoder-only, encoder-decoder, encoder-only | 4b, 8b | ✘ | C++, Python |
DeepSpeed-MII | Adding/editing source codes | pickle (unsafe), safetensors | decoder-only | - | ✘ | Python |
llama.cpp | Adding/editing source codes | gguf | decoder-only | 2b, 3b, 4b, 5b, 6b, 8b | ✘ | C/C++ |
llama2.c | Adding/editing source codes | llama2.c | decoder-only | - | ✘ | C |
LMDeploy | Adding/editing source codes | pickle (unsafe), TurboMind | decoder-only | 4b, 8b | ✘ | C++, Python |
Inferflow | Editing configuration files | pickle (safe), safetensors, gguf, llama2.c | decoder-only, encoder-decoder, encoder-only | 2b, 3b, 3.5b, 4b, 5b, 6b, 8b | ✔ | C++ |
Supported modules and technologies related to model definition:
Supported technologies and options related to serving:
Users can serve a model with Inferflow by editing a model specification file. We have built predefined specification files for some popular or representative models. Below is a list of such models.
Windows users: Please refer to docs/getting_started.win.md for the instructions about building and running the Inferflow tools and service on Windows.
The following instructions are for Linux, Mac, and WSL (Windows Subsystem for Linux).
git clone https://github.com/inferflow/inferflow
cd inferflow
Build the GPU version (that supports GPU/CPU hybrid inference):
mkdir build/gpu
cd build/gpu
cmake ../.. -DUSE_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES=75
make install -j 8
Build the CPU-only version:
mkdir build/cpu
cd build/cpu
cmake ../.. -DUSE_CUDA=0
make install -j 8
Upon a successful build, executables are generated and copied to
bin/release/
Example-1: Load a tiny model and perform inference
Step-1: Download the model
#> cd {inferflow-root-dir}/data/models/llama2.c/
#> bash download.sh
Instead of running the above script, you can also manually download the model files and copy them to the above folder. The source URL and file names can be found in download.sh.
Step-2: Run the llm_inference tool:
#> cd {inferflow-root-dir}/bin/
#> release/llm_inference llm_inference.tiny.ini
Please note that llm_inference and llm_inference.tiny.ini do not need to be in the same folder (llm_inference.tiny.ini is in bin/ and llm_inference is in bin/release/).
Example-2: Run the llm_inference tool to load a larger model for inference
Step-1: Edit the configuration file bin/inferflow_service.ini to choose a model.
In the "transformer_engine" section of bin/inferflow_service.ini, there are multiple lines starting with "models =
" or ";models =
".
The lines starting with the ";" character are comments.
To choose a model for inference, uncomment the line corresponding to that model and comment out the lines for the other models.
By default, the phi-2 model is selected.
Please refer to docs/model_serving_config.md for more information about editing the configuration of inferflow_service.
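As a quick check of which model is currently selected, here is a minimal Python sketch (an illustration, not part of Inferflow) that scans inferflow_service.ini and prints the active and commented "models =" lines. It relies only on the ";" comment convention described above and assumes it is run from the bin/ directory.

# Minimal sketch: list active vs. commented "models =" lines.
# Run from the bin/ directory so that the relative path resolves.
with open("inferflow_service.ini", encoding="utf-8") as f:
    for line in f:
        stripped = line.strip()
        if stripped.startswith("models ="):
            print("active   :", stripped)
        elif stripped.startswith(";models ="):
            print("commented:", stripped)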
Step-2: Download the selected model
#> cd {inferflow-root-dir}/data/models/{model-name}/
#> bash download.sh
Step-3: Edit the configuration file bin/llm_inference.ini to choose or edit a query.
In the configuration file, queries are organized into query lists. A query list can contain one or multiple queries.
Different query lists are for different purposes. For example, query_list.decoder_only is for testing decoder-only models. Its detailed information can be configured in the query_list.decoder_only section.
The starting line of this section is "query_count = 1", which means only one query is included in this query list.
Among the following lines with key query1, only one line is uncommented and therefore effective, whereas the other lines (i.e., the lines starting with a ";" character) are commented.
You can choose a query for testing by uncommenting it and commenting out all the other queries. You can, of course, also add new queries or edit the content of an existing query.
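For illustration, the following Python sketch reads such a query list with the standard configparser module. It assumes the file follows common INI conventions (sections in square brackets, ";" for comment lines) and uses the section and key names mentioned above; the actual Inferflow configuration dialect may differ in details.

import configparser

# Minimal sketch: inspect the query list for decoder-only models.
# Run from the bin/ directory; interpolation is disabled so "%" in query text is kept literally.
config = configparser.ConfigParser(interpolation=None)
config.read("llm_inference.ini")

section = config["query_list.decoder_only"]
print("query_count:", section.get("query_count"))
print("query1:", section.get("query1"))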
Step-4: Run the tool:
#> cd {inferflow-root-dir}/bin/
#> release/llm_inference
Step-1: Edit the service configuration file (bin/inferflow_service.ini)
Step-2: Start the service:
#> cd bin
#> release/inferflow_service
Run an HTTP client to interact with the Inferflow service via the HTTP protocol and get inference results.
Option-1. Run the Inferflow client tool: inferflow_client
Step-1: Edit the configuration file (bin/inferflow_client.ini) to set the service address, query text, and options.
Step-2: Run the client tool to get inference results.
#> cd bin
#> release/inferflow_client
Option-2. The curl command
You can also use the curl command to send an HTTP POST request to the Inferflow service and get inference results. Below is an example:
curl -X POST -d '{"text": "Write an article about the weather of Seattle.", "res_prefix": "", "decoding_alg": "sample.top_p", "random_seed": 1, "temperature": 0.7, "is_streaming_mode": false}' localhost:8080
Option-3. Use a GUI REST client (e.g., the Tabbed Postman Chrome extension).
URL: http://localhost:8080
(If you access the service from a different machine, please replace "localhost" with the service IP)
HTTP method: POST
Example body text: {"text": "Write an article about the weather of Seattle.", "res_prefix": "", "decoding_alg": "sample.top_p", "random_seed": 1, "temperature": 0.7, "is_streaming_mode": 0}
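The same request can also be sent from Python. Below is a minimal sketch using the third-party requests library, with the request body copied from the examples above and the service address assumed to be localhost:8080 as in those examples.

import requests  # pip install requests

# Request body mirroring the curl/Postman examples above.
body = {
    "text": "Write an article about the weather of Seattle.",
    "res_prefix": "",
    "decoding_alg": "sample.top_p",
    "random_seed": 1,
    "temperature": 0.7,
    "is_streaming_mode": False,
}

# Send the request to a locally running Inferflow service and print the raw response.
response = requests.post("http://localhost:8080", json=body)
print(response.text)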
The Inferflow service also provides support for OpenAI's Chat Completions API. The API can be tested in one of the following ways.
Option-1: The OpenAI Python API Library
Below is the sample code. Please install the openai Python module (pip install openai) before running it.
import openai

# Point the OpenAI client at the local Inferflow service.
openai.base_url = "http://localhost:8080"
openai.api_key = "sk-no-key-required"

is_streaming = True
response = openai.chat.completions.create(
    model="default",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write an article about the weather of Seattle."}
    ],
    stream=is_streaming
)

if is_streaming:
    # Print the response incrementally as chunks arrive.
    for chunk in response:
        print(chunk.choices[0].delta.content or "", end="")
else:
    print(response.choices[0].message.content)
Option-2: The curl command
curl -X POST -d '{"model": "gpt-3.5-turbo","messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Write an article about the weather of Seattle."}], "stream": true}' http://localhost:8080/chat/completions
If you are interested in our work, please kindly cite:
@misc{shi2024inferflow,
title={Inferflow: an Efficient and Highly Configurable Inference Engine for Large Language Models},
author={Shuming Shi and Enbo Zhao and Deng Cai and Leyang Cui and Xinting Huang and Huayang Li},
year={2024},
eprint={2401.08294},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Inferflow is inspired by the awesome projects of llama.cpp and llama2.c. The CPU inference part of Inferflow is based on the ggml library. The FP16 data type in the CPU-only version of Inferflow is from the Half-precision floating-point library. We express our sincere gratitude to the maintainers and implementers of these projects and tools.