Easy and Efficient Finetuning of QLoRA LLMs (supported: LLaMA, LLaMA-2, BLOOM, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models.
👋🤗🤗👋 Join our WeChat.
中文 | English
This is the repo for the Efficient Finetuning of Quantized LLMs
project, which aims to build and share instruction-following Chinese baichuan-7b/LLaMA/Pythia/GLM
model tuning methods that can be trained on a single NVIDIA RTX 2080 Ti, and a multi-turn chatbot that can be trained on a single NVIDIA RTX 3090 with a context length of 2048.
We use bitsandbytes for quantization, integrated with Hugging Face's PEFT and transformers libraries.
- Use `--model_name_or_path Llama-2-7b-hf` to use the LLaMA-2 model.
- Use `--model_name_or_path path_to_baichuan_model` and `--lora_target W_pack` to train the Baichuan-13B model.
- Use `--model_name_or_path tiiuae/falcon-7b` and `--lora_target query_key_value` to use the Falcon model.
- Use `--model_name_or_path baichuan-inc/baichuan-7B` to use the baichuan-7B model.
- Run `scripts/qlora_finetune/finetune_llama_guanaco7b.sh` and set `--bits 4/8` to work with a quantized model.
- Run `scripts/lora_finetune/lora-finetune_alpaca.sh` to finetune the LLaMA model with LoRA on the Alpaca dataset.
- Run `scripts/full_finetune/full-finetune_alpaca.sh` to fully finetune the LLaMA model on the Alpaca dataset.

As of now, we support the following datasets, most of which are available in the Hugging Face datasets library.
For supervised fine-tuning:
For reward model training:
Please refer to data/README.md to learn how to use these datasets. If you want to explore more datasets, please refer to awesome-instruction-datasets. By default, we use the Stanford Alpaca dataset for training and evaluation.
Some datasets require confirmation before use, so we recommend logging in with your Hugging Face account using these commands:
pip install --upgrade huggingface_hub
huggingface-cli login
We provide a number of data preprocessing tools in the data folder. These tools are intended to be a starting point for further research and development.
We provide a number of models in the Hugging Face model hub. These models are trained with QLoRA and can be used for inference and finetuning. We provide the following models:
| Base Model | Adapter | Instruct Datasets | Train Script | Log | Model on Huggingface |
|---|---|---|---|---|---|
| llama-7b | FullFinetune | - | - | - | - |
| llama-7b | QLoRA | openassistant-guanaco | finetune_lamma7b | wandb log | GaussianTech/llama-7b-sft |
| llama-7b | QLoRA | OL-CC | finetune_lamma7b | - | - |
| baichuan7b | QLoRA | openassistant-guanaco | finetune_baichuan7b | wandb log | GaussianTech/baichuan-7b-sft |
| baichuan7b | QLoRA | OL-CC | finetune_baichuan7b | wandb log | - |
CUDA >= 11.0
Python 3.8+ and PyTorch 1.13.1+
🤗Transformers, Datasets, Accelerate, PEFT and bitsandbytes
jieba, rouge_chinese and nltk (used at evaluation)
gradio (used in gradio_webserver.py)
To load models in 4-bit with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library (0.39.0). You can achieve the above with the following commands:
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
Clone this repository and navigate to the Efficient-Tuning-LLMs folder
git clone https://github.com/jianzhnie/Efficient-Tuning-LLMs.git
cd Efficient-Tuning-LLMs
| Main function | Usage | Scripts |
|---|---|---|
| train.py | Full finetune LLMs on SFT datasets | full_finetune |
| train_lora.py | Finetune LLMs with LoRA (Low-Rank Adaptation of Large Language Models) | lora_finetune |
| train_qlora.py | Finetune LLMs with QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs) | qlora_finetune |
The train_qlora.py
code is a starting point for finetuning and inference on various datasets.
Basic command for finetuning a baseline model on the Alpaca dataset:
python train_qlora.py --model_name_or_path <path_or_name>
For models larger than 13B, we recommend adjusting the learning rate:
python train_qlora.py --learning_rate 0.0001 --model_name_or_path <path_or_name>
We can also tweak our hyperparameters:
python train_qlora.py \
--model_name_or_path ~/checkpoints/baichuan7b \
--dataset_cfg ./data/alpaca_zh_pcyn.yaml \
--output_dir ./work_dir/oasst1-baichuan-7b \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy steps \
--eval_steps 50 \
--save_strategy steps \
--save_total_limit 5 \
--save_steps 100 \
--logging_strategy steps \
--logging_steps 1 \
--learning_rate 0.0002 \
--warmup_ratio 0.03 \
--weight_decay 0.0 \
--lr_scheduler_type constant \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--max_new_tokens 32 \
--source_max_len 512 \
--target_max_len 512 \
--lora_r 64 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--double_quant \
--quant_type nf4 \
--fp16 \
--bits 4 \
--gradient_checkpointing \
--trust_remote_code \
--do_train \
--do_eval \
--sample_generate \
--data_seed 42 \
--seed 0
To find more scripts for finetuning and inference, please refer to the scripts
folder.
Quantization parameters are controlled via the `BitsAndBytesConfig`
(see the HF documentation) as follows:
- `load_in_4bit`
- `bnb_4bit_compute_dtype`
- `bnb_4bit_use_double_quant`
- `bnb_4bit_quant_type`

Note that there are two supported quantization data types: `fp4`
(four-bit float) and `nf4`
(normal four-bit float). The latter is theoretically optimal for normally distributed weights, so we recommend using `nf4`.

model = AutoModelForCausalLM.from_pretrained(
    '/name/or/path/to/your/model',
    load_in_4bit=True,
    device_map='auto',
    max_memory=max_memory,
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4',
    ),
)
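To illustrate why `nf4` suits normally distributed weights, here is a minimal, self-contained sketch of the underlying idea (not the exact NF4 table from the QLoRA paper): build 16 equal-probability quantization levels from the standard normal distribution and round each weight to the nearest level.

```python
from statistics import NormalDist

# Build 16 quantization levels as midpoints of equal-probability bins of
# the standard normal distribution (an approximation of the idea behind
# nf4; the real NF4 table is defined in the QLoRA paper).
norm = NormalDist()
levels = [norm.inv_cdf((i + 0.5) / 16) for i in range(16)]

# Normalize levels to [-1, 1], since nf4 stores values in that range.
max_abs = max(abs(l) for l in levels)
levels = [l / max_abs for l in levels]

def quantize(w, levels):
    """Map a weight in [-1, 1] to the nearest quantization level."""
    return min(levels, key=lambda l: abs(l - w))

weights = [-0.9, -0.2, 0.0, 0.31, 0.85]
quantized = [quantize(w, levels) for w in weights]
```

Because the levels track the normal density, they are dense near zero (where most weights live) and sparse in the tails, which is the intuition for why `nf4` loses less information than a uniform 4-bit grid.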
We provide two Google Colab notebooks to demonstrate the use of 4bit models in inference and fine-tuning. These notebooks are intended to be a starting point for further research and development.
Other examples are found under the examples/ folder.
You can specify the path to your dataset using the --dataset argument. If the --dataset_format argument is not set, it will default to the Alpaca format. Here are a few examples:
python train_qlora.py --dataset="path/to/your/dataset"
python train_qlora.py --dataset="path/to/your/dataset" --dataset_format="self-instruct"
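For reference, an Alpaca-format dataset is a JSON list of records with `instruction`, `input`, and `output` fields (field names follow the Stanford Alpaca release; the file name below is just an example). A minimal sketch of writing such a file:

```python
import json

# Minimal Alpaca-style records: "input" may be an empty string for
# instructions that need no extra context.
records = [
    {
        "instruction": "Translate the sentence to French.",
        "input": "Good morning.",
        "output": "Bonjour.",
    },
    {
        "instruction": "Name three primary colors.",
        "input": "",
        "output": "Red, yellow, and blue.",
    },
]

# Write the dataset as UTF-8 JSON so non-ASCII text survives round-trips.
with open("my_alpaca_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```

You could then point training at this file with `--dataset="my_alpaca_dataset.json"`, relying on the default Alpaca `--dataset_format`.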
Multi-GPU training and inference work out of the box with Hugging Face's Accelerate. Note that the per_device_train_batch_size and per_device_eval_batch_size arguments are global batch sizes, unlike what their names suggest.
When loading a model for training or inference on multiple GPUs you should pass something like the following to AutoModelForCausalLM.from_pretrained():
device_map = "auto"
max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}
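As a sketch, the `max_memory` map can be built without hard-coding the GPU count. The helper below is hypothetical (not part of this repo); `n_gpus` stands in for `torch.cuda.device_count()`, and the optional `gpu0_mb` cap reflects the common practice of leaving headroom on GPU 0 for activation and generation buffers.

```python
# Hypothetical helper: build a per-device memory map for from_pretrained().
def build_max_memory(n_gpus, per_gpu_mb=46000, gpu0_mb=None):
    """Return a {device_index: 'NNNNNMB'} map; optionally cap GPU 0 lower."""
    max_memory = {i: f"{per_gpu_mb}MB" for i in range(n_gpus)}
    if gpu0_mb is not None and n_gpus > 0:
        max_memory[0] = f"{gpu0_mb}MB"  # leave headroom on the first GPU
    return max_memory

# e.g. four GPUs, with GPU 0 capped at 30 GB:
mm = build_max_memory(4, gpu0_mb=30000)
```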
Run the script below to chat with your ChatBot interactively on the command line: type an instruction and press Enter to generate a reply, type `clear`
to clear the conversation history, and type `stop`
to exit the program.
python cli_demo.py \
--model_name_or_path ~/checkpoints/baichuan7b \ # base model
--checkpoint_dir ./work_dir/checkpoint-700 \ # finetuned model weights
--trust_remote_code \
--double_quant \
--quant_type nf4 \
--fp16 \
--bits 4
This file reads the foundation model from the Hugging Face model hub and the LoRA weights from path/to/your/model_dir
, and runs a Gradio interface for inference on a specified input. Users should treat this as example code for the use of the model, and modify it as needed.
Example usage:
python gradio_webserver.py \
--model_name_or_path decapoda-research/llama-7b-hf \
--lora_model_name_or_path `path/to/your/model_dir`
We provide generations for the models described in the paper for both OA and Vicuna queries in the eval/generations
folder. These are intended to foster further research on model evaluation and analysis.
Can you distinguish ChatGPT from Guanaco? Give it a try! You can access the model response Colab here comparing ChatGPT and Guanaco 65B on Vicuna prompts.
Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem.
- `bnb_4bit_compute_type='fp16'`
can lead to instabilities. For 7B LLaMA, only 80% of finetuning runs complete without error. We have solutions, but they are not yet integrated into bitsandbytes.
- Make sure that `tokenizer.bos_token_id = 1`
to avoid generation issues.

Efficient Finetuning of Quantized LLMs
is released under the Apache 2.0 license.
We thank the Huggingface team, in particular Younes Belkada, for their support integrating QLoRA with PEFT and transformers libraries.
We appreciate the work by many open-source contributors, especially:
Please cite the repo if you use the data or code in this repo.
@misc{Chinese-Guanaco,
author = {jianzhnie},
title = {Chinese-Guanaco: Efficient Finetuning of Quantized LLMs for Chinese},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/jianzhnie/Efficient-Tuning-LLMs}},
}