An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
## v0.1.19

- Set `dataloader_num_workers=4` for LLaVA training by @LZHgrla in https://github.com/InternLM/xtuner/pull/611
- Fix `convert_xtuner_weights_to_hf` with frozen ViT by @LZHgrla in https://github.com/InternLM/xtuner/pull/661
- Support `safe_serialization` saving (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/648

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.18...v0.1.19
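`safe_serialization` here refers to the Hugging Face `save_pretrained` flag that writes `.safetensors` checkpoints instead of pickle-based `.bin` files. A minimal sketch of the saving path this enables (the paths are illustrative, not from XTuner):

```python
from transformers import AutoModelForCausalLM

# Load a fine-tuned model; the path is illustrative.
model = AutoModelForCausalLM.from_pretrained("./work_dirs/merged_model")

# safe_serialization=True writes model.safetensors instead of the
# pickle-based pytorch_model.bin.
model.save_pretrained("./hf_model", safe_serialization=True)
```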
## v0.1.18

- Update `default_collate_fn` by @pppppM in https://github.com/InternLM/xtuner/pull/567
- Fix `split_list` to support the value at the beginning (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/568

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.17...v0.1.18
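The `split_list` fix targets the edge case where the split value is the very first element. A hypothetical reimplementation to illustrate the intended behavior (this is not XTuner's actual helper):

```python
def split_list(lst, value):
    """Split lst into sublists at each occurrence of value.

    Hypothetical sketch for illustration; the point is that a
    leading separator must not crash or be silently dropped.
    """
    result, current = [], []
    for item in lst:
        if item == value:
            result.append(current)
            current = []
        else:
            current.append(item)
    result.append(current)
    return result

# The fixed edge case: the separator appears at index 0.
assert split_list([0, 1, 2, 0, 3], 0) == [[], [1, 2], [3]]
```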
## v0.1.17

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.16...v0.1.17
## v0.1.16

- Add `generation_kwargs` for `EvaluateChatHook` (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/501

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.15...v0.1.16
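XTuner configs are mmengine-style Python files, so the new `generation_kwargs` would be set on the hook's config dict. A hedged sketch of the shape (the surrounding fields follow the usual hook pattern, but the exact values are assumptions, and the import path may differ between versions):

```python
from transformers import AutoTokenizer
from xtuner.engine import EvaluateChatHook  # import path may vary by version

tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path='internlm/internlm2-chat-7b',
    trust_remote_code=True,
)

custom_hooks = [
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=500,
        evaluation_inputs=['Please introduce yourself.'],
        # The v0.1.16 addition: extra kwargs forwarded to generate().
        generation_kwargs=dict(max_new_tokens=256, do_sample=True),
    )
]
```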
## v0.1.15

- Add `encoding='utf-8'` (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/477
- Fix `msagent_react_map_fn` error by @LZHgrla in https://github.com/InternLM/xtuner/pull/470
- Update `xtuner/configs/llava/` configs by @LZHgrla in https://github.com/InternLM/xtuner/pull/483

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.14...v0.1.15
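Passing an explicit `encoding='utf-8'` is the standard fix for locale-dependent file reads (e.g. cp1252 on Windows) garbling non-ASCII dataset text. The pattern, with an illustrative file name:

```python
import json

# Without an explicit encoding, open() uses the platform default,
# which can corrupt non-ASCII characters in dataset files.
with open("alpaca_data.json", encoding="utf-8") as f:
    data = json.load(f)
```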
## v0.1.14

- Add `TrainLoop` by @LZHgrla in https://github.com/InternLM/xtuner/pull/348
- Add `--repetition-penalty` for `xtuner chat` by @LZHgrla in https://github.com/InternLM/xtuner/pull/351
- Fix `KeyError` of `encode_fn` by @LZHgrla in https://github.com/InternLM/xtuner/pull/361
- Fix `batch_size` of full fine-tuning LLaVA-InternLM2 by @LZHgrla in https://github.com/InternLM/xtuner/pull/360
- Support `system` for `alpaca_map_fn` by @LZHgrla in https://github.com/InternLM/xtuner/pull/363
- Use `DEFAULT_IMAGE_TOKEN` instead of '<image>' by @LZHgrla in https://github.com/InternLM/xtuner/pull/353
- Add `attention_mask` for `default_collate_fn` by @LZHgrla in https://github.com/InternLM/xtuner/pull/371
- Add `colors_map_fn` to `DATASET_FORMAT_MAPPING` and rename 'internlm_repo' to 'intern_repo' by @HIT-cwh in https://github.com/InternLM/xtuner/pull/372
- Add `intern_repo_dataset.md` by @LZHgrla in https://github.com/InternLM/xtuner/pull/384
- Fix `apply_rotary_pos_emb` by @LZHgrla in https://github.com/InternLM/xtuner/pull/383
- Fix `system` of `alpaca_zh_map_fn` by @LZHgrla in https://github.com/InternLM/xtuner/pull/395
- Support `Qwen1.5` by @LZHgrla in https://github.com/InternLM/xtuner/pull/407
- Rename `--system-prompt` to `--system-template` by @LZHgrla in https://github.com/InternLM/xtuner/pull/406
- Add `output_with_loss` for dataset processing by @LZHgrla in https://github.com/InternLM/xtuner/pull/408
- Support `Gemma` by @PommesPeter in https://github.com/InternLM/xtuner/pull/429
- Support `LengthGroupedSampler` (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/436

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.13...v0.1.14
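Of the entries above, `LengthGroupedSampler` is worth a sketch: it orders samples so each batch contains sequences of similar length, wasting less compute on padding. A hedged illustration of the core idea, not XTuner's actual implementation:

```python
import random

def length_grouped_indices(lengths, batch_size, mega_factor=50):
    """Illustrative sketch of length-grouped sampling (not XTuner's code).

    Shuffle globally for randomness, then sort by length inside large
    "mega-batches" so consecutive batches are length-homogeneous.
    """
    indices = list(range(len(lengths)))
    random.shuffle(indices)
    mega = batch_size * mega_factor
    grouped = []
    for i in range(0, len(indices), mega):
        chunk = indices[i:i + mega]
        chunk.sort(key=lambda idx: lengths[idx], reverse=True)
        grouped.extend(chunk)
    return grouped

# Batches drawn in this order need far less padding than random order.
lengths = [random.randint(10, 512) for _ in range(1000)]
order = length_grouped_indices(lengths, batch_size=8)
```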
## v0.1.13

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.12...v0.1.13
## v0.1.12

- Support `ConcatDataset` by @LZHgrla in https://github.com/InternLM/xtuner/pull/298
- Refactor `prompt_template` by @LZHgrla in https://github.com/InternLM/xtuner/pull/294
- Add `stop_words` by @LZHgrla in https://github.com/InternLM/xtuner/pull/313
- Use `torch.optim.AdamW` as the default optimizer by @LZHgrla in https://github.com/InternLM/xtuner/pull/318
- Support `pth_to_hf` for LLaVA model by @LZHgrla in https://github.com/InternLM/xtuner/pull/316
- Add `demo_data` examples by @LZHgrla in https://github.com/InternLM/xtuner/pull/278
- Support `xtuner xxx` by @pppppM in https://github.com/InternLM/xtuner/pull/307
- Limit the Python version to `>=3.8, <3.11` by @LZHgrla in https://github.com/InternLM/xtuner/pull/327
- Add `trust_remote_code=True` for AutoModel (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/328

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.11...v0.1.12
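`trust_remote_code=True` is required when a checkpoint ships custom modeling code on the Hugging Face Hub, as InternLM models do. A minimal example (the model name is just an illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True lets transformers run the custom modeling
# code bundled with checkpoints such as InternLM.
name = "internlm/internlm-7b"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)
```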
## v0.1.11

- Fix `xtuner train` by @LZHgrla in https://github.com/InternLM/xtuner/pull/272
- Add `warmup` for all configs (see the sketch below) by @LZHgrla in https://github.com/InternLM/xtuner/pull/274

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.10...v0.1.11
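Warmup in XTuner configs follows mmengine's `param_scheduler` convention: a linear ramp followed by cosine decay. A sketch of that pattern (the hyperparameter values are illustrative):

```python
from mmengine.optim import CosineAnnealingLR, LinearLR

max_epochs = 1
warmup_ratio = 0.03  # illustrative: warm up for 3% of training

param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True,
    ),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True,
    ),
]
```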
## v0.1.10

- Add `wizardcoder` template (see the sketch below) by @xiaohangguo in https://github.com/InternLM/xtuner/pull/243

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.9...v0.1.10
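XTuner chat templates are plain dicts of format strings registered in `PROMPT_TEMPLATE`. A hypothetical reconstruction of a WizardCoder-style entry (the field names follow XTuner's template convention, but the exact strings here are assumptions; see `xtuner/utils/templates.py` for the real definition):

```python
# Hypothetical WizardCoder-style template entry, for illustration only.
wizardcoder = dict(
    SYSTEM='{system}\n',
    INSTRUCTION=(
        'Below is an instruction that describes a task. '
        'Write a response that appropriately completes the request.\n\n'
        '### Instruction:\n{input}\n\n### Response:'
    ),
)
```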