
# 📖 The Large Language Model Training Playbook

This playbook is a companion to the LLM Training Handbook, which contains many more details and scripts.

An open collection of implementation tips, tricks and resources for training large language models.

The following sections cover questions on topics that are interesting or challenging when training large language models.

## Deciding on a model architecture

## Deciding on a model parallelism strategy

## Deciding on the model size

### Scaling laws
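
As a taste of what scaling-law reasoning looks like, here is a back-of-the-envelope sketch using two widely cited approximations: training compute of roughly 6 · N · D FLOPs for N parameters and D training tokens, and the Chinchilla-style compute-optimal rule of about 20 training tokens per parameter (Hoffmann et al., 2022). The 7B figure below is a made-up example.

```python
# Back-of-the-envelope compute-optimal sizing, using two widely cited
# approximations: C ≈ 6 * N * D training FLOPs, and a Chinchilla-style
# optimum of D ≈ 20 * N training tokens.

def compute_optimal(n_params: float) -> tuple[float, float]:
    """Return (training tokens, training FLOPs) for a compute-optimal run."""
    tokens = 20 * n_params           # ~20 tokens per parameter
    flops = 6 * n_params * tokens    # forward + backward FLOPs estimate
    return tokens, flops

tokens, flops = compute_optimal(7e9)  # hypothetical 7B-parameter model
print(f"~{tokens / 1e9:.0f}B tokens, ~{flops / 1e21:.1f}e21 training FLOPs")
```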

### Trade-offs of large language model sizes

## What to choose between fp32, fp16 and bf16
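
A minimal sketch of bf16 training in PyTorch, assuming a CUDA device (the tiny model and loss are placeholders): bf16 shares fp32's exponent range, so it usually trains stably without the loss scaling (`torch.cuda.amp.GradScaler`) that fp16 typically requires.

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")      # placeholder batch

# Run the forward pass in bf16; weights and optimizer state stay in fp32.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).pow(2).mean()            # placeholder loss
loss.backward()                              # gradients land in fp32
optimizer.step()
optimizer.zero_grad()
```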

### Mixed precision for optimizers, weights and specific modules
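
A minimal sketch of the classic mixed-precision recipe, assuming bf16 compute weights plus an fp32 "master" copy that the optimizer actually updates. Frameworks such as DeepSpeed and Megatron-LM implement this for you; the model and loss here are placeholders.

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda().to(torch.bfloat16)  # placeholder
master = [p.detach().clone().float() for p in model.parameters()]
optimizer = torch.optim.AdamW(master, lr=1e-4)   # optimizer state in fp32

x = torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16)
loss = model(x).float().pow(2).mean()  # placeholder loss, reduced in fp32
loss.backward()

for p, m in zip(model.parameters(), master):
    m.grad = p.grad.float()            # upcast bf16 gradients to fp32
optimizer.step()                       # update the fp32 master weights
optimizer.zero_grad()
model.zero_grad(set_to_none=True)
with torch.no_grad():
    for p, m in zip(model.parameters(), master):
        p.copy_(m)                     # downcast updated weights back to bf16
```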

### How to fine-tune and integrate a model trained in one precision into another precision
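
A hedged rule of thumb: upcasting is safe, downcasting may not be. In particular, a model pretrained in bf16 can hold values outside fp16's range (max ≈ 65504), so converting it to fp16 for fine-tuning can overflow, while converting to fp32 always preserves the values. A tiny illustration:

```python
import torch

# Placeholder standing in for a model pretrained in bf16.
model = torch.nn.Linear(1024, 1024).to(torch.bfloat16)

# Safe direction: bf16 -> fp32. The reverse, bf16 -> fp16, can overflow,
# since fp16 tops out near 6.5e4 while bf16 shares fp32's exponent range.
model = model.float()
print(next(model.parameters()).dtype)  # torch.float32
```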

## Selecting training hyper-parameters and model initializations

### Learning rate and learning rate schedules
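
A minimal sketch of the linear-warmup plus cosine-decay schedule commonly used for LLM pretraining, written with PyTorch's `LambdaLR` (all step counts and rates below are made-up examples):

```python
import math
import torch

warmup_steps, total_steps, min_ratio = 2_000, 100_000, 0.1  # made-up values

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)        # linear warmup from 0
    progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return min_ratio + (1.0 - min_ratio) * cosine  # decay to 10% of peak LR

optimizer = torch.optim.AdamW(torch.nn.Linear(8, 8).parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# Call scheduler.step() once after every optimizer.step().
```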

### Questions on batch size
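
One recurring batch-size question is how to reach a large effective batch on limited memory; gradient accumulation is the standard answer. A minimal sketch (micro-batch size and accumulation steps are illustrative):

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 8  # effective batch = micro_batch * accum_steps (* data-parallel ranks)

for step in range(32):
    x = torch.randn(4, 1024, device="cuda")      # micro-batch of 4
    loss = model(x).pow(2).mean() / accum_steps  # scale so gradients average
    loss.backward()                              # gradients accumulate
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```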

## Maximizing throughput
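
A common way to quantify throughput is Model FLOPs Utilization (MFU): achieved training FLOPs, estimated as 6 · params · tokens/s, divided by the hardware's peak FLOPs. A back-of-the-envelope sketch with made-up numbers:

```python
# All numbers below are made up for illustration, not measurements.
n_params = 7e9              # hypothetical 7B-parameter model
tokens_per_second = 2.4e4   # measured training throughput across the cluster
peak_flops = 8 * 312e12     # e.g. 8 GPUs at 312 TFLOPs dense bf16 each (A100)

achieved_flops = 6 * n_params * tokens_per_second  # ~6*N FLOPs per token
mfu = achieved_flops / peak_flops
print(f"MFU ≈ {mfu:.1%}")   # here ≈ 40%
```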

## Avoiding, recovering from and understanding instabilities

### Detecting instabilities early
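
One cheap early-warning signal is the global gradient norm: a sudden spike relative to its running average often precedes a loss blow-up. A minimal sketch (the model, threshold and smoothing constant are illustrative):

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
running_norm, spike_factor = None, 4.0        # illustrative threshold

for _ in range(100):
    loss = model(torch.randn(8, 1024, device="cuda")).pow(2).mean()
    loss.backward()
    # clip_grad_norm_ returns the total norm *before* clipping.
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0).item()
    if running_norm is not None and grad_norm > spike_factor * running_norm:
        print(f"grad-norm spike: {grad_norm:.2f} vs running avg {running_norm:.2f}")
    running_norm = grad_norm if running_norm is None else 0.9 * running_norm + 0.1 * grad_norm
    optimizer.step()
    optimizer.zero_grad()
```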

### Training tips to reduce instabilities
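
One simple defensive tip, sketched below: combine gradient clipping with skipping any update whose loss is not finite, so a single bad batch cannot corrupt the optimizer state (the model and data are placeholders):

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()                # placeholder loss
if torch.isfinite(loss):
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
else:
    print("non-finite loss, skipping this update")
optimizer.zero_grad()
```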

## Issues with data and data processing

## Debugging software and hardware failures

## Tips on what metrics to follow during training

## Resources
