veScale

A PyTorch Native LLM Training Framework

An Industrial-Level Framework for Ease of Use

  • 🔥 PyTorch Native: veScale is rooted in PyTorch-native data structures, operators, and APIs, enjoying the ecosystem of PyTorch that dominates the ML world.

  • 🛡 Zero Model Code Change: veScale decouples distributed system design from model architecture, requiring zero or near-zero modification to the user's model code.

  • 🚀 Single Device Abstraction: veScale provides single-device semantics to users, automatically distributing and orchestrating model execution across a cluster of devices (see the sketches after this list).

  • 🎯 Automatic Parallelism Planning: veScale parallelizes model execution with a synergy of strategies (tensor, sequence, data, ZeRO, and pipeline parallelism) under semi- or full automation [coming soon].

  • ⚡ Eager & Compile Mode: veScale supports not only Eager-mode automation for parallel training and inference but also Compile-mode for ultimate performance [coming soon].

  • 📀 Automatic Checkpoint Resharding: veScale manages distributed checkpoints automatically, with online resharding across different cluster sizes and different parallelism strategies.
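
To make the single-device abstraction concrete, here is a minimal sketch using PyTorch's upstream DTensor API, which veScale is rooted in; veScale ships its own DTensor implementation, so its exact entry points may differ, and the mesh size and tensor shapes below are illustrative.

```python
# Minimal sketch of the single-device abstraction, illustrated with
# PyTorch's upstream DTensor API (veScale ships its own DTensor, whose
# entry points may differ). Run under torchrun, e.g.:
#   torchrun --nproc_per_node=4 dtensor_sketch.py
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed._tensor import distribute_tensor, Shard, Replicate

# Describe the cluster as a 1-D mesh of 4 devices.
mesh = init_device_mesh("cuda", (4,))

# Ordinary single-device tensors: no distributed logic so far.
weight = torch.randn(1024, 1024)
activation = torch.randn(8, 1024)

# Distribute them: the weight is sharded along dim 0 across the mesh,
# while the activation is replicated on every device.
w = distribute_tensor(weight, mesh, placements=[Shard(0)])
x = distribute_tensor(activation, mesh, placements=[Replicate()])

# Reads like a single-device matmul; executes as a sharded one, with
# the necessary communication inserted automatically.
y = x @ w.t()
print(y.shape)  # logical (global) shape: torch.Size([8, 1024])
```

A companion sketch of the zero-model-code-change idea, again using PyTorch's upstream tensor-parallel API purely for illustration (veScale's plan-based API is not shown here and may differ): the model is written as plain single-device code, and the parallelization plan is applied entirely from the outside.

```python
# Zero model code change, sketched with PyTorch's upstream
# tensor-parallel API; the module itself never mentions devices,
# ranks, or collectives.
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel, RowwiseParallel, parallelize_module,
)

mesh = init_device_mesh("cuda", (4,))

# An unmodified, single-device MLP.
mlp = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

# The "plan" lives outside the model: shard the first Linear
# column-wise and the second row-wise across the mesh.
mlp = parallelize_module(
    mlp, mesh, {"0": ColwiseParallel(), "2": RowwiseParallel()}
)
```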

Coming Soon

veScale is still in its early phase. We are refactoring our internal LLM training system components to meet open-source standards. The tentative timeline is as follows:

  • By the end of May: a fast checkpointing system (a resharding sketch follows this list).

  • By the end of July: a CUDA event monitor, pipeline parallelism, and supporting components for large-scale training.
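
The released checkpoint system is not needed to see the resharding idea. Below is a minimal sketch using PyTorch's upstream torch.distributed.checkpoint (DCP), which can also re-cut shards on load; veScale's own checkpoint API is not shown here and may differ, and the path and world sizes are illustrative.

```python
# Sketch of online checkpoint resharding via PyTorch's upstream
# torch.distributed.checkpoint (DCP). veScale's own checkpoint API is
# not shown; DCP is used only to illustrate the concept.
import torch
import torch.distributed.checkpoint as dcp
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed._tensor import distribute_tensor, Shard

CKPT_DIR = "/tmp/ckpt_sketch"  # illustrative path

# Run 1 -- save from a 4-GPU job (torchrun --nproc_per_node=4):
mesh = init_device_mesh("cuda", (4,))
w = distribute_tensor(torch.randn(1024, 1024), mesh, [Shard(0)])
dcp.save({"w": w}, checkpoint_id=CKPT_DIR)  # each rank writes its shard

# Run 2 -- load into a 2-GPU job (torchrun --nproc_per_node=2):
# mesh2 = init_device_mesh("cuda", (2,))
# w2 = distribute_tensor(torch.zeros(1024, 1024), mesh2, [Shard(0)])
# dcp.load({"w": w2}, checkpoint_id=CKPT_DIR)  # shards re-cut to the new mesh
```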

Table of Contents (web view)

  • Introduction

  • Quick Start

  • DTensor

  • Parallel

  • Plan

  • Checkpoint

We Are Hiring!

License

The veScale Project is under the Apache License v2.0.
