hpcaitech/FastFold Versions

Optimizing AlphaFold Training and Inference on GPU Clusters

0.2.0

1 year ago

Overview

Hi, here is FastFold v0.2.0. Compared to the previous version, inference is now faster and uses less memory, and multimer inference is supported.

What's Changed

  1. Saves up to 75% of GPU memory, making it possible to run inference on sequences of more than 10,000 residues in bf16 (see the sketch after this list).
  2. Improved softmax and LayerNorm kernels based on Triton, at least 25% faster than the previous version.
  3. Faster data processing: about 3x faster for monomers and about 3Nx faster for multimers with N sequences.
  4. Support for multimer inference.
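For item 1, the bf16 memory savings come from running the forward pass in reduced precision. Below is a minimal sketch of bf16 inference using PyTorch autocast; `model` and `batch` are placeholders, and this is not FastFold's own inference script, just an illustration of the technique.

    import torch

    def run_inference_bf16(model: torch.nn.Module, batch: dict) -> dict:
        """Run a forward pass under bf16 autocast to cut activation memory."""
        model.eval()
        with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            # Autocast casts eligible ops to bf16 while weights can stay in fp32;
            # the reduced activation footprint is what lets very long sequences
            # (e.g. >10,000 residues) fit in GPU memory.
            return model(batch)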

Installation

  1. Build from source
  2. Conda (recommended)
  3. PyPI
  4. Docker

Have a nice trip!

0.1.0

1 year ago

Overview

Hi, here is FastFold v0.1.0. Compared to the previous version, data processing is much faster. In addition, FastFold can now be installed with pip, and a Docker image is also available.

Features

  1. Excellent kernel performance on GPU platforms
  2. Support for Dynamic Axial Parallelism (DAP)
    • Breaks the memory limit of a single GPU and reduces overall training time
    • DAP can significantly speed up inference and makes ultra-long-sequence inference possible
  3. Ease of use
    • Huge performance gains with only a few lines of code changed (see the sketch after this list)
    • You don't need to care about how the parallel parts are implemented
  4. Faster data processing, about 3x faster than the original pipeline
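As an illustration of the "few lines of code" point in item 3, here is a minimal sketch of what enabling FastFold on an existing model might look like. The helper names `init_dap` and `inject_fastnn` are assumptions modeled on the DAP and kernel features described above, not a confirmed API.

    import torch

    # Assumed imports: module paths are illustrative and may not match
    # FastFold's actual layout.
    from fastfold.distributed import init_dap
    from fastfold.utils.inject_fastnn import inject_fastnn

    def enable_fastfold(model: torch.nn.Module, dap_size: int = 2) -> torch.nn.Module:
        """Wrap an AlphaFold-style model with FastFold's DAP and fast kernels."""
        init_dap(dap_size)            # set up Dynamic Axial Parallelism across GPUs
        model = inject_fastnn(model)  # swap in the optimized attention modules
        return model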

Installation

  1. Build from source
  2. Conda (recommended)
  3. PyPI
  4. Docker

Have a nice trip!

0.1.0-beta

2 years ago