FasterTransformer Versions

Transformer-related optimization, including BERT and GPT.

release/v5.3_tag (1 year ago)

release/v5.2.1_tag (1 year ago)

Fix some bugs of v5.2.

release/v5.2_bug_fix_tag (1 year ago)

release/v5.2_tag (1 year ago)

release/v5.1.1_tag (1 year ago)

1. Fix the stop criterion.
2. Fix the attention mask selection when the shared context optimization is enabled.
3. Fix the Swin QK scale.
4. Fix the repetition penalty of T5 under beam search.
5. Fix a bug in gpt_guide.md.
6. Fix a bug in decoder_masked_multihead_attention_template.

release/v5.1_bugfix_tag (1 year ago)

Fix a bug in the model parallelism setting of T5 in v5.1.

release/v5.1_tag (1 year ago)

release/v5.0_tag (2 years ago)

release/v3.1_tag (2 years ago)

release/v4.0_tag (3 years ago)