This repository provides the latest pretrained language models and their related optimization techniques developed by Huawei Noah's Ark Lab.
Directory structure
PanGu-α is a large-scale autoregressive pretrained Chinese language model with up to 200 billion parameters. The models are developed under MindSpore and trained on a cluster of Ascend 910 AI processors.
NEZHA-TensorFlow is a pretrained Chinese language model developed under TensorFlow that achieves state-of-the-art performance on several Chinese NLP tasks.
DynaBERT is a dynamic BERT model with adaptive width and depth (see the width-slicing sketch after this list).
BBPE provides a byte-level vocabulary building tool and its corresponding tokenizer (see the byte-level tokenization sketch after this list).
PMLM is a probabilistically masked language model (see the masking sketch after this list). Trained without the complex two-stream self-attention, PMLM can be treated as a simple approximation of XLNet.
TernaryBERT is a weight ternarization method for BERT developed under PyTorch (see the ternarization sketch after this list).
HyperText is an efficient text classification model based on hyperbolic geometry (see the Poincaré-distance sketch after this list).
BinaryBERT is a weight binarization method for BERT that uses ternary weight splitting, developed under PyTorch (see the splitting sketch after this list).
AutoTinyBERT provides a model zoo that can meet different latency requirements.
PanGu-Bot is a Chinese pre-trained open-domain dialog model built on the GPU implementation of PanGu-α.
CeMAT is a universal sequence-to-sequence multilingual pre-trained language model for both autoregressive and non-autoregressive neural machine translation tasks.
Noah_WuKong is a large-scale Chinese vision-language dataset and a group of benchmarking models trained on it.
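For DynaBERT, here is a minimal sketch of width-adaptive inference in PyTorch. It assumes the most important neurons have already been rewired to the front of each layer (which is what DynaBERT's training procedure arranges); `slice_linear` is a hypothetical helper, not part of the released code.

```python
import torch
import torch.nn as nn

def slice_linear(layer: nn.Linear, width_mult: float) -> nn.Linear:
    """Return a narrower copy of `layer` that keeps only the leading output neurons."""
    out_features = max(1, int(layer.out_features * width_mult))
    narrow = nn.Linear(layer.in_features, out_features, bias=layer.bias is not None)
    with torch.no_grad():
        narrow.weight.copy_(layer.weight[:out_features])
        if layer.bias is not None:
            narrow.bias.copy_(layer.bias[:out_features])
    return narrow

full = nn.Linear(768, 3072)      # a BERT feed-forward projection
half = slice_linear(full, 0.5)   # adaptive width 0.5x -> 1536 neurons
print(half.weight.shape)         # torch.Size([1536, 768])
```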
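For BBPE, here is a minimal sketch of the byte-level fallback that byte-level tokenizers build on: any UTF-8 string decomposes into tokens from a 256-entry base vocabulary, so no text is out-of-vocabulary. The helper names are hypothetical, and the learned BPE merges that sit on top of the byte vocabulary are omitted.

```python
def to_byte_tokens(text: str) -> list[int]:
    """Encode text as UTF-8 and treat each byte as a base token id (0-255)."""
    return list(text.encode("utf-8"))

def from_byte_tokens(ids: list[int]) -> str:
    """Invert the encoding; any text round-trips without <unk> tokens."""
    return bytes(ids).decode("utf-8")

ids = to_byte_tokens("预训练")   # 3 Chinese characters -> 9 byte tokens
print(ids)
print(from_byte_tokens(ids))     # 预训练
```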
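For PMLM, here is a minimal sketch of probabilistic masking, assuming the masking ratio is drawn per sequence from a uniform prior (as in the paper's u-PMLM variant); the token ids and `MASK_ID` below are hypothetical.

```python
import random

MASK_ID = 103  # hypothetical [MASK] token id

def probabilistic_mask(token_ids: list[int]) -> tuple[list[int], list[int]]:
    """Sample a ratio r ~ Uniform(0, 1), then mask each position with probability r."""
    r = random.random()
    masked, targets = [], []
    for tok in token_ids:
        if random.random() < r:
            masked.append(MASK_ID)
            targets.append(tok)    # the model predicts the original token here
        else:
            masked.append(tok)
            targets.append(-100)   # position ignored by the loss
    return masked, targets

masked, targets = probabilistic_mask([5, 17, 42, 7, 23, 99])
```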
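For TernaryBERT, here is a minimal sketch of weight ternarization in the style of ternary weight networks (threshold at 0.7 times the mean absolute weight); TernaryBERT's actual method additionally uses knowledge distillation, which is omitted here.

```python
import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    """Quantize w to {-alpha, 0, +alpha} with a per-tensor scale alpha."""
    delta = 0.7 * w.abs().mean()                 # threshold for zeroing small weights
    mask = (w.abs() > delta).float()             # positions kept non-zero
    alpha = (w.abs() * mask).sum() / mask.sum()  # scale = mean |w| above the threshold
    return alpha * torch.sign(w) * mask

w = torch.randn(4, 4)
print(ternarize(w))   # every entry is -alpha, 0, or +alpha
```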
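For HyperText, here is a minimal sketch of the Poincaré-ball distance that hyperbolic models build on; the embedding points below are made up for illustration.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

u = np.array([0.1, 0.2])    # points inside the unit ball
v = np.array([0.4, -0.3])
print(poincare_distance(u, v))
```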
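For BinaryBERT, here is a minimal sketch of the identity behind ternary weight splitting: every ternary weight in {-a, 0, +a} is the sum of two binary weights in {-a/2, +a/2}. BinaryBERT also splits the latent full-precision weights so training can continue; only the quantized identity is shown, and `split_ternary` is a hypothetical helper.

```python
import torch

def split_ternary(t: torch.Tensor, alpha: float) -> tuple[torch.Tensor, torch.Tensor]:
    """Return binary b1, b2 in {-alpha/2, +alpha/2} with b1 + b2 == t."""
    half = alpha / 2
    b1 = torch.full_like(t, half)
    b1[t < 0] = -half     # -a -> (-a/2, ...); 0 and +a -> (+a/2, ...)
    b2 = torch.full_like(t, half)
    b2[t <= 0] = -half    # 0 -> (+a/2, -a/2) sums to zero; +a -> (+a/2, +a/2)
    return b1, b2

t = torch.tensor([-1.0, 0.0, 1.0])   # ternary weights with alpha = 1
b1, b2 = split_ternary(t, alpha=1.0)
assert torch.equal(b1 + b2, t)
```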