
MobileBERT_pytorch

This repository contains a PyTorch implementation of the MobileBERT model from the paper

MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices

by Zhiqing Sun, Hongkun Yu, Xiaodan Song, et al.

Dependencies

  • pytorch=1.10
  • cuda=9.0
  • cudnn=7.5
  • scikit-learn
  • sentencepiece
  • tokenizers

Download Pre-trained Models of English

Official download links: google mobilebert

Fine-tuning

1. Place config.json and vocab.txt into the prev_trained_model/mobilebert directory. Example:

├── prev_trained_model
|  └── mobilebert
|  |  └── pytorch_model.bin
|  |  └── config.json
|  |  └── vocab.txt

2. Convert the MobileBERT TF checkpoint to PyTorch:

python convert_mobilebert_tf_checkpoint_to_pytorch.py \
    --tf_checkpoint_path=./prev_trained_model/mobilebert \
    --mobilebert_config_file=./prev_trained_model/mobilebert/config.json \
    --pytorch_dump_path=./prev_trained_model/mobilebert/pytorch_model.bin
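The core of a TF-to-PyTorch conversion like this is renaming each TensorFlow checkpoint variable to the PyTorch parameter it loads into. The sketch below illustrates only that name-mapping idea; the actual convert_mobilebert_tf_checkpoint_to_pytorch.py handles many more cases (embeddings, bottleneck layers, kernel transposes), and the exact variable names shown are illustrative assumptions, not taken from the real checkpoint.

```python
import re

def tf_to_pt_name(tf_name):
    """Map a TF checkpoint variable name to a PyTorch-style parameter name.

    Illustrative sketch only; the repository's conversion script covers
    many more cases than these few rewrite rules.
    """
    name = tf_name
    # Strip the leading scope used in the TF checkpoint.
    name = re.sub(r"^(mobile)?bert/", "", name)
    # TF names layers "layer_0"; PyTorch modules use "layer.0".
    name = re.sub(r"layer_(\d+)", r"layer.\1", name)
    # TF dense weights are called "kernel"; PyTorch uses "weight".
    name = name.replace("/kernel", "/weight")
    # TF LayerNorm parameters are "gamma"/"beta"; PyTorch uses "weight"/"bias".
    name = name.replace("/gamma", "/weight").replace("/beta", "/bias")
    # Scope separators become attribute dots.
    return name.replace("/", ".")

print(tf_to_pt_name("bert/encoder/layer_3/attention/self/query/kernel"))
# encoder.layer.3.attention.self.query.weight
```

After renaming, the converter copies each variable's array into the matching entry of the PyTorch model's state dict and saves it as pytorch_model.bin.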

The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.

Before running any of these GLUE tasks, you should download the GLUE data by running this script and unpack it to some directory $DATA_DIR.

3. Run sh scripts/run_classifier_sst2.sh to fine-tune the MobileBERT model.

Result

Performance of MobileBERT on the GLUE benchmark (single-model setup, dev sets):

|            | CoLA              | SST-2    | STS-B   |
|------------|-------------------|----------|---------|
| metric     | matthews_corrcoef | accuracy | pearson |
| mobilebert | 0.5837            | 0.922    | 0.8839  |
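The CoLA score above is a Matthews correlation coefficient; given the scikit-learn dependency, the evaluation presumably uses sklearn.metrics.matthews_corrcoef. A minimal pure-Python version for binary labels, as a sketch of what that metric computes:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary (0/1) labels.

    Minimal sketch; in practice use sklearn.metrics.matthews_corrcoef.
    """
    # Confusion-matrix counts.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally 0 when any marginal count is zero.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 0]))
```

Unlike plain accuracy, this score stays informative on the heavily class-imbalanced CoLA dev set, which is why GLUE reports it for that task.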
README source: lonePatient/MobileBert_PyTorch
License: MIT