
Decomposable-Attention

A TensorFlow implementation of Parikh et al.'s "A Decomposable Attention Model for Natural Language Inference" (EMNLP 2016).
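
For reference, the model's attend / compare / aggregate steps fit in a few lines of TensorFlow. The sketch below is illustrative rather than this repo's actual code: it assumes the premise a and hypothesis b are already embedded as [batch, length, dim] tensors, uses single dense layers where the paper uses two-layer feed-forward networks F, G, and H, and omits padding masks for brevity.

import tensorflow as tf

def decomposable_attention(a, b, hidden_size, num_classes):
    # Attend: e[i, j] = F(a_i) . F(b_j), then soft-align each sentence
    # to the other with softmaxes over the rows/columns of e.
    f_a = tf.layers.dense(a, hidden_size, activation=tf.nn.relu, name='F')
    f_b = tf.layers.dense(b, hidden_size, activation=tf.nn.relu, name='F', reuse=True)
    e = tf.matmul(f_a, f_b, transpose_b=True)                        # [batch, len_a, len_b]
    beta = tf.matmul(tf.nn.softmax(e), b)                            # b aligned to each a_i
    alpha = tf.matmul(tf.nn.softmax(tf.transpose(e, [0, 2, 1])), a)  # a aligned to each b_j
    # Compare: process each token jointly with its aligned counterpart.
    v_a = tf.layers.dense(tf.concat([a, beta], -1), hidden_size, activation=tf.nn.relu, name='G')
    v_b = tf.layers.dense(tf.concat([b, alpha], -1), hidden_size, activation=tf.nn.relu, name='G', reuse=True)
    # Aggregate: sum over tokens, then classify with a final layer.
    v = tf.concat([tf.reduce_sum(v_a, 1), tf.reduce_sum(v_b, 1)], -1)
    return tf.layers.dense(v, num_classes, name='H')                 # logits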

Dataset

The dataset used for this task is the Stanford Natural Language Inference (SNLI) corpus. Word vectors are initialized with pretrained GloVe embeddings trained on Common Crawl (840B tokens).
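
A typical way to build the embedding matrix from the GloVe file (a sketch; the file name glove.840B.300d.txt and the vocab mapping are assumptions for illustration, not this repo's code):

import numpy as np

def load_glove(path, vocab, dim=300):
    # vocab: dict mapping word -> row index; words missing from GloVe
    # keep a small random initialization.
    embeddings = np.random.uniform(-0.05, 0.05, (len(vocab), dim)).astype(np.float32)
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:   # skips rare malformed lines
                embeddings[vocab[word]] = np.asarray(values, dtype=np.float32)
    return embeddings

# embedding_matrix = load_glove('glove.840B.300d.txt', vocab)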

Requirements

  • Python>=3
  • NumPy
  • TensorFlow>=1.8

Usage

Download the dataset from Stanford Natural Language Inference, then move snli_1.0_train.jsonl, snli_1.0_dev.jsonl, and snli_1.0_test.jsonl into ./SNLI/raw data.

# move dataset to the right place
mkdir -p ./SNLI/raw\ data
mv snli_1.0_*.jsonl ./SNLI/raw\ data

Preprocess the data to convert the source files into an easy-to-use format:

python3 Utils.py
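
The conversion Utils.py performs is roughly the following (a sketch using the field names from the SNLI release; the script's actual output format may differ):

import json

def read_snli(path):
    examples = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            ex = json.loads(line)
            if ex['gold_label'] == '-':   # skip pairs with no consensus label
                continue
            examples.append((ex['sentence1'], ex['sentence2'], ex['gold_label']))
    return examples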

Default hyper-parameters are stored in the config file at ./config/config.yaml.
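
The config can be read with PyYAML (a sketch; the actual keys in ./config/config.yaml are defined by the repo and not shown here):

import yaml

with open('./config/config.yaml') as f:
    config = yaml.safe_load(f)   # e.g. config['batch_size'], if such a key exists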

Train the model:

python3 Train.py

Test the model:

python3 Test.py

Results

Decomposable Attention   Reported   Our Experiments
Accuracy                 86.3%      85.4%