Interpretable Adversarial Perturbation

Code for Interpretable Adversarial Perturbation in Input Embedding Space for Text, IJCAI 2018.

This code reproduces the results of our paper using Chainer.

Setup Environment

Please install Chainer and CuPy.

You can set up the environment easily by following Setup.md.
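If you prefer a manual install, a minimal setup might look like the following (the exact `cupy-cuda*` package name depends on your CUDA version; `cupy-cuda100` below is only an example, not a requirement of this repo):

```shell
# Install Chainer and CuPy. Pick the cupy-cuda* wheel that matches
# your local CUDA toolkit version; cupy-cuda100 is just an example.
pip install chainer
pip install cupy-cuda100
```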

Download Pretrained Model

Please download the pre-trained language model from http://sato-motoki.com/research/vat/imdb_pretrained_lm_ijcai.model:

$ wget http://sato-motoki.com/research/vat/imdb_pretrained_lm_ijcai.model

Run

Pretrain

$ python -u pretrain.py -g 0 --layer 1 --dataset imdb --bproplen 100 --batchsize 32 --out results_imdb_adaptive --adaptive-softmax

Note that this command takes about 30 hours on a single GPU.

Train (iVAT: Interpretable Semi-supervised setting)

Please add --use_semi_data 1 and --use_attn_d 1 to use iVAT (ours).

$ python train.py --gpu=0 --n_epoch=30 --batchsize 32 --save_name=imdb_model_vat --lower=0 --use_adv=0 --xi_var=15.0  --use_unlabled=1 --alpha=0.001 --alpha_decay=0.9998 --min_count=1 --ignore_unk=1 --pretrained_model imdb_pretrained_lm_ijcai.model --use_exp_decay=1 --clip=5.0 --batchsize_semi 96 --use_semi_data 1 --use_attn_d 1 

Train (VAT: Semi-supervised setting)

Please add --use_semi_data 1 to use VAT.

$ python train.py --gpu=0 --n_epoch=30 --batchsize 32 --save_name=imdb_model_vat --lower=0 --use_adv=0 --xi_var=5.0  --use_unlabled=1 --alpha=0.001 --alpha_decay=0.9998 --min_count=1 --ignore_unk=1 --pretrained_model imdb_pretrained_lm_ijcai.model --use_exp_decay=1 --clip=5.0 --batchsize_semi 96 --use_semi_data 1

Note that this command takes about 8 hours on a single GPU.

Train (iAdv: Interpretable Supervised setting)

Please add --use_adv 1 and --use_attn_d 1 to use iAdv.

$ python train.py --gpu=0 --n_epoch=30 --batchsize 32 --save_name=imdb_model_adv --lower=0 --use_adv 1 --xi_var=15.0  --use_unlabled=1 --alpha=0.001 --alpha_decay=0.9998 --min_count=1 --ignore_unk=1 --pretrained_model imdb_pretrained_lm_ijcai.model --use_exp_decay=1 --clip=5.0 
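The core idea behind iAdv (and iVAT) is to restrict the adversarial perturbation of each input word embedding to a weighted combination of directions pointing toward other vocabulary words, so the perturbation can be read as "moving toward word k". A rough NumPy toy of that direction-constrained perturbation (not the repo's Chainer code; the function and the uniform random weights are ours for illustration):

```python
# Hypothetical sketch of a direction-constrained perturbation:
# the perturbation for word i is a weighted sum of (w_k - w_i)
# direction vectors, rescaled to a fixed L2 norm eps.
import numpy as np

def interpretable_perturbation(emb_matrix, word_id, alpha, eps=1.0):
    """emb_matrix: (V, D) embedding table.
    word_id:    index i of the current input word.
    alpha:      (V,) non-negative weights over vocabulary words
                (in the paper these are derived from the gradient).
    """
    directions = emb_matrix - emb_matrix[word_id]   # (V, D): w_k - w_i
    r = alpha @ directions                          # weighted sum of directions
    return eps * r / (np.linalg.norm(r) + 1e-12)    # rescale to norm eps

rng = np.random.default_rng(0)
E = rng.normal(size=(10, 4))    # toy vocabulary: 10 words, dim 4
alpha = rng.random(10)
alpha /= alpha.sum()            # toy weights; the paper computes these from gradients
r = interpretable_perturbation(E, word_id=3, alpha=alpha, eps=0.5)
print(round(float(np.linalg.norm(r)), 6))
```

Because each component of `alpha` weights a direction toward a concrete word, the largest weights identify which vocabulary words the perturbation moves the input toward, which is what `visualize.py` renders.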

Train (Adv: Supervised setting)

Please add --use_adv 1 to use Adv.

$ python train.py --gpu=0 --n_epoch=30 --batchsize 32 --save_name=imdb_model_adv --lower=0 --use_adv 1 --xi_var=5.0  --use_unlabled=1 --alpha=0.001 --alpha_decay=0.9998 --min_count=1 --ignore_unk=1 --pretrained_model imdb_pretrained_lm_ijcai.model --use_exp_decay=1 --clip=5.0

Note that this command takes about 6 hours on a single GPU.
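For comparison, plain Adv/VAT uses the standard unconstrained perturbation: take the gradient of the loss with respect to the input embeddings and step a fixed distance along it. A NumPy toy of that step (not the repo's Chainer code; `--xi_var` plays a comparable norm-scaling role in train.py):

```python
# Hypothetical sketch of the standard adversarial perturbation:
# an L2-normalized gradient direction scaled to norm eps.
import numpy as np

def adversarial_perturbation(grad, eps):
    return eps * grad / (np.linalg.norm(grad) + 1e-12)

g = np.array([3.0, 4.0])                  # toy gradient w.r.t. an embedding
r = adversarial_perturbation(g, eps=5.0)  # norm eps, along g
print(r)
```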

Visualize Example

Visualize (iAdv)

Please replace [trained_model_filename] with the path to your trained model.

python analysis.py --gpu 0 --n_epoch 1 --load_trained_lstm [trained_model_filename] --use_attn_d=1 --use_adv 1 --analysis_mode 1 --lower 0 --batchsize 1 --save_name visualize_imdb_baseline.pickle
python visualize.py --pickle_filename visualize_imdb_baseline.pickle --savefig_dir figs

Acknowledgments

We thank Takeru Miyato (@takerum), who suggested that we reproduce the results of [Miyato et al., 2017].

Reference

[Miyato et al., 2017]: Takeru Miyato, Andrew M. Dai and Ian Goodfellow.
Adversarial Training Methods for Semi-Supervised Text Classification.
International Conference on Learning Representations (ICLR), 2017.

[Sato et al., 2018]: Motoki Sato, Jun Suzuki, Hiroyuki Shindo and Yuji Matsumoto.
Interpretable Adversarial Perturbation in Input Embedding Space for Text.
IJCAI-ECAI, 2018.