Byte Cup 2018 International Machine Learning Contest (3rd prize)
The Byte Cup 2018 International Machine Learning Contest is a competition on automatically generating titles for given articles. All training, validation and test data come from TopBuzz (a Bytedance product) and other open sources. For this competition we built a hybrid extractive-abstractive architecture trained with reinforcement learning (RL): an extractor agent first selects salient sentences, and an abstractive network then rewrites the extracted sentences. Sentence saliency is learned with an actor-critic policy gradient, using dropout for regularization.
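The extract-then-rewrite pipeline can be sketched as below. This is an illustrative toy, not the repo's actual implementation: `extractor_score` and `rewrite` stand in for the trained extractor agent and abstractor network.

```python
def summarize(article_sents, extractor_score, rewrite, k=2):
    # extractor agent: score each sentence and pick the top-k salient ones
    ranked = sorted(range(len(article_sents)),
                    key=lambda i: extractor_score(article_sents[i]),
                    reverse=True)
    selected = sorted(ranked[:k])  # restore original sentence order
    # abstractor: rewrite each extracted sentence
    return [rewrite(article_sents[i]) for i in selected]
```

In the real model both components are neural networks and the top-k selection is a learned stopping policy; here plain Python callables keep the control flow visible.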
Dataset
We follow the instructions here for preprocessing the dataset. We also clean the data by removing duplicates (i.e., articles whose content and title are both identical) and stripping invalid characters (e.g., URLs, image captions, JavaScript strings). After preprocessing, the data files train, val and test and the vocabulary file vocab_cnt.pkl are located in a specified data directory, e.g. ./bytecup/finished_files/.
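The cleaning step described above might look like the following minimal sketch (assumed article format: dicts with `content` and `title` keys; the URL pattern is illustrative, not the repo's actual regex).

```python
import re

URL_RE = re.compile(r"https?://\S+")

def clean_text(text):
    # strip URLs and collapse runs of whitespace
    text = URL_RE.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

def dedup(articles):
    # keep only the first article for each (content, title) pair
    seen, kept = set(), []
    for art in articles:
        key = (art["content"], art["title"])
        if key not in seen:
            seen.add(key)
            kept.append(art)
    return kept
```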
Pretrain word embeddings
python3 train_word2vec.py --data=./bytecup/finished_files --path=./bytecup/models/word2vec
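The pretrained vectors (e.g. word2vec.300d.332k.bin above) are later used to initialize the models' embedding layers. A sketch of that lookup, with `w2v` standing in for the loaded word-vector table and random initialization assumed for out-of-vocabulary words:

```python
import random

def build_embedding_matrix(vocab, w2v, dim, seed=0):
    # vocab: id -> word; w2v: word -> pretrained vector
    rng = random.Random(seed)
    matrix = []
    for i in range(len(vocab)):
        word = vocab[i]
        if word in w2v:
            matrix.append(list(w2v[word]))
        else:
            # OOV words get a small random initialization
            matrix.append([rng.uniform(-0.1, 0.1) for _ in range(dim)])
    return matrix
```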
Make extraction labels
python3 make_extraction_labels.py --data=./bytecup/finished_files
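Since the corpus has no gold extraction labels, pseudo-labels are derived by matching each reference sentence to its most similar article sentence. A simplified sketch using LCS-based ROUGE-L recall (the actual script's scoring may differ):

```python
def lcs_len(a, b):
    # longest common subsequence length via dynamic programming
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def make_labels(art_sents, abs_sents):
    # for each reference sentence, label the article sentence with the
    # highest ROUGE-L recall as the extraction target
    labels = []
    for abst in abs_sents:
        scores = [lcs_len(abst, s) / max(len(abst), 1) for s in art_sents]
        labels.append(max(range(len(scores)), key=scores.__getitem__))
    return labels
```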
Train the abstractor and extractor
python3 train_abstractor.py --data=./bytecup/finished_files --path=./bytecup/models/abstractor --w2v=./bytecup/models/word2vec/word2vec.300d.332k.bin
python3 train_extractor.py --data=./bytecup/finished_files --path=./bytecup/models/extractor --w2v=./bytecup/models/word2vec/word2vec.300d.332k.bin
Train the full RL model
python3 train_full_rl.py --data=./bytecup/finished_files --path=./bytecup/models/save --abs_dir=./bytecup/models/abstractor --ext_dir=./bytecup/models/extractor
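The RL stage updates the extractor with an actor-critic policy gradient: the actor's extraction actions are rewarded by the summary score, and a critic baseline reduces variance. A minimal sketch of the per-action loss (illustrative only; the real objective includes additional terms):

```python
def policy_gradient_loss(log_probs, rewards, baselines):
    # advantage = reward - critic baseline; minimize -advantage * log_prob
    # averaged over the extraction actions of one episode
    assert len(log_probs) == len(rewards) == len(baselines)
    loss = 0.0
    for lp, r, b in zip(log_probs, rewards, baselines):
        loss += -(r - b) * lp
    return loss / len(log_probs)
```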
Decode
python3 decode_full_model.py --data=./bytecup/finished_files --path=./bytecup/output --model_dir=./bytecup/models/save --[val/test]
(pass --val or --test to decode the corresponding split)
Generate submission files
python3 commit_data.py --decode_dir=./bytecup/output --result_dir=./bytecup/result
References
[1] Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting (ACL 2018)
[2] Global Encoding for Abstractive Summarization (ACL 2018)
[3] Regularizing and Optimizing LSTM Language Models (arXiv 2017)