:page_facing_up: A PyTorch implementation of Paragraph Vectors (doc2vec).
All models minimize the Negative Sampling objective as proposed by T. Mikolov et al. [1]. This provides scope for sparse updates (i.e. only vectors of sampled noise words are used in forward and backward passes). In addition to that, batches of training data (with noise sampling) are generated in parallel on a CPU while the model is trained on a GPU.
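The objective can be sketched as follows. This is a minimal illustration of the negative-sampling loss, not the library's own implementation; the shape convention (true target score in column 0, noise word scores in the remaining columns) is an assumption made for the example:

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(scores):
    """Negative-sampling objective (illustrative sketch).

    `scores` has shape (batch_size, 1 + num_noise_words): column 0 holds
    the similarity score of the true target word, the remaining columns
    hold scores of the sampled noise words.
    """
    # maximize log sigmoid(s_target) + sum_i log sigmoid(-s_noise_i),
    # i.e. minimize the negative of that quantity
    target = F.logsigmoid(scores[:, 0])
    noise = F.logsigmoid(-scores[:, 1:]).sum(dim=1)
    return -(target + noise).mean()
```

Because only the target and sampled noise columns enter the loss, gradients touch only those rows of the output embedding matrix, which is what makes sparse updates possible.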
Caveat emptor! Be warned that paragraph-vectors is in an early-stage development phase. Feedback, comments, suggestions, contributions, etc. are more than welcome.

Install the paragraph-vectors library:

```shell
git clone https://github.com/inejc/paragraph-vectors.git
cd paragraph-vectors
pip install -e .
```
Note that installation in a virtual environment is recommended.
`data/example.csv`:

```
text,...
"In the week before their departure to Arrakis, when all the final scurrying about had reached a nearly unbearable frenzy, an old crone came to visit the mother of the boy, Paul.",...
"It was a warm night at Castle Caladan, and the ancient pile of stone that had served the Atreides family as home for twenty-six generations bore that cooled-sweat feeling it acquired before a change in the weather.",...
...
```
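A file in this shape can be produced with the standard library. The sketch below assumes only that the first column is named `text` and holds one document per row, as in the example above; the remaining columns (elided as `...`) are left out, and the file name is hypothetical:

```python
import csv
import os

documents = [
    "First example document.",
    "Second example document.",
]

# train.py reads data files from the library's data directory
os.makedirs('data', exist_ok=True)
# hypothetical file name; pass it later as --data_file_name 'my_dataset.csv'
with open('data/my_dataset.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['text'])
    writer.writerows([doc] for doc in documents)
```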
Run train.py with the selected parameters, e.g.:

```shell
python train.py start --data_file_name 'example.csv' --num_epochs 100 --batch_size 32 --num_noise_words 2 --vec_dim 100 --lr 1e-3
```
Parameters:

* `data_file_name`: str
* `model_ver`: str, one of ('dm', 'dbow'), default='dbow'
* `vec_combine_method`: str, one of ('sum', 'concat'), default='sum'
* `context_size`: int, default=0
* `num_noise_words`: int
* `vec_dim`: int
* `num_epochs`: int
* `batch_size`: int
* `lr`: float
* `save_all`: bool, default=False
* `generate_plot`: bool, default=True
* `max_generated_batches`: int, default=5
* `num_workers`: int, default=1

Batches are generated in parallel when `num_workers` > 1.

Export the trained paragraph vectors to a csv file:

```shell
python export_vectors.py start --data_file_name 'example.csv' --model_file_name 'example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000_epoch.25_loss.0.981524.pth.tar'
```
Parameters:

* `data_file_name`: str
* `model_file_name`: str (a model trained on the `data_file_name` dataset)

Figure: first two principal components (1% cumulative variance explained) of 300-dimensional document vectors trained on arXiv abstracts. Shown are two subcategories from Computer Science. The dataset comprised 74219 documents and 91417 unique words.
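A projection like the one in the figure can be reproduced from exported vectors. The sketch below assumes only a NumPy array of shape (num_docs, vec_dim) — how that array is read out of the exported csv is left to the reader, since the file's exact layout is not specified here — and computes the first two principal components directly via SVD:

```python
import numpy as np

def first_two_components(vectors):
    """Project document vectors of shape (num_docs, vec_dim) onto
    their first two principal components."""
    # center the data so that SVD recovers the principal axes
    centered = vectors - vectors.mean(axis=0)
    # rows of vt are the right-singular vectors, sorted by variance explained
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```

The squared singular values (divided by their total) give the cumulative variance explained, which is how a figure can report a number like the 1% above.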