Capsule research with our trivial contribution
Hongyang Li, Xiaoyang Guo, Bo Dai, et al.
The official PyTorch implementation of our ECCV 2018 paper.
Requirements: PyTorch 0.3.x or 0.4.x; Linux; Python 3.x.
On the research side:

The easiest way to run the code in the terminal, after cloning/downloading this repo, is:

```shell
python main.py
```
If you want to play with the parameters and/or assign the experiment to specific GPUs:

```shell
# gpu_id index
CUDA_VISIBLE_DEVICES=0,2 \
python main.py \
    --device_id=0,2 \
    --experiment_name=encapnet_default \
    --dataset=cifar10 \
    --net_config=encapnet_set_OT
    # other arguments here ...
```
For a full list of arguments, see the `option/option.py` file.
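As a rough sketch of how such an option file typically parses these arguments — the flag names below mirror the example command, but this parser is an illustration, not the repo's actual `option/option.py`:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical parser mirroring the flags in the example command above;
    # the real definitions and defaults live in option/option.py.
    parser = argparse.ArgumentParser(description='encapnet options (sketch)')
    parser.add_argument('--device_id', type=str, default='0',
                        help='comma-separated GPU indices, e.g. "0,2"')
    parser.add_argument('--experiment_name', type=str, default='encapnet_default')
    parser.add_argument('--dataset', type=str, default='cifar10')
    parser.add_argument('--net_config', type=str, default='encapnet_set_OT')
    parser.add_argument('--base_save_folder', type=str, default='result',
                        help='where logs and models are written')
    return parser.parse_args(argv)

args = parse_args(['--device_id=0,2', '--dataset=cifar10'])
print(args.device_id.split(','))  # ['0', '2']
```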
Note how we launch the multi-GPU mode above: the index `0,2` is passed to both the environment variable (`CUDA_VISIBLE_DEVICES`) and the `--device_id` argument.
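For intuition, `CUDA_VISIBLE_DEVICES` restricts which physical GPUs the process can see, and CUDA renumbers the visible ones from 0. A small sketch of that renumbering in pure Python (no CUDA required; the helper name is ours, not the repo's):

```python
import os

def visible_gpu_map(env=None):
    """Map logical device index -> physical GPU id, mimicking how
    CUDA renumbers the GPUs listed in CUDA_VISIBLE_DEVICES from 0."""
    env = os.environ if env is None else env
    raw = env.get('CUDA_VISIBLE_DEVICES', '')
    physical = [int(x) for x in raw.split(',') if x.strip() != '']
    return {logical: phys for logical, phys in enumerate(physical)}

# With CUDA_VISIBLE_DEVICES=0,2 the process sees two logical devices,
# backed by physical GPUs 0 and 2:
mapping = visible_gpu_map({'CUDA_VISIBLE_DEVICES': '0,2'})
print(mapping)  # {0: 0, 1: 2}
```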
This project is organized in the most common manner:

```
main.py
└── layers/train_val.py
    └── layers/network.py    # forward flow control
        ├── net_config.py    # model definition
        ├── cap_layer.py     # capsule layer submodules; core part
        └── OT_module.py     # optimal transport unit; core part
data/create_dset.py
option/option.py
utils/
```
Datasets will be automatically downloaded and put under the `data` folder. Output files (logs, models) reside in the `--base_save_folder` (default: `result`).
To add more structures or change components, extend the if-else statement starting from the `net_config.py` file. To add an encapsulated layer with (or without) an OT unit in your own network, refer to the core modules `cap_layer.py` and `OT_module.py`.
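For intuition on what a capsule layer computes, here is the standard squash nonlinearity from the original capsule literature, in NumPy. This is a sketch for illustration only; the actual submodules in `cap_layer.py` may differ:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity used by capsule layers: scales each capsule
    vector to a length in [0, 1) while preserving its direction, so
    the length can act as an activation probability."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)          # in [0, 1)
    return scale * s / np.sqrt(sq_norm + eps)  # rescaled unit direction

caps = np.random.randn(4, 8)   # 4 capsules, 8-dim pose vectors each
out = squash(caps)
print(np.linalg.norm(out, axis=-1))  # all lengths fall below 1
```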
Install `visdom` and use it to visualize training dynamics (tested with PyTorch 0.4.x).

Please cite in the following manner if you find this work useful in your research:
```
@inproceedings{li2018encapsulation,
    author = {Hongyang Li and Xiaoyang Guo and Bo Dai and Wanli Ouyang and Xiaogang Wang},
    title = {Neural Network Encapsulation},
    booktitle = {ECCV},
    year = {2018}
}
```