# [AAAI 2019] Self-Ensembling Attention Networks: Addressing Domain Shift for Semantic Segmentation

PyTorch implementation of our method for domain adaptation in the semantic segmentation task, described in our paper *Self-Ensembling Attention Networks: Addressing Domain Shift for Semantic Segmentation*.

## Citation

Please cite our paper if you find it useful for your research.
```
@inproceedings{SEAN,
  title={Self-Ensembling Attention Networks: Addressing Domain Shift for Semantic Segmentation},
  author={Xu, Yonghao and Du, Bo and Zhang, Lefei and Zhang, Qian and Wang, Guoli and Zhang, Liangpei},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={33},
  pages={5581--5588},
  year={2019}
}
```
## Installation

- Install [PyTorch 0.4.0](https://github.com/pytorch/pytorch) with Python 3.6.
- Clone this repo:

```
git clone https://github.com/YonghaoXu/SEANet
```
## Dataset

- Download the GTA-5 dataset.
- Download the SYNTHIA-RAND-CITYSCAPES dataset.
- Download the Cityscapes dataset.
## Training

### GTA-5 to Cityscapes

- Replace `--data_dir_source` in `SEAN_GTA5.py` with your GTA-5 dataset folder.
- Replace `--data_dir_target` in `SEAN_GTA5.py` with your Cityscapes dataset folder.
- Replace `--restore_from` in `SEAN_GTA5.py` with your pretrained VGG model path.
- See `dataset/gta5_dataset.py` and `dataset/cityscapes_dataset.py` for further guidance about how the images and ground-truth files are organized.
- Run:

```
python SEAN_GTA5.py
```
### SYNTHIA to Cityscapes

- Replace `--data_dir_source` in `SEAN_Synthia.py` with your SYNTHIA dataset folder.
- Replace `--data_dir_target` in `SEAN_Synthia.py` with your Cityscapes dataset folder.
- Replace `--restore_from` in `SEAN_Synthia.py` with your pretrained VGG model path.
- See `dataset/synthia_dataset.py` and `dataset/cityscapes16_dataset.py` for further guidance about how the images and ground-truth files are organized.
- Run:

```
python SEAN_Synthia.py
```
## Evaluation

- Replace `--data_dir` in `evaluation.py` with your Cityscapes dataset folder.
- Replace `--restore_from` in `evaluation.py` with your trained model path. You can also download our GTA-5 to Cityscapes model for a quick look.
- Run:

```
python evaluation.py
```
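For reference, the mean IoU metric reported by the evaluation is conventionally computed from a confusion matrix as shown in this sketch. It illustrates the standard formula only, not the exact code in `evaluation.py`; for the SYNTHIA case the matrix would simply be restricted to the 16 shared categories.

```python
# Standard per-class IoU and mean IoU from a confusion matrix
# (illustrative sketch, not the repo's exact test_mIoU implementation).
import numpy as np

def mean_iou(confusion):
    """confusion[i, j] counts pixels of ground-truth class i predicted as class j."""
    confusion = np.asarray(confusion, dtype=np.float64)
    tp = np.diag(confusion)
    # union = ground-truth pixels + predicted pixels - intersection, per class
    union = confusion.sum(axis=1) + confusion.sum(axis=0) - tp
    iou = tp / np.maximum(union, 1)  # guard against empty classes
    return iou.mean()
```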
- For the SYNTHIA to Cityscapes case, replace the `test_mIoU` function in `evaluation.py` with the `test_mIoU16` function. Since only 16 categories are shared between the two datasets in this case, the code that writes the segmentation maps also needs further modification. If you would like to share your implementation for this issue, please submit a pull request.

## Note

- Setting a large value for `--attention_threshold` would be detrimental to the performance of the framework. Empirically, 0 to 0.3 is a suitable range for this parameter.
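To see intuitively why a large threshold hurts, consider a generic thresholding step of the kind such a parameter usually controls: attention values below the threshold are suppressed, so a high threshold discards most of the attention map. This is a hypothetical sketch of the mechanism, not the code actually used in this repo.

```python
# Hypothetical illustration of an attention threshold: values at or below
# the threshold are zeroed out, the rest pass through unchanged.
import numpy as np

def threshold_attention(attention, threshold=0.3):
    """Suppress low-confidence attention values; keep the rest unchanged."""
    attention = np.asarray(attention, dtype=np.float64)
    return np.where(attention > threshold, attention, 0.0)
```

With `threshold=0.3`, a map like `[0.1, 0.5, 0.29]` keeps only the middle value; raising the threshold toward 1.0 would zero out nearly everything, which matches the observation that large values degrade performance.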