[CVPR2022] Representation Compensation Networks for Continual Semantic Segmentation
Chang-Bin Zhang1, Jia-Wen Xiao1, Xialei Liu1, Ying-Cong Chen2, Ming-Ming Cheng1
1 College of Computer Science, Nankai University
2 The Hong Kong University of Science and Technology
There are two commonly used settings, disjoint and overlapped. In the disjoint setting, we assume all future classes are known in advance, so the images in the current training step do not contain any future classes. The overlapped setting allows classes from future steps to appear in the current training images. We call each training session on a newly added dataset a step. Formally, X-Y denotes the continual setting in our experiments, where X is the number of classes trained in the first step; each subsequent step adds a dataset containing Y new classes.
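The difference between the two settings can be sketched in a few lines of Python (a toy illustration; select_images is a hypothetical helper, not part of the released code):

```python
# Hypothetical sketch of how the two protocols select training images
# for one continual step (not part of the released code).
def select_images(image_labels, old_classes, current_classes, setting):
    """image_labels: list of sets of class IDs present in each image."""
    selected = []
    for idx, labels in enumerate(image_labels):
        if not labels & current_classes:
            continue  # both settings require at least one current class
        if setting == "disjoint" and not labels <= old_classes | current_classes:
            continue  # disjoint: no future classes may appear in the image
        selected.append(idx)
    return selected

images = [{1, 2}, {2, 16}, {16}, {16, 20}]  # sets of class IDs per image
old, cur = set(range(1, 16)), {16}          # e.g. the second step of 15-1
print(select_images(images, old, cur, "disjoint"))    # -> [1, 2]
print(select_images(images, old, cur, "overlapped"))  # -> [1, 2, 3]
```

The last image contains the future class 20, so it is kept under the overlapped protocol but discarded under the disjoint one.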
Several settings are reported in our paper. You can also try other custom settings.
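For any custom X-Y setting, the class groups learned at each step can be sketched as follows (class_splits is a hypothetical helper, not part of the released code; class 0 is assumed to be background):

```python
def class_splits(num_classes, x, y):
    """Class-ID groups learned at each step of an X-Y setting."""
    splits = [list(range(1, x + 1))]  # first step: the initial X classes
    start = x + 1
    while start <= num_classes:       # each later step adds Y classes
        splits.append(list(range(start, min(start + y, num_classes + 1))))
        start += y
    return splits

# Pascal VOC has 20 foreground classes, so 15-1 yields 1 + 5 = 6 steps:
print(class_splits(20, 15, 1))  # [[1, ..., 15], [16], [17], [18], [19], [20]]
```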
Continual Class Segmentation:

Results on Pascal VOC 2012 (mIoU, %):

Method | Pub. | 15-5 disjoint | 15-5 overlapped | 15-1 disjoint | 15-1 overlapped | 10-1 disjoint | 10-1 overlapped | 5-3 overlapped | 5-3 disjoint |
---|---|---|---|---|---|---|---|---|---|
LWF | TPAMI 2017 | 54.9 | 55.0 | 5.3 | 5.5 | 4.3 | 4.8 | | |
ILT | ICCVW 2019 | 58.9 | 61.3 | 7.9 | 9.2 | 5.4 | 5.5 | | |
MiB | CVPR 2020 | 65.9 | 70.0 | 39.9 | 32.2 | 6.9 | 20.1 | | |
SDR | CVPR 2021 | 67.3 | 70.1 | 48.7 | 39.5 | 14.3 | 25.1 | | |
PLOP | CVPR 2021 | 64.3 | 70.1 | 46.5 | 54.6 | 8.4 | 30.5 | | |
Ours | CVPR 2022 | 67.3 | 72.4 | 54.7 | 59.4 | 18.2 | 34.3 | 42.88 | |

Results on ADE20K (mIoU, %):

Method | Pub. | 100-50 overlapped | 100-10 overlapped | 50-50 overlapped | 100-5 overlapped |
---|---|---|---|---|---|
ILT | ICCVW 2019 | 17.0 | 1.1 | 9.7 | 0.5 |
MiB | CVPR 2020 | 32.8 | 29.2 | 29.3 | 25.9 |
PLOP | CVPR 2021 | 32.9 | 31.6 | 30.4 | 28.7 |
Ours | CVPR 2022 | 34.5 | 32.1 | 32.5 | 29.6 |

Continual Domain Segmentation:

Results on Cityscapes (mIoU, %):

Method | Pub. | 11-5 | 11-1 | 1-1 |
---|---|---|---|---|
LWF | TPAMI 2017 | 59.7 | 57.3 | 33.0 |
LWF-MC | CVPR 2017 | 58.7 | 57.0 | 31.4 |
ILT | ICCVW 2019 | 59.1 | 57.8 | 30.1 |
MiB | CVPR 2020 | 61.5 | 60.0 | 42.2 |
PLOP | CVPR 2021 | 63.5 | 62.1 | 45.2 |
Ours | CVPR 2022 | 64.3 | 63.0 | 48.9 |

Extension Experiments on Continual Classification
sh data/download_voc.sh
sh data/download_ade.sh
sh data/download_cityscapes.sh
conda install --yes --file requirements.txt

(A higher version of PyTorch should also be suitable.)

Put the pretrained models into the pretrained/ directory. All training scripts are in the scripts/ directory. For example, you can train the model by:

sh scripts/voc/rcil_10-1-overlap.sh
To evaluate a trained model, you can simply modify the bash file by adding --test, like:
CUDA_VISIBLE_DEVICES=${GPU} python3 -m torch.distributed.launch --master_port ${PORT} --nproc_per_node=${NB_GPU} run.py --data xxx ... --test
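The placeholder variables in the command above (GPU, PORT, NB_GPU) are set inside the bash scripts; as an illustration only (these exact values are assumptions, not from the repository), they could look like:

```shell
# Illustrative values only -- adjust for your machine.
GPU=0,1      # GPU IDs exposed through CUDA_VISIBLE_DEVICES
NB_GPU=2     # number of worker processes, one per visible GPU
PORT=29500   # any free TCP port for the torch.distributed rendezvous
```

The remaining arguments (--data xxx ...) are specific to each experiment and are left as in the provided scripts.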
If this work is useful to you, please cite us:
@inproceedings{zhang2022representation,
title={Representation Compensation Networks for Continual Semantic Segmentation},
author={Zhang, Chang-Bin and Xiao, Jia-Wen and Liu, Xialei and Chen, Ying-Cong and Cheng, Ming-Ming},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={7053--7064},
year={2022}
}
If you have any questions about this work, please feel free to contact us (zhangchbin ^ mail.nankai.edu.cn or zhangchbin ^ gmail.com).
This code is heavily borrowed from [MiB] and [PLOP].
There is a collection of AWESOME things about continual semantic segmentation, including papers, code, demos, etc. Feel free to open a pull request and star.