(ECCV 2020) Conditional Sequential Modulation for Efficient Global Image Retouching
By Jingwen He*, Yihao Liu*, Yu Qiao, and Chao Dong (* indicates equal contribution)
Left: Compared with existing state-of-the-art methods, our method achieves superior performance with extremely few parameters (1/13 of HDRNet and 1/250 of White-Box). The diameter of each circle represents the number of trainable parameters. Right: Image retouching examples.
The first row shows smooth transitions between different styles (expert A to expert B) via image interpolation. In the second row, we use image interpolation to control the retouching strength from the input image to the automatically retouched result. The interpolation coefficient α is denoted for each image.
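The image interpolation described above can be sketched as a pixel-wise linear blend. This is a minimal illustration with a hypothetical helper and toy 1-D "images", not the repository's implementation:

```python
def interpolate_images(img_a, img_b, alpha):
    """Pixel-wise linear blend: (1 - alpha) * img_a + alpha * img_b.

    alpha = 0 returns img_a (e.g., the input image); alpha = 1 returns
    img_b (e.g., the retouched result); intermediate values give a smooth
    transition in retouching strength or style.
    """
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(img_a, img_b)]

# Hypothetical flattened pixel intensities, for illustration only
input_pixels = [0.0, 100.0, 200.0]
retouched_pixels = [50.0, 150.0, 250.0]
print(interpolate_images(input_pixels, retouched_pixels, 0.5))  # [25.0, 125.0, 225.0]
```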
```
@article{he2020conditional,
  title={Conditional Sequential Modulation for Efficient Global Image Retouching},
  author={He, Jingwen and Liu, Yihao and Qiao, Yu and Dong, Chao},
  journal={arXiv preprint arXiv:2009.10390},
  year={2020}
}
```
```shell
pip install numpy opencv-python lmdb pyyaml
pip install tb-nightly future
pip install tensorboardX
```
Here, we provide the preprocessed MIT-Adobe FiveK dataset, which contains both training and testing pairs.
To test the model:

1. Modify the configuration file `options/test/test_Enhance.yml`, e.g., `dataroot_GT`, `dataroot_LQ`, and `pretrain_model_G`. (We provide a pretrained model in `experiments/pretrain_models/csrnet.pth`.)
2. Run:

```shell
python test_CSRNet.py -opt options/test/test_Enhance.yml
```
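For reference, the fields mentioned above might be laid out as follows. The surrounding structure here is a hypothetical sketch; only the key names (`dataroot_GT`, `dataroot_LQ`, `pretrain_model_G`) and the pretrained-model path come from this README, so check the actual YAML file for the exact nesting:

```yaml
datasets:
  test:
    dataroot_GT: /path/to/fivek/test/GT   # ground-truth (expert-retouched) images
    dataroot_LQ: /path/to/fivek/test/LQ   # input images to be retouched
path:
  pretrain_model_G: experiments/pretrain_models/csrnet.pth
```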
To compute quantitative metrics, specify `input_path` and `GT_path` in `calculate_metrics.py` (lines 139-140), then run:

```shell
python calculate_metrics.py
```
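Assuming PSNR is among the metrics the script reports (the README only says "metrics", so this is an assumption), its computation can be sketched in pure Python:

```python
import math

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two images given as flat pixel lists.

    Assumed metric for illustration; not copied from calculate_metrics.py.
    """
    # Mean squared error over corresponding pixels
    mse = sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Identical images give infinite PSNR; maximally different 8-bit pixels give 0 dB
print(psnr([10.0, 20.0], [10.0, 20.0]))   # inf
print(psnr([0.0, 0.0], [255.0, 255.0]))   # 0.0
```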
To train the model:

1. Modify the configuration file `options/train/train_Enhance.yml`, e.g., `dataroot_GT` and `dataroot_LQ`.
2. Run:

```shell
python train.py -opt options/train/train_Enhance.yml
```