CLCC CVPR21

An official TensorFlow implementation of “CLCC: Contrastive Learning for Color Constancy” accepted at CVPR 2021.

CLCC: Contrastive Learning for Color Constancy (CVPR 2021)

Yi-Chen Lo*, Chia-Che Chang*, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, Kevin Jou

MediaTek Inc., Hsinchu, Taiwan

(*) indicates equal contribution.

Paper | Poster | 5-min Video | 5-min Slides | 10-min Slides

Important update (2022/03/09)

Dear user, our released dataset and ImageNet-pretrained weights were automatically deleted by the cloud storage service (Mega). For several reasons (the dataset is too large to upload freely to other cloud storage services, the company's policy on releasing datasets, and a heavy personal workload), we cannot re-host them. We suggest downloading and re-processing the dataset by following issue #2. We're sorry for the inconvenience.

Dataset

We preprocess each fold of the dataset and store each sample in .pkl format. Each sample contains the following fields (a loading sketch follows the list):

  • Raw image: color checker masked, black level subtracted, and converted to a uint16 [0, 65535] BGR numpy array with shape (H, W, 3).
  • RGB label: L2-normalized numpy vector with shape (3,).
  • Color checker: [0, 4095] BGR numpy array with shape (24, 3), used for the raw-to-raw mapping presented in our paper (see util/raw2raw.py and Section 4.3). A few of them are stored as all zeros due to failed color checker detection. Note that we convert it to RGB format during preprocessing in dataloader.py, and our raw-to-raw mapping algorithm also manipulates it in RGB format.
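
For reference, here is a minimal sketch of loading one preprocessed sample. The key names ("img", "label", "cc24") and the file name are assumptions for illustration only; please check dataloader.py for the exact field names used in the released .pkl files.

import pickle
import numpy as np

# NOTE: Key names ("img", "label", "cc24") and the file name below are
# illustrative assumptions; see dataloader.py for the actual fields.
with open("data/gehler/0/sample_0000.pkl", "rb") as f:
    sample = pickle.load(f)

raw = sample["img"]      # uint16 BGR image, shape (H, W, 3), range [0, 65535]
label = sample["label"]  # L2-normalized illuminant vector, shape (3,)
cc24 = sample["cc24"]    # BGR color checker, shape (24, 3), range [0, 4095]

# The illuminant label is stored with unit L2 norm.
assert np.isclose(np.linalg.norm(label), 1.0, atol=1e-3)

# Convert BGR -> RGB, matching what dataloader.py does during preprocessing.
raw_rgb = raw[..., ::-1]
cc24_rgb = cc24[:, ::-1]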

Training and Evaluation

CLCC is a Python 3 & TensorFlow 1.x implementation based on the FC4 codebase.

  • Dataset preparation: Download the preprocessed dataset here. Make sure your dataset folder is structured as <DATA_DIR>/<DATA_NAME>/<FOLD_ID> (e.g., data/gehler/0), just like in the download source.

  • Pretrained weights preparation: Download the ImageNet-pretrained weights here and place the weight files under pretrained_models/imagenet/.

  • Training: Modify config.py (e.g., rename EXP_NAME and specify the training data via DATA_NAME, TRAIN_FOLDS, and TEST_FOLDS; see the sketch after this list) and run train.py. Checkpoints are saved under ckpts/EXP_NAME during training.

  • Evaluation: Once training is done, evaluate a checkpoint on a specific test fold with eval.py. We recommend referring to scripts/eval_squeezenet_clcc_gehler.sh for 3-fold cross-validation.
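
As mentioned in the Training step above, here is a hedged sketch of the config.py fields you would typically edit. The values shown are illustrative assumptions only; consult config.py in the repository for the authoritative names and defaults.

# config.py (excerpt) -- illustrative values only.
EXP_NAME = "squeezenet_clcc_gehler_fold0"  # checkpoints are saved to ckpts/EXP_NAME
DATA_DIR = "data"                          # root folder of the preprocessed dataset
DATA_NAME = "gehler"                       # dataset folder under DATA_DIR
TRAIN_FOLDS = [1, 2]                       # folds used for training
TEST_FOLDS = [0]                           # held-out fold for evaluation

With the config set, run train.py, then evaluate the resulting checkpoint with eval.py on the held-out fold; rotating TRAIN_FOLDS/TEST_FOLDS over the three folds reproduces the cross-validation in scripts/eval_squeezenet_clcc_gehler.sh.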

Acknowledgments

Our implementation builds upon the FC4 codebase.

Citation

@InProceedings{Lo_2021_CVPR,
    author    = {Lo, Yi-Chen and Chang, Chia-Che and Chiu, Hsuan-Chao and Huang, Yu-Hao and Chen, Chia-Ping and Chang, Yu-Lin and Jou, Kevin},
    title     = {CLCC: Contrastive Learning for Color Constancy},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {8053-8063}
}