FEQE

Official TensorFlow implementation of "Fast and Efficient Image Quality Enhancement via Desubpixel Convolutional Neural Networks", ECCV 2018 Workshops.

Citation

Please cite our paper if it is helpful for your research:

@InProceedings{Vu_2018_ECCV_Workshops,
author = {Vu, Thang and Van Nguyen, Cao and Pham, Trung X. and Luu, Tung M. and Yoo, Chang D.},
title = {Fast and Efficient Image Quality Enhancement via Desubpixel Convolutional Neural Networks},
booktitle = {The European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}

Comparison of proposed FEQE with other state-of-the-art super-resolution and enhancement methods

Network architecture

Proposed desubpixel
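Desubpixel is the inverse of the subpixel (pixel-shuffle) upsampling operation: each r×r spatial block is rearranged into r² channels, so the resolution drops without discarding any pixel information. A minimal NumPy sketch of this rearrangement (function names are illustrative, not taken from the repo):

```python
import numpy as np

def desubpixel(x, r=2):
    """Space-to-depth: (H, W, C) -> (H//r, W//r, C*r*r).

    Rearranges each r x r spatial block into r*r channels,
    so downsampling loses no pixel information.
    """
    h, w, c = x.shape
    assert h % r == 0 and w % r == 0
    # Split H and W into (blocks, r), then move the r x r offsets into channels.
    x = x.reshape(h // r, r, w // r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)              # (H//r, W//r, r, r, C)
    return x.reshape(h // r, w // r, r * r * c)

def subpixel(x, r=2):
    """Depth-to-space: exact inverse of desubpixel."""
    h, w, c = x.shape
    x = x.reshape(h, w, r, r, c // (r * r))
    x = x.transpose(0, 2, 1, 3, 4)              # (H, r, W, r, C//r^2)
    return x.reshape(h * r, w * r, c // (r * r))

img = np.arange(4 * 4 * 3).reshape(4, 4, 3).astype(np.float32)
down = desubpixel(img, r=2)                      # shape (2, 2, 12)
assert np.array_equal(subpixel(down, r=2), img)  # lossless round trip
```

In TensorFlow the same rearrangement is available as `tf.nn.space_to_depth` (and `tf.nn.depth_to_space` for the inverse).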

PIRM 2018 challenge results (super-resolution on mobile devices task)

TEAM_ALEX placed first in the overall benchmark score. Refer to PIRM 2018 for details.

Dependencies

  • 1 NVIDIA GPU (about 4 hours of training on a Titan Xp)
  • Python3
  • tensorflow 1.10+
  • tensorlayer 1.9+
  • tensorboardX 1.4+

Download datasets, models, and results

Dataset

  • Train: DIV2K (800 2K-resolution images)
  • Valid: DIV2K (9 val images)
  • Test: Set5, Set14, B100, Urban100
  • Download train+val+test datasets
  • Download test-only dataset

Pretrained models

Paper results

FEQE/
├── checkpoint
│   ├── FEQE
│   └── FEQE-P
├── data
│   ├── DIV2K_train_HR
│   ├── DIV2K_valid_HR_9
│   └── test_benchmark
├── docs
├── model
├── results
└── vgg_pretrained
    └── imagenet-vgg-verydeep-19.mat

Quick start

  1. Download the test-only dataset and put it into the data/ directory
  2. Download the pretrained models and put them into the checkpoint/ directory
  3. Run python test.py --dataset <DATASET_NAME>
  4. Results will be saved into the results/ directory

Training

  1. Download the train+val+test datasets and put them into the data/ directory
  2. Download the pretrained VGG and put it into the vgg_pretrained/ directory
  3. Pretrain with MSE loss on scale 2: python train.py --checkpoint checkpoint/mse_s2 --alpha_vgg 0 --scale 2 --phase pretrain
  4. Finetune with MSE loss on scale 4 (FEQE-P): python train.py --checkpoint checkpoint/mse_s4 --alpha_vgg 0 --pretrained_model checkpoint/mse_s2/model.ckpt
  5. Finetune with full loss on scale 4: python train.py --checkpoint checkpoint/full_s4 --pretrained_model checkpoint/mse_s4/model.ckpt
  6. All models will be saved into the checkpoint/ directory
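The stages above differ only in the loss: with --alpha_vgg 0 training uses pixel-wise MSE alone, while the full loss adds a VGG-feature (perceptual) term. A hedged NumPy sketch of this weighting, assuming the full loss is a weighted sum as the --alpha_vgg flag suggests (the `features` callable is a stand-in for the repo's VGG-19 activations, not real code from it):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def total_loss(sr, hr, alpha_vgg, features=lambda x: x):
    """Pixel MSE plus an alpha_vgg-weighted feature-space MSE.

    `features` stands in for the VGG-19 activations used in the paper;
    with alpha_vgg=0 this reduces to plain MSE (the pretraining setup).
    """
    pixel = mse(sr, hr)
    perceptual = mse(features(sr), features(hr))
    return pixel + alpha_vgg * perceptual

sr = np.zeros((8, 8, 3))
hr = np.ones((8, 8, 3))
assert total_loss(sr, hr, alpha_vgg=0.0) == 1.0  # MSE-only pretraining
```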

Visualization

  1. Start tensorboard: tensorboard --logdir checkpoint
  2. Open YOUR_IP:6006 in your web browser.
  3. Result ranges should be similar to:

Tensorboard

Comprehensive testing

  1. Test FEQE model (defaults): follow Quick start
  2. Test FEQE-P model: python test.py --dataset <DATASET> --model_path <FEQE-P path>
  3. Test perceptual quality: refer to PIRM validation code
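The PSNR figures reported below follow the standard peak signal-to-noise ratio definition; a quick NumPy implementation for sanity-checking results, assuming 8-bit images (peak value 255):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    err = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

ref = np.full((4, 4), 100, dtype=np.uint8)
deg = ref.copy()
deg[0, 0] = 110                             # one pixel off by 10
# MSE = 100/16 = 6.25 -> PSNR = 10*log10(255^2 / 6.25) ≈ 40.17 dB
```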

Quantitative and qualitative results

PSNR/SSIM/Perceptual-Index comparison. Red indicates the best results

Running time comparison. Red indicates the best results

Qualitative comparison

README source: thangvubk/FEQE. License: MIT.