TTSR (CVPR2020)

Official PyTorch implementation of the paper Learning Texture Transformer Network for Image Super-Resolution, accepted at CVPR 2020.

Introduction

We propose an approach named TTSR for the reference-based super-resolution (RefSR) task. Compared to single image super-resolution (SISR), RefSR has an extra high-resolution reference image whose textures can be utilized to help super-resolve the low-resolution input.
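
To make the task setting concrete, here is a minimal interface sketch (assumptions only, not the repository's actual API): the sole difference between SISR and RefSR is the extra high-resolution reference input.

```python
# Minimal task-interface sketch; these functions are illustrative stand-ins,
# not the repository's actual models.
import torch
import torch.nn.functional as F

lr  = torch.randn(1, 3, 40, 40)     # low-resolution input
ref = torch.randn(1, 3, 160, 160)   # high-resolution reference image

def sisr(lr):
    # SISR: super-resolve from the LR input alone.
    return F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)

def refsr(lr, ref):
    # RefSR: additionally search `ref` for matching textures and transfer
    # them onto the upscaled result (texture transfer omitted in this stub).
    return F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)

print(sisr(lr).shape, refsr(lr, ref).shape)   # both -> (1, 3, 160, 160)
```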

Contribution

  1. We are among the first to introduce the transformer architecture into image generation tasks. More specifically, we propose a texture transformer with four closely-related modules for image SR, which achieves significant improvements over SOTA approaches.
  2. We propose a novel cross-scale feature integration module for image generation tasks, which enables our approach to learn a more powerful feature representation by stacking multiple texture transformers (a simplified sketch of this stacking follows this list).
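
As a rough illustration only, the sketch below shows how texture-transfer blocks at two scales can be stacked and their features fused across scales. All class names, channel counts, and shapes here are assumptions for the sketch, not the repository's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureTransferBlock(nn.Module):
    """Toy stand-in for one texture transformer at a single scale:
    fuses the current SR feature with a transferred reference feature."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feat, ref_feat):
        return feat + self.fuse(torch.cat([feat, ref_feat], dim=1))

class CrossScaleFeatureIntegration(nn.Module):
    """Toy CSFI: exchanges information between a coarse and a 2x finer feature map."""
    def __init__(self, channels):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.mix_coarse = nn.Conv2d(2 * channels, channels, 1)
        self.mix_fine = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_coarse, f_fine):
        up = F.interpolate(f_coarse, scale_factor=2, mode="bilinear", align_corners=False)
        out_coarse = self.mix_coarse(torch.cat([f_coarse, self.down(f_fine)], dim=1))
        out_fine = self.mix_fine(torch.cat([f_fine, up], dim=1))
        return out_coarse, out_fine

# Stacking: texture transfer at each scale, then cross-scale fusion.
c = 64
t_coarse, t_fine = TextureTransferBlock(c), TextureTransferBlock(c)
csfi = CrossScaleFeatureIntegration(c)
f1, r1 = torch.randn(1, c, 40, 40), torch.randn(1, c, 40, 40)   # coarse SR / ref features
f2, r2 = torch.randn(1, c, 80, 80), torch.randn(1, c, 80, 80)   # fine SR / ref features
f1, f2 = csfi(t_coarse(f1, r1), t_fine(f2, r2))
print(f1.shape, f2.shape)   # (1, 64, 40, 40) (1, 64, 80, 80)
```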

Approach overview

Main results

Requirements and dependencies

  • Python 3.7 (Anaconda is recommended; a quick version check is sketched after this list)
  • Python packages: pip install opencv-python imageio
  • PyTorch >= 1.1.0
  • torchvision >= 0.4.0
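
A minimal environment sanity check (a sketch, not part of the repository):

```python
# Print installed versions to confirm they meet the requirements above.
import sys
import torch, torchvision, cv2, imageio

print("python     :", sys.version.split()[0])    # expect 3.7.x
print("pytorch    :", torch.__version__)         # expect >= 1.1.0
print("torchvision:", torchvision.__version__)   # expect >= 0.4.0
print("opencv     :", cv2.__version__)
print("imageio    :", imageio.__version__)
print("cuda available:", torch.cuda.is_available())
```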

Model

Pre-trained models can be downloaded from OneDrive, Baidu Cloud (code: 0u6i), or Google Drive; a quick checkpoint-inspection sketch follows the list below.

  • TTSR-rec.pt: trained with only reconstruction loss
  • TTSR.pt: trained with all losses
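
A minimal sketch of inspecting a downloaded checkpoint with PyTorch. The path is a placeholder, and this assumes the file stores a plain state dict; to actually run the model, build the network from the repository's model code and load the state dict into it.

```python
import torch

ckpt_path = "./TTSR.pt"   # placeholder path to TTSR.pt or TTSR-rec.pt

# Load on CPU so the file can be inspected without a GPU.
state = torch.load(ckpt_path, map_location="cpu")

# If the file is a plain state dict, list a few parameter names and shapes.
if isinstance(state, dict):
    print("entries:", len(state))
    for name, value in list(state.items())[:5]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value)
        print(name, shape)

# Then: model = <network built from the repository's model code>
#       model.load_state_dict(state)
```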

Quick test

  1. Clone this github repo
git clone https://github.com/FuzhiYang/TTSR.git
cd TTSR
  2. Download pre-trained models and modify "model_path" in test.sh
  3. Run test
sh test.sh
  4. The results are in "save_dir" (default: ./test/demo/output). An optional helper for preparing a low-resolution test input is sketched after this list.
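
If you want to try your own image, a low-resolution input can be prepared by 4x bicubic downsampling of a high-resolution image. This helper is a sketch, not part of the repository, and the file names are placeholders.

```python
# Create a 4x-downsampled LR image from an HR image for a quick test.
import cv2

hr = cv2.imread("my_hr_image.png")     # placeholder input path
h, w = hr.shape[:2]
lr = cv2.resize(hr, (w // 4, h // 4), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("my_lr_image.png", lr)     # placeholder output path
print("HR:", hr.shape, "-> LR:", lr.shape)
```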

Dataset prepare

  1. Download the CUFED train set and the CUFED test set
  2. Arrange the dataset into the following structure (a quick structure check is sketched after this list):
  • CUFED
    • train
      • input
      • ref
    • test
      • CUFED5
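
A minimal sketch (not part of the repository) for verifying that the expected layout is in place; the dataset path is a placeholder and should match "dataset_dir" in the scripts:

```python
# Check that the expected CUFED directories exist.
import os

dataset_dir = "./CUFED"   # placeholder; use the same path as "dataset_dir"
for sub in ("train/input", "train/ref", "test/CUFED5"):
    path = os.path.join(dataset_dir, sub)
    print(path, "->", "ok" if os.path.isdir(path) else "MISSING")
```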

Evaluation

  1. Prepare the CUFED dataset and modify "dataset_dir" in eval.sh
  2. Download pre-trained models and modify "model_path" in eval.sh
  3. Run evaluation
sh eval.sh
  4. The results are in "save_dir" (default: ./eval/CUFED/TTSR). A rough PSNR check for a single output is sketched after this list.
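
As a rough sanity check only (this is not the repository's evaluation code, which may compute metrics differently, e.g. on the Y channel), PSNR between one result and its ground truth can be computed as follows; file names are placeholders:

```python
# Rough PSNR between a super-resolved result and its ground truth.
import numpy as np
import imageio

sr = imageio.imread("result.png").astype(np.float64)        # placeholder paths
gt = imageio.imread("ground_truth.png").astype(np.float64)

mse = np.mean((sr - gt) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
print(f"PSNR: {psnr:.2f} dB")
```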

Train

  1. Prepare the CUFED dataset and modify "dataset_dir" in train.sh
  2. Run training
sh train.sh
  3. The training results are in "save_dir" (default: ./train/CUFED/TTSR). An illustrative loss combination is sketched after this list.
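
As noted above, TTSR-rec.pt is trained with only the reconstruction loss, while TTSR.pt uses all losses. Purely as an illustration (the weights and helper names below are assumptions, not the repository's actual values), a combined objective has this shape:

```python
import torch
import torch.nn.functional as F

def total_loss(sr, hr, perceptual_loss=None, adversarial_loss=None,
               w_per=1e-2, w_adv=1e-3):
    # Reconstruction loss; the "-rec" model uses only this term.
    loss = F.l1_loss(sr, hr)
    if perceptual_loss is not None:          # hypothetical perceptual-loss callable
        loss = loss + w_per * perceptual_loss(sr, hr)
    if adversarial_loss is not None:         # hypothetical adversarial-loss callable
        loss = loss + w_adv * adversarial_loss(sr)
    return loss

sr, hr = torch.rand(1, 3, 160, 160), torch.rand(1, 3, 160, 160)
print(total_loss(sr, hr).item())   # reconstruction-only case
```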

We also sincerely recommend some other excellent works related to ours. :sparkles:

Citation

@InProceedings{yang2020learning,
  author    = {Yang, Fuzhi and Yang, Huan and Fu, Jianlong and Lu, Hongtao and Guo, Baining},
  title     = {Learning Texture Transformer Network for Image Super-Resolution},
  booktitle = {CVPR},
  year      = {2020},
  month     = {June}
}

Contact

If you encounter any problems, please describe them in the issues or contact the authors.
