[CVPR 2024 Oral] Official repository of FMA-Net
If you find FMA-Net useful, please consider citing:
@inproceedings{youk2024fmanet,
  author    = {Geunhyuk Youk and Jihyong Oh and Munchurl Kim},
  title     = {FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring},
  booktitle = {CVPR},
  year      = {2024},
}
- Python 3.9, PyTorch >= 1.9.1
- Platforms: Ubuntu 22.04, CUDA 11.8
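A quick way to verify the environment (a minimal sketch; the version numbers on your machine may differ):

```python
# Sanity-check the PyTorch/CUDA setup before training or testing.
import torch

print(torch.__version__)          # expect >= 1.9.1
print(torch.version.cuda)         # CUDA version PyTorch was built against (e.g., 11.8)
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```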
The pre-trained model can be downloaded from here.
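Once downloaded, the checkpoint can be inspected with plain PyTorch (a minimal sketch; the file name FMA_Net.pth is a placeholder, not the repository's actual artifact name):

```python
# Peek inside a downloaded checkpoint; "FMA_Net.pth" is a hypothetical name.
import torch

ckpt = torch.load("FMA_Net.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # top-level keys, e.g. a state dict or a wrapper around one
```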
# download code
git clone https://github.com/KAIST-VICLab/FMA-Net
cd FMA-Net
# train FMA-Net on REDS dataset
python main.py --train --config_path experiment.cfg
# test FMA-Net on REDS dataset
python main.py --test --config_path experiment.cfg
# test on your own datasets
python main.py --test_custom --config_path experiment.cfg
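All three modes read their settings from experiment.cfg. Assuming it is an INI-style file (the .cfg extension suggests this, but the repository may parse it differently), it can be inspected with Python's standard configparser before launching a run:

```python
# Print every section and key/value pair of experiment.cfg for a quick review.
import configparser

cfg = configparser.ConfigParser()
cfg.read("experiment.cfg")
for section in cfg.sections():
    print(f"[{section}]")
    for key, value in cfg.items(section):
        print(f"  {key} = {value}")
```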
Please visit our project page and demo video for diverse visual results.
The source code, including the checkpoints, can be freely used for research and education only. Any commercial use requires formal permission from the principal investigator (Prof. Munchurl Kim, [email protected]).
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT): No. 2021-0-00087, Development of high-quality conversion technology for SD/HD low-quality media, and No. RS-2022-00144444, Deep Learning Based Visual Representational Learning and Rendering of Static and Dynamic Scenes.