A state-of-the-art video frame interpolation method using deep semantic flow blending. (CVPR 2020)
FeatureFlow: Robust Video Interpolation via Structure-to-texture Generation (IEEE Conference on Computer Vision and Pattern Recognition 2020)
PS: A requirements.txt is provided, but do not use it directly; it is for reference only, as it contains another project's dependencies.
Click a picture below to download one of them, or click Here (Google) or Here (Baidu) (key: oav2) to download the 360p demos.
360p demos (including comparisons):
720p demos:
$ cd mmdetection
$ pip install -r requirements/build.txt
$ pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
$ pip install -v -e . # or "python setup.py develop"
$ pip list | grep mmdet
$ unzip vimeo_interp_test.zip
$ cd vimeo_interp_test
$ mkdir sequences
$ cp -r target/* sequences/
$ cp -r input/* sequences/
PS: For convenience, you only need to download bdcn_pretrained_on_bsds500.pth (Google Drive), or you can download all of the pre-trained BDCN models its authors provide (Google Drive). For a Baidu Cloud link, see BDCN's GitHub repository.
$ pip install scikit-image visdom tqdm prefetch-generator
Baidu Cloud: ae4x
Place FeFlow.ckpt in ./checkpoints/.
Baidu Cloud: pc0k
$ CUDA_VISIBLE_DEVICES=0 python eval_Vimeo90K.py --checkpoint ./checkpoints/FeFlow.ckpt --dataset_root ~/datasets/videos/vimeo_interp_test --visdom_env test --vimeo90k --imgpath ./results/
$ CUDA_VISIBLE_DEVICES=0 python sequence_run.py --checkpoint checkpoints/FeFlow.ckpt --video_path ./yourvideo.mp4 --t_interp 4 --slow_motion
--t_interp
sets the frame multiple; only powers of 2 (2, 4, 8, ...) are supported. Use the flag --slow_motion
to slow the video down while keeping the original fps.
The output video will be saved as output.mp4 in your working directory.
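One plausible reason only powers of 2 are supported: the model synthesizes a single midpoint frame (t = 0.5) between two inputs, so higher multiples come from recursively interpolating midpoints of midpoints. This is an illustrative sketch of that recursion, not code from the repository; `midpoint_timesteps` is a hypothetical helper.

```python
def midpoint_timesteps(t_interp):
    """Return the intermediate timestamps synthesized for a given multiple.

    Assumes the interpolator only predicts t = 0.5 between two frames,
    so a multiple of 2^k is built by k rounds of midpoint insertion --
    which is why only powers of 2 (2, 4, 8, ...) work.
    """
    assert t_interp >= 2 and t_interp & (t_interp - 1) == 0, "must be a power of 2"
    steps = [0.0, 1.0]
    while len(steps) - 1 < t_interp:
        # insert a midpoint between every adjacent pair of timestamps
        mids = [(a + b) / 2 for a, b in zip(steps, steps[1:])]
        steps = sorted(steps + mids)
    return steps[1:-1]  # intermediate frames only, endpoints are the inputs

print(midpoint_timesteps(4))  # -> [0.25, 0.5, 0.75]
```

With --t_interp 4, three frames are inserted between each input pair; without --slow_motion the fps is scaled up accordingly, while --slow_motion keeps the original fps so playback is slowed.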
The training code train.py is now available. I cannot run it for confirmation because I have left the lab, but it should work with the right argument settings.
$ CUDA_VISIBLE_DEVICES=0,1 python train.py <arguments>
--GEN_DE
which is the flag that sets the model to Stage-I or Stage-II.

@InProceedings{Gui_2020_CVPR,
author = {Gui, Shurui and Wang, Chaoyue and Chen, Qihua and Tao, Dacheng},
title = {FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
See MIT License