[CVPR'2019] PEN-Net: Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting
Yanhong Zeng, Jianlong Fu, Hongyang Chao, and Baining Guo.
In CVPR 2019.
Existing inpainting methods either fill missing regions by copying fine-grained image patches, or generate semantically reasonable content from the region context with a CNN, neglecting the fact that both visual and semantic plausibility are in high demand.
Our proposed PEN-Net combines these two mechanisms: a pyramid-context encoder fills missing regions from high-level feature maps down to low-level ones via cross-layer attention transfer, and a multi-scale decoder with deeply-supervised pyramid losses refines the results.
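The cross-layer attention transfer above can be sketched as follows. This is a simplified illustration of the idea, not the repository's actual implementation: attention scores are computed between locations on a compact feature map (holes attend only to known regions), then the same attention map is reused to fill the corresponding patches of a finer feature map.

```python
import torch
import torch.nn.functional as F

def attention_transfer(high_feat, low_feat, mask):
    """Simplified cross-layer attention transfer (a sketch, not PEN-Net's
    exact ATN module).

    high_feat: (B, C, H, W)    compact feature where attention is computed
    low_feat:  (B, C2, sH, sW) finer feature to be filled
    mask:      (B, 1, H, W)    1 = missing region, 0 = known
    """
    B, C, H, W = high_feat.shape
    # Cosine similarity between every pair of high-level locations.
    f = F.normalize(high_feat.view(B, C, H * W), dim=1)      # (B, C, N)
    attn = torch.bmm(f.transpose(1, 2), f)                   # (B, N, N)
    # Holes may only attend to known locations: mask out hole keys.
    m = mask.view(B, 1, H * W)
    attn = attn.masked_fill(m.bool(), float('-inf'))
    attn = F.softmax(attn, dim=-1)                           # (B, N, N)

    # Reuse the attention map one level down: unfold the finer feature
    # into patches aligned with the high-level locations.
    s = low_feat.shape[-1] // W                              # scale factor
    patches = F.unfold(low_feat, kernel_size=s, stride=s)    # (B, C2*s*s, N)
    filled = torch.bmm(patches, attn.transpose(1, 2))        # weighted copy
    filled = F.fold(filled, output_size=low_feat.shape[-2:],
                    kernel_size=s, stride=s)
    # Keep known regions, fill holes with attended content.
    up_mask = F.interpolate(mask, size=low_feat.shape[-2:], mode='nearest')
    return low_feat * (1 - up_mask) + filled * up_mask
```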
We re-implement PEN-Net in PyTorch for faster speed; this version differs slightly from the original TensorFlow implementation used in our paper. Each triad shows the original image, the masked input, and our result.
For training:
python train.py -c [config_file] -n [model_name] -m [mask_type] -s [image_size]
For example:
python train.py -c configs/celebahq.json -n pennet -m square -s 256
python train.py -n pennet -m square -s 256
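The square mask type presumably corresponds to a centered square hole. A minimal sketch of generating such a mask (the function name and hole ratio are hypothetical; the repository's mask generation may differ):

```python
import numpy as np

def center_square_mask(image_size, hole_ratio=0.5):
    """Binary mask with a centered square hole (1 = missing, 0 = known).
    Hypothetical helper illustrating the 'square' mask type; the actual
    mask generation in this repository may differ."""
    mask = np.zeros((image_size, image_size), dtype=np.float32)
    hole = int(image_size * hole_ratio)
    top = (image_size - hole) // 2
    mask[top:top + hole, top:top + hole] = 1.0
    return mask
```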
For testing:
python test.py -c [config_file] -n [model_name] -m [mask_type] -s [image_size]
For example:
python test.py -c configs/celebahq.json -n pennet -m square -s 256
For evaluation:
python eval.py -r [result_path]
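Inpainting results are commonly evaluated with PSNR between the completed image and the ground truth. A minimal sketch of the metric (eval.py may report additional metrics such as SSIM):

```python
import numpy as np

def psnr(result, target, max_val=255.0):
    """Peak signal-to-noise ratio between an inpainted result and the
    ground truth, in dB. Higher is better."""
    mse = np.mean((result.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```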
Download the models below and put them under release_model/
CELEBA-HQ | DTD | Facade | Places2
We also provide more results on central square masks below for comparison.
Visualization on TensorBoard for training is supported.
Run tensorboard --logdir release_model --port 6006 to view training progress.
If any part of our paper or code is helpful to your work, please cite:
@inproceedings{yan2019PENnet,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {1486--1494},
year = {2019}
}
Licensed under the MIT License.