Reference GitHub repository for the paper "Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data". We propose a procedure to synthetically generate realistic dual-pixel (DP) data. Our synthesis approach mimics the optical image formation found on DP sensors and can be applied to virtual scenes rendered with standard computer graphics software. Leveraging these realistic synthetic DP images, we introduce a new recurrent convolutional network (RCN) architecture that improves defocus deblurring results and is suitable for use with single-frame and multi-frame data captured by DP sensors.
Abdullah Abuolaim1
Mauricio Delbracio2
Damien Kelly2
Michael S. Brown1
Peyman Milanfar2
1York University 2Google Research
This repository accompanies Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data, Abuolaim et al., Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) 2021 (YouTube presentation). If you use our dataset or code, please cite our paper:
@inproceedings{abuolaim2021learning,
title={Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data},
author={Abuolaim, Abdullah and Delbracio, Mauricio and Kelly, Damien and Brown, Michael S and Milanfar, Peyman},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
The code was tested with:
Although untested with other configurations, the code may work with library versions other than those specified.
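Before running anything, it can help to check which library versions are actually installed. The sketch below is a minimal, hypothetical checker; the package names (TensorFlow, NumPy, OpenCV) are assumptions based on the repository's Keras `.hdf5` models, not a reproduction of the README's tested-version list:

```python
import importlib

def report_versions(names=("tensorflow", "numpy", "cv2")):
    """Return {package: version-or-None} for a hypothetical dependency
    list. Package names here are assumptions, not the repository's
    official requirements."""
    found = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None  # package missing; install before running
    return found
```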
git clone https://github.com/Abdullah-Abuolaim/recurrent-defocus-deblurring-synth-dual-pixel.git
cd ./recurrent-defocus-deblurring-synth-dual-pixel/synthetic_dp_defocus_blur_code/
Download SYNTHIA-SF dataset or visit SYNTHIA downloads
Run the code in synthetic_dp_defocus_blur_code
directory to start generating data as follows:
python synthetic_dp_defocus_blur.py --data_dir ./SYNTHIA-SF/ --radial_dis True
Running the above will create the generated dataset dd_dp_dataset_synth_vid
in synthetic_dp_defocus_blur_code.
The generated dataset is organized based on the following directory structure
The code was tested with:
Although untested with other configurations, the code may work with library versions other than those specified.
git clone https://github.com/Abdullah-Abuolaim/recurrent-defocus-deblurring-synth-dual-pixel.git
cd ./recurrent-defocus-deblurring-synth-dual-pixel/rdpd_code/
All the trained models used in the main paper and supplemental material can be downloaded from this link.
Place the downloaded .hdf5 model inside ModelCheckpoints for testing.
Download the DP defocus deblurring dataset [1] from this link, or visit the project GitHub link.
After downloading and generating the datasets, place them in the same directory, e.g., dd_dp_datasets.
Run main.py in the rdpd_code directory as follows:
python main.py --op_phase test --test_model RDPD+ --data_dir ./dd_dp_datasets/
Testing runs on both dd_dp_dataset_canon and dd_dp_dataset_synth_vid.
The results of the tested models will be saved in a results directory that will be created inside rdpd_code.
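To evaluate saved results against the ground truth, a standard metric is peak signal-to-noise ratio (PSNR), as commonly reported for defocus deblurring. The function below is a minimal self-contained sketch, not the repository's own evaluation code:

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """PSNR between a ground-truth image and a deblurred estimate
    (arrays of the same shape; max_val is the dynamic range)."""
    err = np.mean((reference.astype(np.float64)
                   - estimate.astype(np.float64)) ** 2)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```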
Recall that you might need to change the default values of the input arguments to match your local setup.
Download the DP defocus deblurring dataset [1] from this link, or visit the project GitHub link.
After downloading and generating the datasets, place them in the same directory, e.g., dd_dp_datasets.
Run main.py in the rdpd_code directory as follows:
python main.py --op_phase train --ms_edge_loss True --data_dir ./dd_dp_datasets/
Training uses both dd_dp_dataset_canon and dd_dp_dataset_synth_vid.
Other training options (e.g., the number of convLSTM units) can be set via command-line arguments.
The trained model and checkpoints will be saved in ModelCheckpoints after each epoch.
Should you have any questions or suggestions, please feel free to reach out:
Abdullah Abuolaim ([email protected])
[1] Abdullah Abuolaim and Michael S. Brown. Defocus Deblurring Using Dual-Pixel Data. In ECCV, 2020.