Recurrent Defocus Deblurring Synth Dual Pixel

Reference GitHub repository for the paper "Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data". We propose a procedure to generate realistic DP data synthetically. Our synthesis approach mimics the optical image formation found on DP sensors and can be applied to virtual scenes rendered with standard computer software. Leveraging these realistic synthetic DP images, we introduce a new recurrent convolutional network (RCN) architecture that can improve defocus deblurring results and is suitable for use with single-frame and multi-frame data captured by DP sensors.

Project README

Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data

Abdullah Abuolaim1     Mauricio Delbracio2     Damien Kelly2     Michael S. Brown1     Peyman Milanfar2
1York University         2Google Research

RDPD summary teaser figure

Reference GitHub repository for the paper Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data, Abuolaim et al., in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021 (YouTube presentation). If you use our dataset or code, please cite our paper:

@inproceedings{abuolaim2021learning,
  title={Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data},
  author={Abuolaim, Abdullah and Delbracio, Mauricio and Kelly, Damien and Brown, Michael S and Milanfar, Peyman},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Synthetic Dataset

Prerequisites

  • The code was tested with:

    • Python 3.8.3
    • Numpy 1.19.1
    • Scipy 1.5.2
    • Wand 0.6.3
    • Imageio 2.9.0
    • OpenCV 4.4.0

    Although not tested, the code may work with library versions other than those specified
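
If you want to verify your environment against the tested versions, a quick check along the following lines may help. This snippet is illustrative only and not part of the repository; the keys below are the usual pip distribution names, which is an assumption about how the libraries were installed:

    # Compare installed versions against the ones the code was tested with.
    # Illustrative only; package keys are assumed pip distribution names.
    from importlib.metadata import version, PackageNotFoundError

    TESTED = {
        "numpy": "1.19.1",
        "scipy": "1.5.2",
        "Wand": "0.6.3",
        "imageio": "2.9.0",
        "opencv-python": "4.4.0",
    }

    for pkg, expected in TESTED.items():
        try:
            installed = version(pkg)
            note = "OK" if installed.startswith(expected) else f"tested with {expected}"
            print(f"{pkg}: {installed} ({note})")
        except PackageNotFoundError:
            print(f"{pkg}: not installed (tested with {expected})")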

Installation

  • Clone this project to your local machine with HTTPS
git clone https://github.com/Abdullah-Abuolaim/recurrent-defocus-deblurring-synth-dual-pixel.git
cd ./recurrent-defocus-deblurring-synth-dual-pixel/synthetic_dp_defocus_blur_code/

Synthetic dual-pixel (DP) views based on defocus blur

  • Download the SYNTHIA-SF dataset or visit SYNTHIA downloads

    • SYNTHIA-SF dataset contains six image sequences: SEQ1-SEQ6
    • Training: SEQ1, SEQ2, SEQ4, SEQ5, SEQ6
    • Testing: SEQ3
  • Run the code in the synthetic_dp_defocus_blur_code directory to start generating data as follows:

    python synthetic_dp_defocus_blur.py --data_dir ./SYNTHIA-SF/ --radial_dis True
    
    • --data_dir: path to the downloaded SYNTHIA-SF directory
    • --radial_dis: whether to apply radial distortion to the generated DP views
  • Running the above will create the generated dataset dd_dp_dataset_synth_vid inside synthetic_dp_defocus_blur_code

    • A synthetic image sequence is generated for each camera set (i.e., the five camera sets defined in the main paper)
    • There will be 30 image sequences generated in total (5 camera sets × 6 image sequences); a sketch of the underlying defocus model follows the directory listing below
  • The generated dataset is organized according to the following directory structure (see the synthetic dataset structure figure):

    • $dir_name$_c: directory of the final output combined images
    • $dir_name$_l: directory of the corresponding DP left view images
    • $dir_name$_r: directory of the corresponding DP right view images
    • source: images exhibiting defocus blur
    • target: the corresponding all-in-focus images
    • seq_n: image sequence number
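
For intuition on what the generator computes, the amount of defocus blur per pixel follows the thin-lens model: a scene point's circle of confusion grows with its distance from the focal plane, and on a DP sensor the left and right photodiodes integrate light over opposite halves of the aperture, yielding mirrored half blur kernels. Below is a minimal sketch of the circle-of-confusion computation; the function and parameter names are hypothetical, not those of synthetic_dp_defocus_blur.py:

    import numpy as np

    def coc_radius(depth, focus_dist, focal_len, f_number):
        """Thin-lens circle-of-confusion radius for points at `depth`.

        All distances share one unit (e.g., meters). Hypothetical
        illustration; the actual synthesis code may parameterize the
        virtual camera differently.
        """
        aperture = focal_len / f_number  # aperture diameter
        return (aperture * focal_len * np.abs(depth - focus_dist)
                / (depth * (focus_dist - focal_len)))

    # Example: a 50 mm f/2 lens focused at 4 m. Points on the focal
    # plane map to zero blur; blur grows away from it.
    depths = np.array([1.0, 2.0, 4.0, 10.0])
    print(coc_radius(depths, focus_dist=4.0, focal_len=0.05, f_number=2.0))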

RDPD Codes and Models

Prerequisites

  • The code was tested with:

    • Python 3.8.3
    • TensorFlow 2.2.0
    • Keras 2.4.3
    • Numpy 1.19.1
    • Scipy 1.5.2
    • Scikit-image 0.16.2
    • Scikit-learn 0.23.2
    • OpenCV 4.4.0

    Although not tested, the code may work with library versions other than those specified

Installation

  • Clone this project to your local machine with HTTPS
git clone https://github.com/Abdullah-Abuolaim/recurrent-defocus-deblurring-synth-dual-pixel.git
cd ./recurrent-defocus-deblurring-synth-dual-pixel/rdpd_code/

Testing

  • All the trained models used in the main paper and supplemental material can be downloaded from this link

  • Place the downloaded .hdf5 model inside ModelCheckpoints for testing

  • Download the DP defocus deblurring dataset [1] (link), or visit the project GitHub (link)

  • After downloading and generating the datasets, place them in the same directory, e.g., dd_dp_datasets

  • Run main.py in the rdpd_code directory as follows:

    python main.py --op_phase test --test_model RDPD+ --data_dir ./dd_dp_datasets/
    
    • --op_phase: operation phase, training or testing
    • --test_model: name of the model to test
    • --data_dir: path to the directory containing both datasets, i.e., dd_dp_dataset_canon and dd_dp_dataset_synth_vid
  • The results of the tested models will be saved in a results directory created inside rdpd_code
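
To score the saved outputs against the all-in-focus targets, something like the following works with the scikit-image version listed in the prerequisites. The directory layout and matching filenames below are assumptions for illustration; check where main.py actually writes its outputs:

    import os
    import cv2
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Hypothetical paths; adjust to the layout main.py actually produces.
    results_dir = "./results/"
    target_dir = "./dd_dp_datasets/dd_dp_dataset_canon/test_c/target/"

    psnr_vals, ssim_vals = [], []
    for name in sorted(os.listdir(results_dir)):
        out = cv2.imread(os.path.join(results_dir, name))
        gt = cv2.imread(os.path.join(target_dir, name))
        if out is None or gt is None:
            continue  # skip non-images and unmatched files
        psnr_vals.append(peak_signal_noise_ratio(gt, out))
        # multichannel=True matches scikit-image 0.16.x
        # (newer versions use channel_axis=2 instead)
        ssim_vals.append(structural_similarity(gt, out, multichannel=True))

    print(f"PSNR: {np.mean(psnr_vals):.2f} dB, SSIM: {np.mean(ssim_vals):.4f}")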

Recall that you might need to change the default argument values (e.g., paths and model name) to match your setup

Training

  • Download the DP defocus deblurring dataset [1] (link), or visit the project GitHub (link)

  • After downloading and generating the datasets, place them in the same directory, e.g., dd_dp_datasets

  • Run main.py in the rdpd_code directory as follows:

    python main.py --op_phase train --ms_edge_loss True --data_dir ./dd_dp_datasets/
    
    • --op_phase: operation phase, training or testing
    • --ms_edge_loss: whether to use our edge loss in addition to the typical MSE loss (see the sketch after this list)
    • --data_dir: path to the directory containing both datasets, i.e., dd_dp_dataset_canon and dd_dp_dataset_synth_vid
  • Other training options

    • --ms_edge_loss_weight_x: the weight of our edge loss in the vertical direction
    • --ms_edge_loss_weight_y: the weight of our edge loss in the horizontal direction
    • --patch_size: training patch size
    • --img_mini_b: image mini-batch size
    • --vid_mini_b: video mini-batch size
    • --num_frames: number of video frames
    • --epoch: number of training epochs
    • --lr: initial learning rate
    • --schedule_lr_rate: learning rate scheduler (after how many epochs to decrease the learning rate)
    • --dropout_rate: dropout rate of the convLSTM unit
  • The trained model and checkpoints will be saved in ModelCheckpoints after each epoch
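
For a sense of what --ms_edge_loss adds on top of the MSE term, an edge loss of this kind typically penalizes differences between image gradients, with a separate weight per direction, mirroring --ms_edge_loss_weight_x and --ms_edge_loss_weight_y. Below is a minimal single-scale sketch using Sobel gradients; the actual multi-scale edge loss is defined in main.py and may be formulated differently:

    import tensorflow as tf

    def mse_plus_edge_loss(weight_x=1.0, weight_y=1.0):
        """MSE plus a weighted gradient-difference (edge) term.

        Single-scale sketch for illustration only; RDPD's multi-scale
        edge loss in main.py may differ.
        """
        def loss(y_true, y_pred):
            mse = tf.reduce_mean(tf.square(y_true - y_pred))
            # tf.image.sobel_edges returns shape [..., 2]: index 0 is dy
            # (vertical gradient), index 1 is dx (horizontal gradient).
            g_true = tf.image.sobel_edges(y_true)
            g_pred = tf.image.sobel_edges(y_pred)
            edge_y = tf.reduce_mean(tf.square(g_true[..., 0] - g_pred[..., 0]))
            edge_x = tf.reduce_mean(tf.square(g_true[..., 1] - g_pred[..., 1]))
            return mse + weight_x * edge_x + weight_y * edge_y
        return loss

    # Usage, e.g.: model.compile(optimizer="adam", loss=mse_plus_edge_loss(1.0, 1.0))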

Contact

Should you have any questions or suggestions, please feel free to reach out:

Abdullah Abuolaim ([email protected])

  • ECCV'18 paper: Revisiting Autofocus for Smartphone Cameras   [project page]
  • WACV'20 paper: Online Lens Motion Smoothing for Video Autofocus   [project page]   [presentation]
  • ICCP'20 paper: Modeling Defocus-Disparity in Dual-Pixel Sensors   [github]   [presentation]
  • ECCV'20 paper: Defocus Deblurring Using Dual-Pixel Data   [project page]   [github]   [presentation]
  • CVPRW'21 paper: NTIRE 2021 Challenge for Defocus Deblurring Using Dual-pixel Images: Methods and Results   [pdf]   [presentation]
  • WACV'22 paper: Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning   [github]   [presentation]
  • WACVW'22 paper: Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels   [pdf]   [presentation]

References

[1] Abdullah Abuolaim and Michael S. Brown. Defocus Deblurring Using Dual-Pixel Data. In ECCV, 2020.
