
Automated-Objects-Removal-Inpainter

Demo and Docker image on Replicate

Automated Objects Removal Inpainter is a project that combines semantic segmentation and EdgeConnect architectures, with minor changes, in order to remove specified objects from photos. For semantic segmentation, code from PyTorch (torchvision) has been adapted, whereas for EdgeConnect, the code has been adapted from https://github.com/knazeri/edge-connect.

This project can remove objects belonging to any of 20 different classes. It can be used as a photo editing tool as well as for data augmentation.

Python 3.8.5 and PyTorch 1.5.1 have been used in this project.

How does it work?

A semantic segmentation model (DeepLabV3 or FCN with a ResNet-101 backbone) has been combined with EdgeConnect. A pretrained segmentation network is used for object segmentation (generating a mask around the detected objects), and its output is fed to an EdgeConnect network along with the input image with the masked portion removed. EdgeConnect uses a two-stage adversarial architecture in which the first stage is an edge generator, followed by an image completion network. The EdgeConnect paper can be found here and the code in this repo
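
As a rough illustration of the first stage only (a minimal sketch, not the project's actual code, assuming torchvision's pretrained deeplabv3_resnet101 and the standard 21-class Pascal VOC indexing), the snippet below builds a binary mask for the classes to be removed and zeroes out that region; the masked image and the mask are what the EdgeConnect stage then fills in:

import torch
from torchvision import models, transforms
from PIL import Image

def build_mask(image_path, remove_classes=(3, 15), device="cpu"):
    # Pretrained 21-class Pascal VOC segmentation model from torchvision
    model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval().to(device)
    img = Image.open(image_path).convert("RGB").resize((256, 256))
    x = transforms.functional.to_tensor(img)
    x = transforms.functional.normalize(x, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    with torch.no_grad():
        labels = model(x.unsqueeze(0).to(device))["out"][0].argmax(0).cpu()  # per-pixel class ids
    mask = torch.zeros_like(labels, dtype=torch.bool)
    for c in remove_classes:          # e.g. 3 = bird, 15 = person
        mask |= labels == c
    # Zero out the region to be removed; the masked image and mask go to the inpainting stage
    masked_img = transforms.functional.to_tensor(img) * (~mask).float()
    return masked_img, mask.float()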

Prerequisite

  • python 3
  • pytorch >= 1.0.1
  • NVIDIA GPU + CUDA cuDNN (optional)

Installation

  • clone this repo
git clone https://github.com/sujaykhandekar/Automated-objects-removal-inpainter.git
cd Automated-objects-removal-inpainter

or alternatively download the zip file.

  • install pytorch with this command
conda install pytorch==1.5.1 torchvision==0.6.1 -c pytorch
  • install other python requirements using this command
pip install -r requirements.txt
  • Download one of the three pretrained EdgeConnect models and copy it into the ./checkpoints directory:
    Places2 (option 1) CelebA (option 2) Paris-street-view (option 3)

or alternatively you can use this command:

bash ./scripts/download_model.sh
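
After installation, a quick sanity check of the environment (a minimal sketch; the version numbers shown are the ones the project was developed with, and CUDA is optional):

import torch, torchvision
print(torch.__version__, torchvision.__version__)    # expected around 1.5.1 / 0.6.1
print("CUDA available:", torch.cuda.is_available())  # False is fine; the CPU path still works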

Prediction/Test

For a quick prediction you can run the command below. If you don't have CUDA/a GPU, please run the second command further down instead.

python test.py --input ./examples/my_small_data --output ./checkpoints/resultsfinal --remove 3 15

It will take the sample images in the ./examples/my_small_data directory and produce results in the directory ./checkpoints/resultsfinal. You can replace these input/output directories with your desired ones. The numbers after --remove specify the objects to be removed from the images; the command above will remove 3 (bird) and 15 (person). Check segmentation-classes.txt for all removal options along with their numbers.
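
For reference, the indices follow the standard 21-class Pascal VOC ordering used by the torchvision segmentation models; segmentation-classes.txt in the repo is the authoritative list, but a typical mapping looks like this:

# Pascal VOC class indices used by the torchvision segmentation models
# (0 is background); check segmentation-classes.txt for the exact list shipped with this repo.
VOC_CLASSES = {
    1: "aeroplane",    2: "bicycle",  3: "bird",   4: "boat",      5: "bottle",
    6: "bus",          7: "car",      8: "cat",    9: "chair",    10: "cow",
    11: "diningtable", 12: "dog",    13: "horse", 14: "motorbike", 15: "person",
    16: "pottedplant", 17: "sheep",  18: "sofa",  19: "train",     20: "tvmonitor",
}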

Output images will all be 256x256. It takes around 10 minutes for 1000 images on an NVIDIA GeForce GTX 1650.

For better quality but slower runtime you can use this command:

python test.py --input ./examples/my_small_data --output ./checkpoints/resultsfinal --remove 3 15 --cpu yes

It will run the segmentation model on the CPU, which is about 5 times slower than on the GPU (the default). For other options, including choosing a different segmentation model and changing EdgeConnect parameters, make the corresponding modifications in the ./checkpoints/config.yml file.
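
The exact keys depend on the checkpoint you downloaded, but a quick way to see what is configurable (an illustrative sketch, assuming PyYAML is available in the environment):

import yaml
with open("./checkpoints/config.yml") as f:
    cfg = yaml.safe_load(f)
for key, value in sorted(cfg.items()):
    print(f"{key}: {value}")   # e.g. segmentation backbone, EdgeConnect hyperparameters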

Training

For training your own segmentation model you can refer to this repo and replace ./src/segmentor_fcn.py with your model.
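
If you only need a different torchvision backbone rather than a fully custom model, the swap is essentially one line (a hypothetical sketch; the actual loading code lives in ./src/segmentor_fcn.py and may differ):

from torchvision import models

# Both backbones mentioned in this project output 21-class Pascal VOC predictions,
# so the rest of the pipeline stays unchanged.
seg_model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()
# seg_model = models.segmentation.fcn_resnet101(pretrained=True).eval()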

For training the EdgeConnect model, please refer to the original EdgeConnect repo. After training, you can copy your model weights into ./checkpoints/.

Some results

Next Steps

  • The pretrained EdgeConnect models used in this project are trained on 256x256 images. To make output images the same size as the input, two approaches can be used: you can train your own EdgeConnect model on bigger images, or you can create 256x256 subimages for every object detected in the image and then merge them back together after passing through EdgeConnect to reconstruct the original-sized image (a rough sketch of this crop-and-merge idea follows this list). A similar approach has been used in this repo
  • To detect objects not present in the segmentation classes, you can train your own segmentation model, or you can use pretrained segmentation models from this repo, which has 150 different categories available.
  • It is also possible to combine OpenCV's feature matching and edge prediction from EdgeConnect to highlight and create masks for relevant objects based on a single mask created by the user. I may try this part myself.
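
A rough sketch of the crop-and-merge idea from the first point above (not implemented in this repo; inpaint_256 is a hypothetical helper wrapping the 256x256 inpainting step, and the image is assumed to be at least 256x256):

import numpy as np

def inpaint_full_resolution(image, mask, inpaint_256, patch=256):
    """image: HxWx3 uint8 array, mask: HxW bool array of pixels to remove,
    inpaint_256: function taking (patch_img, patch_mask) -> inpainted 256x256 patch."""
    result = image.copy()
    ys, xs = np.where(mask)
    if len(ys) == 0:
        return result
    # One window centred on the object; real code would tile multiple windows per object.
    cy, cx = int(ys.mean()), int(xs.mean())
    top = np.clip(cy - patch // 2, 0, image.shape[0] - patch)
    left = np.clip(cx - patch // 2, 0, image.shape[1] - patch)
    window = (slice(top, top + patch), slice(left, left + patch))
    # Only the cropped window is inpainted; the rest of the full-resolution image is untouched.
    result[window] = inpaint_256(image[window], mask[window])
    return result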

License

Licensed under a Creative Commons Attribution-NonCommercial 4.0 International license.

Except where otherwise noted, this content is published under a CC BY-NC license, which means that you can copy, remix, transform and build upon the content as long as you do not use the material for commercial purposes, give appropriate credit, and provide a link to the license.

Citation

@article{nazeri2019edgeconnect,
  title={EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning},
  author={Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
  journal={arXiv preprint},
  year={2019},
}

@InProceedings{Nazeri_2019_ICCV,
  title = {EdgeConnect: Structure Guided Image Inpainting using Edge Prediction},
  author = {Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV) Workshops},
  month = {Oct},
  year = {2019}
}