Virtually remove a face mask to see what a person looks like underneath
Can you virtually remove a face mask to see what a person looks like underneath? Our Machine Learning team proves it’s possible with an image-inpainting ML solution. Here is exactly how our engineers approached the problem, from the preconditions to the implementation, results, and future improvements.
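To make the inpainting idea concrete, here is a minimal, illustrative sketch of the composition step such solutions rely on: a model predicts a full reconstructed face, and only the pixels covered by the mask are replaced. Everything here is a stand-in, not the project’s actual code; in particular, the mean-fill `predict` lambda below substitutes for a trained neural network.

```python
import numpy as np

def inpaint_masked_region(image: np.ndarray, mask: np.ndarray, predict) -> np.ndarray:
    """Compose an inpainted image: keep known pixels, fill masked ones.

    image:   float array in [0, 1], shape (H, W, C)
    mask:    float array, shape (H, W, 1); 1 where the face mask covers the image
    predict: callable returning a full reconstructed image
             (in a real solution, a trained inpainting model)
    """
    reconstruction = predict(image)  # model's guess for the whole face
    # Only the masked region is replaced; unmasked pixels stay untouched.
    return mask * reconstruction + (1.0 - mask) * image

# Toy usage: a "model" that fills with the image mean (a real model would be a CNN).
img = np.random.rand(8, 8, 3)
msk = np.zeros((8, 8, 1))
msk[4:, :, :] = 1.0  # pretend the lower half of the face is covered by a mask
out = inpaint_masked_region(img, msk, lambda x: np.full_like(x, x.mean()))
```

The key property of this composition is that the model can never corrupt pixels that were visible in the input; only the occluded region is synthesized.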
Check out the article for a more in-depth explanation.
Examples of results (input | expected output | actual output)
Results can be replicated by following these steps:

1. If you have a GPU available, keep `tensorflow-gpu==2.2.0` in the environment.yml file. If you don’t have a GPU, change `tensorflow==2.2.0` to `tensorflow==2.0.0` in the environment.yml file.
2. Create the Conda environment: `conda env create -f environment.yml`
3. Activate the environment: `conda activate mask2face`
4. Open `mask2face.ipynb` and follow the instructions there. If you don’t have the dataset yet, there is a cell in the notebook that will download it automatically.

You can configure the project using the `configuration.json` file. Some of the items are set up and should not be changed. However, changing some of the following items can be useful:
- `input_images_path`: Path to the dataset with images used as input for the DataGenerator. If you want to use a different dataset than the default one, set its path here.
- `train_data_path`: Path where training images are generated and where the training algorithm looks for training data.
- `test_data_path`: Path where testing images are generated.
- `train_image_count`: Number of training image pairs generated by the DataGenerator.
- `test_image_count`: Number of testing image pairs generated by the DataGenerator.
- `train_data_limit`: Number of training image pairs used for model training.
- `test_data_limit`: Number of testing image pairs used for model testing.

Once the configuration is in place, run `jupyter notebook` to start working with the project.
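For reference, a `configuration.json` using the keys above might look like the following. All values shown are illustrative placeholders, not the project’s defaults:

```json
{
  "input_images_path": "data/input_images",
  "train_data_path": "data/train",
  "test_data_path": "data/test",
  "train_image_count": 20000,
  "test_image_count": 2000,
  "train_data_limit": 20000,
  "test_data_limit": 2000
}
```

Keeping `train_data_limit` smaller than `train_image_count` lets you generate a large pool of image pairs once and experiment with smaller training subsets without regenerating data.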
If you’re considering our help, you may be interested in our other past work, like the custom AI solution we built for Cinnamon in just four months. And if you’re a fellow engineer, please feel free to reach out to us with any questions or to share your own results. We’re always happy to start a discussion.