Official PyTorch implementation of Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency (ECCV 2022). Check out our webpage for video results!
If you find this code useful, please cite the paper:
```
@inproceedings{monnier2022unicorn,
  title = {{Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency}},
  author = {Monnier, Tom and Fisher, Matthew and Efros, Alexei A and Aubry, Mathieu},
  booktitle = {{ECCV}},
  year = {2022},
}
```
```shell
conda env create -f environment.yml
conda activate unicorn
git clone https://github.com/facebookresearch/visdom
cd visdom && pip install -e .
```
```shell
bash scripts/download_data.sh
```

This command will download one of the following datasets:

- **ShapeNet NMR**: paper / NMR paper / dataset (33GB, thanks to the DVR team for hosting the data)
- **CUB-200-2011**: paper / webpage / dataset (1GB)
- **Pascal3D+ Cars**: paper / webpage (with FTP download link, 7.5GB) / UCMR annotations (bbox + train/test split, thanks to the UCMR team for hosting them) / UNICORN annotations (3D shape ground truth)
- **CompCars**: paper / webpage / dataset (12GB, thanks to the GIRAFFE team for hosting the data)
- **LSUN**: paper / webpage / horse dataset (69GB) / moto dataset (42GB)

```shell
bash scripts/download_model.sh
```
We provide a small (200MB) and a big (600MB) version of each pretrained model (see the training section for details). The command will download one of the following models:

- `car` trained on CompCars: `car.pkl` / `car_big.pkl`
- `car_p3d` trained on Pascal3D+: `car_p3d.pkl` / `car_p3d_big.pkl`
- `bird` trained on CUB: `bird.pkl` / `bird_big.pkl`
- `moto` trained on LSUN Motorbike: `moto.pkl` / `moto_big.pkl`
- `horse` trained on LSUN Horse: `horse.pkl` / `horse_big.pkl`
- `sn_*` trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
- `sn_big_*` trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
NB: it may happen that `gdown` hangs; if so, you can download the models manually with the Google Drive links and move them to the `models` folder.
You first need to download the car model (see above), then launch:
```shell
cuda=gpu_id model=car_big.pkl input=demo ./scripts/reconstruct.sh
```

where `gpu_id` is a target CUDA device id, `car_big.pkl` corresponds to a pretrained model, and `demo` is a folder containing the target images. Reconstruction results (.obj + gif) will be saved in a folder `demo_rec`.
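The output-folder convention above can be sketched as follows, assuming only what the text states: results for an input folder `X` land in `X_rec`.

```shell
# Derive the output folder name from the input folder name, as described
# above: reconstructions for "demo" are written to "demo_rec".
input=demo
out_dir="${input}_rec"
echo "$out_dir"
```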
We also provide an interactive demo to reconstruct cars from single images.
To launch a training from scratch, run:
```shell
cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh
```

where `gpu_id` is a device id, `filename.yml` is a config in the `configs` folder, and `run_tag` is a tag for the experiment. Results are saved at `runs/${DATASET}/${DATE}_${run_tag}`, where `DATASET` is the dataset name specified in `filename.yml` and `DATE` is the current date in `mmdd` format.
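The run-directory naming can be sketched as follows; `compcars` and `my_experiment` are placeholder values, not taken from a real config.

```shell
# Assemble the results path runs/${DATASET}/${DATE}_${run_tag} with the
# mmdd date format described above (values here are placeholders).
DATASET=compcars
run_tag=my_experiment
DATE=$(date +%m%d)
RUN_DIR="runs/${DATASET}/${DATE}_${run_tag}"
echo "$RUN_DIR"
```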
Available configs are:

- `sn/*.yml`, `sn_big/*.yml` for each ShapeNet category
- `car.yml`, `car_big.yml` for the CompCars dataset
- `cub.yml`, `cub_big.yml` for the CUB-200 dataset
- `horse.yml`, `horse_big.yml` for the LSUN Horse dataset
- `moto.yml`, `moto_big.yml` for the LSUN Motorbike dataset
- `p3d_car.yml`, `p3d_car_big.yml` for the Pascal3D+ Car dataset

:exclamation: NB: we recommend always checking the results after the first stage. In particular, for categories like birds or horses, learning can fall into bad minima with poor prototypical shapes. If so, relaunch with a different seed.
We provide two configs to train a small and a big version of the model. Both versions give good results; the main benefit of the bigger model is slightly more detailed textures. The architecture differences are:
For faster experiments and prototyping, we recommend training the small version.
On a single GPU, the approximate training times are:
A model is evaluated at the end of training. To evaluate a pretrained model (e.g. `sn_big_airplane.pkl`):

1. move the model to a run folder and rename it `model.pkl` (e.g. in `runs/shapenet_nmr/airplane_big`)
2. point the config to that run tag to resume from (e.g. `resume: airplane_big` in `airplane.yml`)
3. launch the evaluation with: `cuda=gpu_id config=sn_big/airplane.yml tag=airplane_big_eval ./scripts/pipeline.sh`
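The file-moving part of these steps can be sketched as shell commands; here the weight file is a locally created stand-in (in practice, use the downloaded `sn_big_airplane.pkl`):

```shell
# Place the pretrained weights in a run folder named after the tag to
# resume from, renamed to model.pkl.
mkdir -p models runs/shapenet_nmr/airplane_big
touch models/sn_big_airplane.pkl        # stand-in for the real downloaded weights
cp models/sn_big_airplane.pkl runs/shapenet_nmr/airplane_big/model.pkl
# The next step is a config edit: set "resume: airplane_big" in the config.
ls runs/shapenet_nmr/airplane_big
```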
airplane | bench | cabinet | car | chair | display | lamp | phone | rifle | sofa | speaker | table | vessel | mean |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.110 | 0.159 | 0.137 | 0.168 | 0.253 | 0.220 | 0.523 | 0.127 | 0.097 | 0.192 | 0.224 | 0.243 | 0.155 | 0.201 |
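As a quick sanity check, the reported mean can be recomputed from the per-category numbers in the table above:

```shell
# Average the 13 per-category values from the table above.
vals="0.110 0.159 0.137 0.168 0.253 0.220 0.523 0.127 0.097 0.192 0.224 0.243 0.155"
mean=$(echo "$vals" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.3f", s / NF }')
echo "$mean"   # matches the reported mean of 0.201
```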
For CUB, the built-in evaluation included in the training pipeline is Mask-IoU. To evaluate PCK, run:
```shell
cuda=gpu_id tag=run_tag ./scripts/kp_eval.sh
```
If you want to learn a model for a custom object category, here are the key steps:

1. create a `custom_name` folder inside the `datasets` folder
2. write a config `custom.yml` (or `custom_big.yml`) in the `configs` folder: this includes changing the dataset name to `custom_name` and setting all training milestones
3. launch training with: `cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh`
If you like this project, check out related works from our group: