Multi-view 3D reconstruction using neural rendering. Unofficial implementation of UNISURF, VolSDF, NeuS and more.
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
- VolSDF: Volume Rendering of Neural Implicit Surfaces
- and more...
Trained with VolSDF@200k, with NeRF++ as background.
Above: :rocket: volume rendering of the scene (novel view synthesis)
Below: mesh extracted from the learned implicit shape
Full-res video (35 MiB, 15 s @ 576x768 @ 30 fps): [click here]
Trained with NeuS @300k, with NeRF++ as background.
The overall topic of the implemented papers is multi-view surface and appearance reconstruction purely from posed images.
| What's known (ground truth / supervision) | What's learned |
| --- | --- |
| ONLY multi-view posed RGB images (no masks, no depths, no GT meshes or point clouds, nothing) | 3D surface / shape<br>3D appearance |
From one perspective, the implemented papers introduce volume rendering to 3D implicit surfaces, so that views can be rendered differentiably and scenes reconstructed with a purely photometric reconstruction loss.
| Rendering method in previous surface reconstruction approaches | Rendering method in this repo (when training) |
| --- | --- |
| Surface rendering | Volume rendering |
The benefit of using volume rendering is that it diffuses gradients widely in space: even without mask supervision, it can efficiently learn a roughly correct shape very early in training, avoiding the bad local minima that surface rendering often falls into even with mask supervision. (A minimal rendering sketch is given below.)
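For reference, here is a minimal, hedged sketch of differentiable volume rendering driven by a pure photometric loss. The names `density_fn`, `color_fn` and the near/far bounds are illustrative placeholders, not this repo's actual API (see [docs/usage.md] for the real entry points).

```python
import torch

def render_rays(density_fn, color_fn, rays_o, rays_d, near=0.5, far=3.5, n_samples=64):
    """Differentiable volume rendering along a batch of rays (illustrative only).

    density_fn(pts)     -> (R, S)    non-negative volume density
    color_fn(pts, dirs) -> (R, S, 3) view-dependent radiance
    rays_o, rays_d      :  (R, 3)    ray origins and unit directions
    """
    t = torch.linspace(near, far, n_samples)                            # (S,) depth samples
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]    # (R, S, 3)
    dirs = rays_d[:, None, :].expand_as(pts)

    sigma = density_fn(pts)                                             # (R, S)
    rgb = color_fn(pts, dirs)                                           # (R, S, 3)

    delta = torch.cat([t[1:] - t[:-1], t.new_full((1,), 1e10)])         # (S,) sample spacing
    alpha = 1.0 - torch.exp(-sigma * delta)                             # per-sample opacity
    # transmittance = probability that the ray reaches each sample unoccluded
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans                                             # (R, S)
    return (weights[..., None] * rgb).sum(dim=-2)                       # (R, 3) rendered color

# Training then reduces to a photometric loss against the posed RGB images:
#   pred = render_rays(density_fn, color_fn, rays_o, rays_d)
#   loss = ((pred - gt_rgb) ** 2).mean()
```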
| config: [click me] | @0 iters | @3k iters (16 min) | @10k iters (1 h) | @200k iters (18.5 h) |
| --- | --- | --- | --- | --- |
| Mesh extracted from the learned shape | | | | |
| View rendered from the learned appearance | | | | |
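The mesh rows in the table above are produced by extracting the level set of the learned implicit surface. Below is a minimal, hedged sketch of how this can be done for the SDF-based variants; the function name and grid parameters are illustrative, and the repo's own extraction tool may differ.

```python
import numpy as np
import torch
import trimesh
from skimage import measure

@torch.no_grad()
def extract_mesh(sdf_net, bound=1.0, resolution=256):
    """Run marching cubes on the zero level set of a learned SDF (illustrative only).

    Assumes sdf_net(pts) maps (N, 3) points to (N,) signed distance values.
    """
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing='ij'), axis=-1)          # (R, R, R, 3)
    pts = torch.from_numpy(grid.reshape(-1, 3)).float()

    sdf = torch.cat([sdf_net(chunk) for chunk in torch.split(pts, 64 ** 3)])  # chunked queries
    sdf = sdf.reshape(resolution, resolution, resolution).cpu().numpy()

    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
    verts = verts / (resolution - 1) * 2 * bound - bound                      # grid indices -> world coords
    return trimesh.Trimesh(verts, faces, vertex_normals=normals)
```

For UNISURF's occupancy representation the extracted level set would be 0.5 instead of 0.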
From another perspective, they replace NeRF's original shape representation (volume density $\sigma$) with a 3D implicit surface model whose iso-surface defines the spatial surface.
| Shape representation in NeRF | Shape representation in this repo |
| --- | --- |
| Volume density | Occupancy net (UNISURF)<br>DeepSDF (VolSDF / NeuS) |
The biggest disadvantage of NeRF's shape representation is that it treats objects as volumetric clouds, which does not guarantee an exact surface, since there is no constraint on the learned density.
Representing shapes with implicit surfaces forces the volume density to be associated with an exact surface.
What's more, the association (i.e., the mapping from implicit surface value to volume density) can be controlled, either manually or by learnable parameters, making the shape representation more surface-like or more volume-like to suit different stages of training; a concrete example of such a mapping is sketched below.
Demonstration of controllable mappings from SDF value to volume density value (VolSDF).
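As a concrete example, VolSDF defines this mapping as $\sigma(\mathbf{x}) = \alpha \, \Psi_\beta(-d_\Omega(\mathbf{x}))$, where $\Psi_\beta$ is the CDF of a zero-mean Laplace distribution with scale $\beta$. A minimal sketch (parameter names are illustrative):

```python
import torch

def sdf_to_density(sdf, alpha, beta):
    """VolSDF-style mapping sigma = alpha * Psi_beta(-sdf).

    alpha sets the maximum density; beta controls how tightly the density
    concentrates around the zero level set (smaller beta -> more surface-like).
    """
    return alpha * torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),   # inside the surface: density saturates towards alpha
        0.5 * torch.exp(-sdf / beta),        # outside: density decays exponentially with distance
    )
```

NeuS uses a logistic CDF with a learnable sharpness instead, and UNISURF uses occupancy directly as a per-sample alpha; the common idea is a level-set-anchored density whose sharpness is controllable.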
Hence, the training scheme of the approaches in this repo can be roughly divided as follows (these are not discrete stages; the process progresses continuously):
You can see that as the controlling parameter lets narrower and narrower neighborhoods of points contribute to volume rendering, the rendered result becomes almost equivalent to surface rendering. This is proven in UNISURF, and also shown experimentally in [docs/usage.md#use surface rendering instead of volume rendering]; the toy example after this paragraph illustrates the effect numerically.
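A toy demonstration of this limiting behavior, assuming the VolSDF-style Laplace mapping above and a hypothetical planar scene: as $\beta$ shrinks, the volume rendering weights collapse onto the surface sample.

```python
import torch

# Single ray hitting a plane at t = 1.0 (hypothetical scene).
t = torch.linspace(0.0, 2.0, 512)
sdf = 1.0 - t                              # signed distance to the plane along the ray
delta = t[1] - t[0]

for beta in (0.5, 0.1, 0.02):
    # VolSDF-style Laplace mapping with alpha = 1 / beta
    sigma = (1.0 / beta) * torch.where(
        sdf <= 0, 1.0 - 0.5 * torch.exp(sdf / beta), 0.5 * torch.exp(-sdf / beta))
    alpha_i = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha_i + 1e-10]), dim=0)[:-1]
    w = alpha_i * trans                    # volume rendering weights
    near_surface = w[(t - 1.0).abs() < 0.05].sum().item()
    print(f"beta={beta:<4}: weight mass within +-0.05 of the surface = {near_surface:.3f}")
```

In the limit, only the first surface intersection contributes, which is exactly surface rendering.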
Currently, the biggest limitation of the methods in this repo is that view-dependent reflection is baked into the object's surface, as in IDR, NeRF, and so on. In other words, if you place the learned object into a new scene with different ambient lighting, the rendering will not account for the new scene's light conditions and will keep the reflections of the original training scene.
However, now that implicit surfaces have been combined with NeRF, decomposing ambient light and materials becomes much easier for NeRF-based frameworks, since shapes are represented by an underlying neural surface instead of raw volume densities.
The trained models are stored in [GoogleDrive] / [Baidu, code: reco].
For more visualizations of trained results, see [docs/trained_models_results.md].
See [docs/usage.md] for detailed usage documentation.
[docs/neus.md] Notes on the unbiased property of NeuS.
[docs/volsdf.md] Notes on the error bound and up-sampling algorithm of VolSDF.
[click here] My personal notes (in Chinese)
NeuS
VolSDF
UNISURF
general
@article{oechsle2021unisurf,
title={Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction},
author={Oechsle, Michael and Peng, Songyou and Geiger, Andreas},
journal={arXiv preprint arXiv:2104.10078},
year={2021}
}
@article{wang2021neus,
title={NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction},
author={Wang, Peng and Liu, Lingjie and Liu, Yuan and Theobalt, Christian and Komura, Taku and Wang, Wenping},
journal={arXiv preprint arXiv:2106.10689},
year={2021}
}
@article{yariv2021volume,
title={Volume Rendering of Neural Implicit Surfaces},
author={Yariv, Lior and Gu, Jiatao and Kasten, Yoni and Lipman, Yaron},
journal={arXiv preprint arXiv:2106.12052},
year={2021}
}
@article{kaizhang2020,
author = {Kai Zhang and Gernot Riegler and Noah Snavely and Vladlen Koltun},
title = {NeRF++: Analyzing and Improving Neural Radiance Fields},
journal = {arXiv preprint arXiv:2010.07492},
year = {2020},
}
@inproceedings{sitzmann2019siren,
author = {Sitzmann, Vincent and Martel, Julien N.P. and Bergman, Alexander W. and Lindell, David B. and Wetzstein, Gordon},
title = {Implicit Neural Representations with Periodic Activation Functions},
booktitle = {Proc. NeurIPS},
year={2020}
}
This repository modifies code from or draws inspiration from:
My other repo on NeRF--: https://github.com/ventusff/improved-nerfmm
https://github.com/autonomousvision/differentiable_volumetric_rendering
Feel free to submit issues or contact Jianfei Guo (郭建非) at guojianfei [at] pjlab.org.cn.
PRs are also very welcome :smiley:
🎉🎉🎉
On behalf of the Intelligent Transportation and Autonomous Driving Group at Shanghai AI Lab, we are hiring researchers, engineers, and full-time interns for Computer Graphics and 3D Rendering Algorithms (based in Shanghai).
The Intelligent Transportation and Autonomous Driving team at Shanghai AI Laboratory is recruiting for the positions of "Graphics Algorithm Researcher" and "3D Scene Generation Researcher". There are plenty of openings for interns, new graduates, and experienced hires.
If you are interested in either of the above positions, please send your resume to shibotian [at] pjlab.org.cn or guojianfei [at] pjlab.org.cn, and make sure the subject line contains the word 「应聘」 (job application). Thank you.
Shanghai AI Laboratory is a new type of research institution in China's artificial intelligence field, co-founded by internationally renowned AI scholars including Tang Xiaoou, Yao Qizhi (Andrew Chi-Chih Yao), and Chen Jie, and officially unveiled at the World Artificial Intelligence Conference in July 2020.
Its research teams are built from leading scientists under new organizational mechanisms. The laboratory carries out strategic, original, and forward-looking research to break through fundamental AI theory and key core technologies, aiming to build a large, integrated research base that is "breakthrough-oriented, leading, and platform-based", to support the leapfrog development of China's AI industry, and to become a world-class AI laboratory and a globally recognized source of original AI theory and technology.
The laboratory has signed strategic cooperation agreements with well-known universities including Shanghai Jiao Tong University, Fudan University, Zhejiang University, the University of Science and Technology of China, The Chinese University of Hong Kong, Tongji University, and East China Normal University, establishing dual-appointment and mutual title-recognition mechanisms for researchers, pooling domestic and international resources, and exploring innovative evaluation systems together with internationally competitive compensation and support.
Official website of Shanghai AI Laboratory: https://www.shlab.org.cn/
Job openings of the intelligent transportation and autonomous driving team: https://www.shlab.org.cn/news/5443060