DeepPanoramaLighting

Deep Lighting Environment Map Estimation from Spherical Panoramas (CVPRW20)


Code accompanying the paper "Deep Lighting Environment Map Estimation from Spherical Panoramas", CVPRW 2020


TODO

  • Pre-trained model.
  • Inference code.

Code and Trained Models

This repository contains inference code and models for the paper Deep Lighting Environment Map Estimation from Spherical Panoramas (link).

Requirements

The code is based on PyTorch and has been tested with Python 3.7 and CUDA 10.0. We recommend setting up a virtual environment (follow the virtualenv documentation) for installing PyTorch and the other necessary Python packages. Once your environment is set up and activated, install the necessary packages:

pip install torch===1.2.0 torchvision===0.4.0 -f https://download.pytorch.org/whl/torch_stable.html

Inference

You can download the pre-trained models from here; the archive includes the pre-trained LDR-to-HDR autoencoder and the lighting encoder. Put the extracted files under models and run:

python inference.py

The following flags specify the required parameters:

  • --input_path: Specifies the path of the input image.
  • --out_path: Specifies the path of the output file.
  • --deringing: Enables or disables the low-pass deringing filter applied to the predicted SH coefficients.
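For intuition about the --deringing flag: spherical-harmonics lighting truncated at a low order can exhibit ringing (negative lobes), and a common remedy is to attenuate higher SH bands with a low-pass window. The sketch below is only an illustration of that idea, not the repository's actual filter; the cosine window shape, the dering_sh name, and the (bands², channels) coefficient layout are assumptions.

```python
import numpy as np

def dering_sh(coeffs, bands=3):
    """Low-pass windowing of SH coefficients to reduce ringing.

    coeffs: array of shape (bands**2, channels), ordered band by band
    (band l contributes its 2l+1 coefficients contiguously).
    The window shape here (raised cosine over band index) is an
    illustrative choice, not the filter used in this repository.
    """
    coeffs = np.asarray(coeffs, dtype=np.float64)
    assert coeffs.shape[0] == bands ** 2
    out = np.empty_like(coeffs)
    idx = 0
    for l in range(bands):
        # Raised-cosine window: 1.0 at band 0, decaying for higher bands.
        w = 0.5 * (1.0 + np.cos(np.pi * l / bands))
        n = 2 * l + 1  # number of coefficients in band l
        out[idx:idx + n] = w * coeffs[idx:idx + n]
        idx += n
    return out
```

With bands=3, band 0 is kept intact while bands 1 and 2 are scaled by 0.75 and 0.25 respectively, which suppresses the high-frequency terms most responsible for ringing.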
