
Learning Implicit Surface Light Fields


Conference Paper


Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
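The conditioned surface light field described in the abstract can be caricatured as a small network mapping a surface point, a viewing direction, and a point light's position and color to an RGB value. The sketch below is purely illustrative: the random-weight MLP, the concatenation-based conditioning, and all function names are assumptions for exposition, not the paper's actual architecture or training setup.

```python
import numpy as np

def init_mlp(rng, sizes):
    # Random weights for a tiny MLP; a stand-in for a trained network.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def surface_light_field(params, x, d, l_pos, l_col):
    """Map surface point x, view direction d, and a point light
    (position l_pos, RGB color l_col) to an RGB radiance value."""
    # Illustrative conditioning: concatenate all inputs into one vector.
    h = np.concatenate([x, d, l_pos, l_col])
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)          # ReLU hidden layers
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))  # sigmoid -> RGB in [0, 1]

rng = np.random.default_rng(0)
params = init_mlp(rng, [12, 64, 64, 3])  # input dims: 3 + 3 + 3 + 3
rgb = surface_light_field(params,
                          np.zeros(3),              # surface point
                          np.array([0., 0., 1.]),   # view direction
                          np.array([1., 1., 1.]),   # light position
                          np.ones(3))               # light color (white)
```

Because the light's position and color enter as network inputs, evaluating the same surface point under a different `l_pos` or `l_col` yields a different color, which is the mechanism that makes relighting (e.g. with environment maps, by summing over many such light samples) possible in principle.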

Author(s): Michael Oechsle and Michael Niemeyer and Christian Reiser and Lars Mescheder and Thilo Strauss and Andreas Geiger
Book Title: International Conference on 3D Vision (3DV)
Year: 2020

Department(s): Autonomous Vision
Bibtex Type: Conference Paper (inproceedings)

Links: pdf, Project Page


@inproceedings{oechsle2020learning,
  title = {Learning Implicit Surface Light Fields},
  author = {Oechsle, Michael and Niemeyer, Michael and Reiser, Christian and Mescheder, Lars and Strauss, Thilo and Geiger, Andreas},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2020}
}