PixHt-Lab: Pixel Height Based Light Effect Generation for Image
Compositing
- URL: http://arxiv.org/abs/2303.00137v1
- Date: Tue, 28 Feb 2023 23:52:01 GMT
- Title: PixHt-Lab: Pixel Height Based Light Effect Generation for Image
Compositing
- Authors: Yichen Sheng, Jianming Zhang, Julien Philip, Yannick Hold-Geoffroy,
Xin Sun, HE Zhang, Lu Ling, Bedrich Benes
- Abstract summary: Lighting effects such as shadows or reflections are key in making synthetic images realistic and visually appealing.
To generate such effects, traditional computer graphics uses a physically-based renderer along with 3D geometry.
Recent deep learning-based approaches introduced a pixel height representation to generate soft shadows and reflections.
We introduce PixHt-Lab, a system leveraging an explicit mapping from pixel height representation to 3D space.
- Score: 34.76980642388534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lighting effects such as shadows or reflections are key in making synthetic
images realistic and visually appealing. To generate such effects, traditional
computer graphics uses a physically-based renderer along with 3D geometry. To
compensate for the lack of geometry in 2D image compositing, recent deep
learning-based approaches introduced a pixel height representation to generate
soft shadows and reflections. However, the lack of geometry limits the quality
of the generated soft shadows and constrains reflections to pure specular ones.
We introduce PixHt-Lab, a system leveraging an explicit mapping from pixel
height representation to 3D space. Using this mapping, PixHt-Lab reconstructs
both the cutout and background geometry and renders realistic, diverse,
lighting effects for image compositing. Given a surface with physically-based
materials, we can render reflections with varying glossiness. To generate more
realistic soft shadows, we further propose to use 3D-aware buffer channels to
guide a neural renderer. Both quantitative and qualitative evaluations
demonstrate that PixHt-Lab significantly improves soft shadow generation.
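The core idea is the explicit mapping from pixel height to 3D. As a rough illustration of how such a mapping can work, the sketch below unprojects a pixel and its pixel height into a 3D point, assuming a pinhole camera mounted at a known height above a flat ground plane and looking parallel to it. The camera model, function name, and parameters are illustrative assumptions for exposition, not PixHt-Lab's exact formulation.

```python
import numpy as np

def unproject_pixel_height(u, v, h_px, f, cx, cy, cam_height):
    """Map a pixel (u, v) with pixel height h_px to a camera-frame 3D point.

    Illustrative assumptions: pinhole camera with focal length f (pixels)
    and principal point (cx, cy), placed cam_height above a flat ground
    plane and looking parallel to it; image v grows downward.
    """
    # The ground contact of (u, v) lies h_px pixels below it in the image.
    v_ground = v + h_px
    if v_ground <= cy:
        raise ValueError("ground contact must lie below the horizon (v = cy)")
    # A ground point at depth z projects to v_ground = cy + f * cam_height / z.
    z = f * cam_height / (v_ground - cy)
    # Pixel height relates to world height by h_px = f * world_height / z.
    world_height = h_px * z / f
    x = (u - cx) * z / f
    return np.array([x, world_height, z])  # lateral, up, depth
```

Applied per pixel of the cutout and background pixel height maps, this kind of unprojection yields the geometry on which glossy reflections and 3D-aware buffer channels (e.g., positions and normals) can be computed.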
Related papers
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z) - Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - 3D Scene Creation and Rendering via Rough Meshes: A Lighting Transfer Avenue [49.62477229140788]
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering.
We propose a lighting transfer network (LighTNet) to bridge NFR and PBR, such that they can benefit from each other.
arXiv Detail & Related papers (2022-11-27T13:31:00Z) - Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose (see the shadow-projection sketch after this list).
arXiv Detail & Related papers (2022-07-12T08:29:51Z) - AvatarMe++: Facial Shape and BRDF Inference with Photorealistic
Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z) - Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.