High-Res Facial Appearance Capture from Polarized Smartphone Images
- URL: http://arxiv.org/abs/2212.01160v1
- Date: Fri, 2 Dec 2022 13:34:56 GMT
- Title: High-Res Facial Appearance Capture from Polarized Smartphone Images
- Authors: Dejan Azinović, Olivier Maury, Christophe Hery, Matthias Nießner
and Justus Thies
- Abstract summary: We propose a novel method for high-quality facial texture reconstruction from RGB images using a single smartphone.
We record two short sequences in a dark environment under flash illumination with different light polarization using the modified smartphone.
We then exploit the camera and light co-location within a differentiable renderer to optimize the facial textures using an analysis-by-synthesis approach.
- Score: 11.885559856133339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for high-quality facial texture reconstruction from
RGB images using a novel capturing routine based on a single smartphone which
we equip with an inexpensive polarization foil. Specifically, we turn the
flashlight into a polarized light source and add a polarization filter on top
of the camera. Leveraging this setup, we capture the face of a subject with
cross-polarized and parallel-polarized light. For each subject, we record two
short sequences in a dark environment under flash illumination with different
light polarization using the modified smartphone. Based on these observations,
we reconstruct an explicit surface mesh of the face using structure from
motion. We then exploit the camera and light co-location within a
differentiable renderer to optimize the facial textures using an
analysis-by-synthesis approach. Our method optimizes for high-resolution normal
textures, diffuse albedo, and specular albedo using a coarse-to-fine
optimization scheme. We show that the optimized textures can be used in a
standard rendering pipeline to synthesize high-quality photo-realistic 3D
digital humans in novel environments.
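The capture-and-optimize idea in the abstract can be illustrated with a toy sketch. This is a minimal, hypothetical example, not the authors' code: it assumes the cross-polarized image observes only the diffuse term, the parallel-polarized image additionally contains a specular term, and per-pixel shading is known, then recovers diffuse and specular components by gradient descent on a photometric loss, in the spirit of analysis-by-synthesis.

```python
import numpy as np

# Toy sketch (hypothetical): cross-polarized flash images suppress specular
# reflection, so they constrain the diffuse term; parallel-polarized images
# contain diffuse + specular. With camera and flash co-located, the toy
# forward model per pixel is:
#   cross    = diffuse * shading
#   parallel = diffuse * shading + specular

rng = np.random.default_rng(0)
shading = 0.2 + 0.8 * rng.random(64)      # assumed-known per-pixel shading
true_diffuse = rng.random(64)
true_specular = 0.3 * rng.random(64)

cross = true_diffuse * shading            # "observed" cross-polarized image
parallel = cross + true_specular          # "observed" parallel-polarized image

# Analysis-by-synthesis: render with the current diffuse estimate, compare
# against the cross-polarized observation, and descend the photometric loss
# L = sum((diffuse * shading - cross)^2). Gradient wrt diffuse is
# 2 * shading * residual; step size 0.5 gives the update below.
diffuse = np.full(64, 0.5)
for _ in range(500):
    resid = diffuse * shading - cross
    diffuse = diffuse - shading * resid

# The specular component is what the parallel-polarized image adds on top
# of the re-rendered diffuse term.
specular = parallel - diffuse * shading
```

In the paper, the same principle is applied per texel with a full differentiable renderer and a coarse-to-fine schedule; the sketch only shows why the two polarization states decouple the diffuse and specular unknowns.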
Related papers
- LightHeadEd: Relightable & Editable Head Avatars from a Smartphone [30.268643915885413]
We present a novel, cost-effective approach for creating high-quality relightable head avatars using only a smartphone equipped with polaroid filters.
Our approach involves simultaneously capturing cross-polarized and parallel-polarized video streams in a dark room with a single point-light source.
We introduce a hybrid representation that embeds 2D Gaussians in the UV space of a parametric head model, facilitating efficient real-time rendering while preserving high-fidelity geometric details.
arXiv Detail & Related papers (2025-04-13T17:51:56Z)
- GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques popularly used for inverse rendering, forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
arXiv Detail & Related papers (2024-11-12T01:51:05Z)
- Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions [8.20594611891252]
Editing High Dynamic Range (HDR) environment maps using an inverse differentiable rendering architecture is a complex inverse problem.
We introduce a novel method for editing HDR environment maps using differentiable rendering, addressing sparsity and variance between values.
Our approach can pave the way to interesting tasks, such as estimating a new environment map given a rendering with novel light sources.
arXiv Detail & Related papers (2024-10-24T10:27:29Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method which combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to impact shading, while providing rich, complex reflections that blend seamlessly with the edits.
arXiv Detail & Related papers (2023-04-26T00:20:59Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry as signed distance fields (SDFs) and materials during optimization to enjoy their flexibility and compactness.
We show that our IRON achieves significantly better inverse rendering quality compared to prior works.
arXiv Detail & Related papers (2022-04-05T14:14:18Z)
- Deep Polarization Imaging for 3D shape and SVBRDF Acquisition [7.86578678811226]
We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues.
Unlike previous works that have exploited polarization to estimate material or object appearance under certain constraints, we lift such restrictions by coupling polarization imaging with deep learning.
We demonstrate our approach to achieve superior results compared to recent works employing deep learning in conjunction with flash illumination.
arXiv Detail & Related papers (2021-05-06T17:58:43Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
- Relightable 3D Head Portraits from a Smartphone Video [15.639140551193073]
We present a system for creating a relightable 3D portrait of a human head.
Our neural pipeline operates on a sequence of frames captured by a smartphone camera with the flash blinking.
A deep rendering network is trained to regress dense albedo, normals, and environmental lighting maps for arbitrary new viewpoints.
arXiv Detail & Related papers (2020-12-17T22:49:02Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.