Hair Color Digitization through Imaging and Deep Inverse Graphics
- URL: http://arxiv.org/abs/2202.03723v1
- Date: Tue, 8 Feb 2022 08:57:04 GMT
- Title: Hair Color Digitization through Imaging and Deep Inverse Graphics
- Authors: Robin Kips, Panagiotis-Alexandros Bokaris, Matthieu Perrot, Pietro
Gori, Isabelle Bloch
- Abstract summary: We introduce a novel method for hair color digitization based on inverse graphics and deep neural networks.
Our proposed pipeline captures the color appearance of a physical hair sample and renders synthetic images of hair with a similar appearance.
Our method is based on the combination of a controlled imaging device, a path-tracing renderer, and an inverse graphics model based on self-supervised machine learning.
- Score: 8.605763075773746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hair appearance is a complex phenomenon due to hair geometry and how the
light bounces on different hair fibers. For this reason, reproducing a specific
hair color in a rendering environment is a challenging task that requires
manual work and expert knowledge in computer graphics to tune the result
visually. While current hair capture methods focus on hair shape estimation,
many applications could benefit from an automated method for capturing the
appearance of a physical hair sample, from augmented/virtual reality to hair
dyeing development. Building on recent advances in inverse graphics and material
capture using deep neural networks, we introduce a novel method for hair color
digitization. Our proposed pipeline captures the color appearance of a
physical hair sample and renders synthetic images of hair with a similar
appearance, simulating different hairstyles and/or lighting environments.
Since rendering realistic hair images requires path tracing, the
conventional inverse graphics approach based on differentiable rendering is
intractable. Our method is based on the combination of a controlled imaging
device, a path-tracing renderer, and an inverse graphics model based on
self-supervised machine learning, which can be trained without differentiable
rendering. We illustrate the performance of our hair digitization
method on both real and synthetic images and show that our approach can
accurately capture and render hair color.
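The self-supervised strategy described above can be sketched in a few lines: sample appearance parameters, render them offline with the non-differentiable path tracer, and train an encoder to regress the parameters back from the rendered image, so gradients never flow through the renderer. The PyTorch sketch below is only an illustration under assumed names and dimensions (AppearanceEncoder, N_PARAMS, and render_offline are hypothetical), not the authors' implementation.

```python
# Minimal sketch of self-supervised inverse graphics without a differentiable
# renderer. Architecture, parameter count, and renderer stub are assumptions.
import torch
import torch.nn as nn

N_PARAMS = 8  # hypothetical hair-appearance parameter count


class AppearanceEncoder(nn.Module):
    """CNN that regresses rendering parameters from a hair image."""

    def __init__(self, n_params=N_PARAMS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)

    def forward(self, img):
        return self.head(self.features(img).flatten(1))


def render_offline(params):
    """Stand-in for the non-differentiable path tracer. In the real pipeline,
    (params, image) pairs are rendered offline, before training."""
    return torch.rand(params.shape[0], 3, 128, 128)  # dummy images


encoder = AppearanceEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

for step in range(1000):
    # 1. Sample ground-truth appearance parameters.
    gt_params = torch.rand(16, N_PARAMS)
    # 2. Images come from the path tracer, outside the autograd graph.
    with torch.no_grad():
        imgs = render_offline(gt_params)
    # 3. Supervise in parameter space: no renderer gradients are needed.
    loss = nn.functional.mse_loss(encoder(imgs), gt_params)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design choice this illustrates: because the loss compares predicted and ground-truth parameters rather than rendered images, the expensive path tracer only ever runs in the forward, data-generation direction.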
Related papers
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human
Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn, from videos, personalized animatable 3D head avatars that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z)
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed HairStep, which consists of a strand map and a depth map; a minimal sketch of this representation appears after this list.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but can also feasibly be inferred from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z)
- Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, the complex physical interaction and its non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method not only creates realistic renders of recorded multi-view sequences, but also renders new hair configurations given new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing; a conceptual sketch of such hybrid shading appears after this list.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- SketchHairSalon: Deep Sketch-based Hair Image Synthesis [36.79413744626908]
We present a framework for generating realistic hair images directly from freehand sketches depicting desired hair structure and appearance.
Based on the trained networks and two sketch completion strategies, we build an intuitive interface that allows even novice users to design visually pleasing hair images.
arXiv Detail & Related papers (2021-09-16T11:14:01Z)
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)
- Intuitive, Interactive Beard and Hair Synthesis with Generative Models [38.93415643177721]
We present an interactive approach to synthesizing realistic variations in facial hair in images.
We employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second.
We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle.
arXiv Detail & Related papers (2020-04-15T01:20:10Z)
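For the HairStep entry above: its intermediate representation pairs a per-pixel strand (orientation) map with a depth map. The container below is a minimal sketch of that idea; the field names, shapes, and mask channel are assumptions for illustration, not the HairStep codebase.

```python
# Minimal sketch of a HairStep-style intermediate representation: a per-pixel
# strand (orientation) map plus a depth map. Shapes and field names are assumed.
from dataclasses import dataclass

import numpy as np


@dataclass
class HairStepMaps:
    strand_map: np.ndarray  # (H, W, 2): 2D strand direction per hair pixel
    depth_map: np.ndarray   # (H, W): relative depth per hair pixel
    mask: np.ndarray        # (H, W): 1 where hair is present (assumed channel)

    def validate(self):
        h, w = self.mask.shape
        assert self.strand_map.shape == (h, w, 2)
        assert self.depth_map.shape == (h, w)


# Usage: such maps could be predicted from a real photo by a network, then
# consumed by a 3D strand-recovery stage, decoupling "real image -> maps"
# from "maps -> 3D hair".
maps = HairStepMaps(
    strand_map=np.zeros((256, 256, 2)),
    depth_map=np.zeros((256, 256)),
    mask=np.zeros((256, 256)),
)
maps.validate()
```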
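For the DIB-R++ entry above: the hybrid idea of pairing a cheap rasterization pass with a ray-traced pass can be caricatured as summing two shading buffers. The diffuse/specular split below is a conceptual stand-in (a Blinn-Phong term as a placeholder for the ray-traced pass), not the DIB-R++ formulation, which is differentiable end-to-end.

```python
# Conceptual sketch of hybrid rendering: a cheap rasterized diffuse pass
# combined with an expensive (here, placeholder) specular pass.
import numpy as np


def rasterize_diffuse(normals, light_dir, albedo):
    """Lambertian term from a rasterized G-buffer of surface normals."""
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, 1.0)
    return albedo * n_dot_l[..., None]


def raytrace_specular(normals, view_dir, light_dir, shininess=32.0):
    """Placeholder for the expensive pass (specular highlights, reflections)."""
    half = view_dir + light_dir
    half = half / np.linalg.norm(half)
    n_dot_h = np.clip(np.einsum("hwc,c->hw", normals, half), 0.0, 1.0)
    return (n_dot_h ** shininess)[..., None]


def hybrid_shade(normals, view_dir, light_dir, albedo, k_spec=0.5):
    # Final color = rasterized diffuse + (placeholder) ray-traced specular.
    return (rasterize_diffuse(normals, light_dir, albedo)
            + k_spec * raytrace_specular(normals, view_dir, light_dir))


# Toy usage on a flat normal buffer facing the camera.
H, W = 64, 64
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0
img = hybrid_shade(
    normals,
    view_dir=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.0, 0.0, 1.0]),
    albedo=np.array([0.35, 0.2, 0.1]),
)
```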