Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images
- URL: http://arxiv.org/abs/2207.14067v1
- Date: Thu, 28 Jul 2022 13:08:46 GMT
- Title: Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images
- Authors: Radu Alexandru Rosu, Shunsuke Saito, Ziyan Wang, Chenglei Wu, Sven Behnke, Giljoo Nam
- Abstract summary: We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
- Score: 40.91569888920849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Neural Strands, a novel learning framework for modeling accurate
hair geometry and appearance from multi-view image inputs. The learned hair
model can be rendered in real-time from any viewpoint with high-fidelity
view-dependent effects. Unlike volumetric counterparts, our model offers
intuitive shape and style control. To enable these properties, we propose a novel
hair representation based on a neural scalp texture that encodes the geometry
and appearance of individual strands at each texel location. Furthermore, we
introduce a novel neural rendering framework based on rasterization of the
learned hair strands. Our neural rendering is strand-accurate and anti-aliased,
making the rendering view-consistent and photorealistic. Combining appearance
with a multi-view geometric prior, we enable, for the first time, the joint
learning of appearance and explicit hair geometry from a multi-view setup. We
demonstrate the efficacy of our approach in terms of fidelity and efficiency
for various hairstyles.
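To make the core representation concrete, below is a minimal sketch of a neural scalp texture in PyTorch. It is an illustrative reconstruction, not the authors' implementation; the texture resolution, code size, and decoder architecture are all assumptions. The idea it shows: a learnable 2D texture stores a latent code per texel, and a shared decoder MLP maps each sampled code to an explicit strand polyline that can then be rasterized.

```python
# Minimal sketch of a neural scalp texture (all dimensions are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralScalpTexture(nn.Module):
    def __init__(self, tex_res=64, code_dim=32, points_per_strand=100):
        super().__init__()
        # Learnable latent code at every texel of the scalp UV map.
        self.texture = nn.Parameter(torch.zeros(1, code_dim, tex_res, tex_res))
        # Shared decoder: latent code -> flattened strand geometry.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, points_per_strand * 3),
        )
        self.points_per_strand = points_per_strand

    def forward(self, roots_uv):
        # roots_uv: (N, 2) strand root positions on the scalp, in [-1, 1].
        grid = roots_uv.view(1, -1, 1, 2)
        codes = F.grid_sample(self.texture, grid, align_corners=True)
        codes = codes.squeeze(0).squeeze(-1).t()            # (N, code_dim)
        points = self.decoder(codes)                        # (N, P * 3)
        return points.view(-1, self.points_per_strand, 3)   # explicit strands

# Decode 1,000 strands from random root positions, ready for rasterization.
model = NeuralScalpTexture()
strands = model(torch.rand(1000, 2) * 2 - 1)  # (1000, 100, 3)
```

Because strands are explicit polylines indexed by a 2D texture, shape and style edits reduce to editing or swapping texel codes, which is consistent with the intuitive control the abstract claims.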
Related papers
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that tightly integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce Generalizable Model-based Neural Radiance Fields (GM-NeRF), an effective framework for synthesizing free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism that registers the appearance code from multi-view 2D images to a geometry proxy (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
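A rough sketch of what such a geometry-guided attention step could look like. The module names, feature sizes, and the use of standard multi-head attention are assumptions; the paper's actual mechanism may differ.

```python
# Geometry-guided attention sketch: proxy-geometry features act as queries and
# gather appearance features extracted from multi-view 2D images.
import torch
import torch.nn as nn

feat_dim = 64
attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)

proxy_feats = torch.randn(1, 6890, feat_dim)      # queries: per-vertex codes of a body proxy
image_feats = torch.randn(1, 4 * 1024, feat_dim)  # keys/values: pixel features from 4 views

registered, _ = attn(proxy_feats, image_feats, image_feats)
# `registered` aligns multi-view appearance codes to the geometry proxy; a
# NeRF-style decoder could then be conditioned on these codes at sample points.
```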
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work uses a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- Hair Color Digitization through Imaging and Deep Inverse Graphics [8.605763075773746]
We introduce a novel method for hair color digitization based on inverse graphics and deep neural networks.
Our proposed pipeline allows capturing the color appearance of a physical hair sample and renders synthetic images of hair with a similar appearance.
Our method combines a controlled imaging device, a path-tracing renderer, and an inverse-graphics model based on self-supervised machine learning (a minimal sketch of the optimization loop follows this entry).
arXiv Detail & Related papers (2022-02-08T08:57:04Z)
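As a toy illustration of the inverse-graphics loop: the differentiable stand-in renderer and parameter names below are placeholders, whereas the paper uses a physical path tracer plus a learned inverse model.

```python
# Inverse rendering sketch: optimize appearance parameters so a (stand-in)
# differentiable renderer reproduces a captured hair photo.
import torch
import torch.nn.functional as F

target = torch.rand(3, 64, 64)               # captured hair sample (placeholder)
params = torch.randn(8, requires_grad=True)  # appearance parameters (placeholder)

def render(p):
    # Stand-in differentiable renderer: a flat color from the first parameters.
    return torch.sigmoid(p[:3]).view(3, 1, 1).expand(3, 64, 64)

opt = torch.optim.Adam([params], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    F.mse_loss(render(params), target).backward()
    opt.step()
```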
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation composed of thousands of primitives (a simplified sketch follows this entry).
Our method can not only create realistic renders of recorded multi-view sequences, but also render new hair configurations given new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
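A highly simplified sketch of a primitive-based volumetric hair representation in this spirit. The primitive count, payload resolution, and the nearest-primitive lookup are all simplifications assumed for illustration; the actual method blends primitives and samples payloads trilinearly.

```python
# Thousands of small volumetric primitives, each carrying a tiny RGBA payload
# volume; querying a 3D point samples the payload of a nearby primitive.
import torch

num_prims = 4096
centers = torch.randn(num_prims, 3)          # primitive positions (e.g., tracked over time)
payload = torch.rand(num_prims, 4, 8, 8, 8)  # per-primitive RGBA micro-volumes

def sample_rgba(x):
    # Nearest-primitive lookup as a crude stand-in for blended trilinear sampling.
    i = (centers - x).norm(dim=-1).argmin()
    return payload[i, :, 4, 4, 4]            # center voxel of the chosen primitive

rgba = sample_rgba(torch.zeros(3))
```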
- Neural Head Avatars from Monocular RGB Videos [0.0]
We present a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar.
Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views.
arXiv Detail & Related papers (2021-12-02T19:01:05Z)
- Intuitive, Interactive Beard and Hair Synthesis with Generative Models [38.93415643177721]
We present an interactive approach to synthesizing realistic variations in facial hair in images.
We employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second.
We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle.
arXiv Detail & Related papers (2020-04-15T01:20:10Z)