HAAR: Text-Conditioned Generative Model of 3D Strand-based Human
Hairstyles
- URL: http://arxiv.org/abs/2312.11666v1
- Date: Mon, 18 Dec 2023 19:19:32 GMT
- Title: HAAR: Text-Conditioned Generative Model of 3D Strand-based Human
Hairstyles
- Authors: Vanessa Sklyarova, Egor Zakharov, Otmar Hilliges, Michael J. Black and
Justus Thies
- Abstract summary: We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
- Score: 85.12672855502517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HAAR, a new strand-based generative model for 3D human hairstyles.
Specifically, based on textual inputs, HAAR produces 3D hairstyles that could
be used as production-level assets in modern computer graphics engines. Current
AI-based generative models take advantage of powerful 2D priors to reconstruct
3D content in the form of point clouds, meshes, or volumetric functions.
However, by relying on 2D priors, they are intrinsically limited to
recovering only the visible parts. Highly occluded hair structures cannot be
reconstructed with those methods, and they model only the "outer shell",
which is not ready to be used in physics-based rendering or simulation
pipelines. In contrast, we propose the first text-guided generative method that
uses 3D hair strands as an underlying representation. Leveraging 2D visual
question-answering (VQA) systems, we automatically annotate synthetic hair
models that are generated from a small set of artist-created hairstyles. This
allows us to train a latent diffusion model that operates in a common hairstyle
UV space. In qualitative and quantitative studies, we demonstrate the
capabilities of the proposed model and compare it to existing hairstyle
generation approaches.
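The latent diffusion step described in the abstract can be sketched in miniature. This is a hypothetical illustration, not HAAR's implementation: the latent dimensions, schedule, and the stand-in denoiser are all assumptions (the real model is a text-conditioned U-Net operating on a hairstyle UV latent space).

```python
import numpy as np

# Hypothetical shapes; HAAR's actual latent dimensions are not given here.
H, W, C = 32, 32, 8   # hairstyle UV latent map (guide-strand features per texel)
EMB = 16              # text-embedding size (stand-in for a CLIP-style encoder)

rng = np.random.default_rng(0)

def noise_schedule(T=100):
    """Linear beta schedule, as in standard DDPM training."""
    betas = np.linspace(1e-4, 0.02, T)
    return np.cumprod(1.0 - betas)  # cumulative alpha-bar values

def q_sample(x0, t, alphas_bar, eps):
    """Forward diffusion: noise a clean UV latent map to step t."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def denoiser(xt, t, text_emb):
    """Stand-in for the text-conditioned denoising network (a U-Net in
    practice); here the text embedding's mean is a crude conditioning signal."""
    return xt * 0.0 + text_emb.mean()

x0 = rng.standard_normal((H, W, C))      # clean hairstyle latent
text_emb = rng.standard_normal(EMB)      # embedding of e.g. "curly bob"
alphas_bar = noise_schedule()
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, 50, alphas_bar, eps)   # noised latent at step t=50
eps_hat = denoiser(xt, 50, text_emb)
loss = float(np.mean((eps - eps_hat) ** 2))  # DDPM epsilon-prediction loss
print(xt.shape, loss >= 0.0)
```

Training would minimize this epsilon-prediction loss over random timesteps; at inference, the denoiser is applied iteratively from pure noise, conditioned on the text embedding, and the final latent map is decoded into dense 3D strands.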
Related papers
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z) - Perm: A Parametric Representation for Multi-Style 3D Hair Modeling [22.790597419351528]
Perm is a learned parametric model of human 3D hair designed to facilitate various hair-related applications.
We propose to disentangle the global hair shape and local strand details using a PCA-based strand representation in the frequency domain.
The resulting hair textures are later parameterized with different generative models, emulating common stages in the hair modeling process.
arXiv Detail & Related papers (2024-07-28T10:05:11Z) - Breathing New Life into 3D Assets with Generative Repainting [74.80184575267106]
Diffusion-based text-to-image models have attracted immense attention from the vision community, artists, and content creators.
Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields.
We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools.
Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, and orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools.
arXiv Detail & Related papers (2023-09-15T16:34:51Z) - NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and
Animation [23.625243364572867]
The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality.
We present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner.
Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal.
arXiv Detail & Related papers (2022-12-01T16:09:54Z) - GET3D: A Generative Model of High Quality 3D Textured Shapes Learned
from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes across diverse categories, including cars, chairs, animals, motorbikes, human characters, and buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z) - NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image
Using Implicit Neural Representations [40.14104266690989]
We introduce NeuralHDHair, a flexible, fully automatic system for modeling high-fidelity hair from a single image.
We propose a novel voxel-aligned implicit function (VIFu) to represent the global hair feature.
To improve the efficiency of a traditional hair growth algorithm, we adopt a local neural implicit function to grow strands based on the estimated 3D hair geometric features.
arXiv Detail & Related papers (2022-05-09T10:39:39Z) - HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair
Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also render novel hair configurations given new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z) - i3DMM: Deep Implicit 3D Morphable Model of Human Heads [115.19943330455887]
We present the first deep implicit 3D morphable model (i3DMM) of full heads.
It not only captures identity-specific geometry, texture, and expressions of the frontal face, but also models the entire head, including hair.
We show the merits of i3DMM using ablation studies, comparisons to state-of-the-art models, and applications such as semantic head editing and texture transfer.
arXiv Detail & Related papers (2020-11-28T15:01:53Z) - Lifting 2D StyleGAN for 3D-Aware Face Generation [52.8152883980813]
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation.
Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, and lighting, and (2) generate 3D components for synthetic images.
arXiv Detail & Related papers (2020-11-26T05:02:09Z) - SMPLpix: Neural Avatars from 3D Human Models [56.85115800735619]
We bridge the gap between classic rendering and the latest generative networks operating in pixel space.
We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images.
We show the advantage over conventional differentiable renderers, both in terms of the level of photorealism and rendering efficiency.
arXiv Detail & Related papers (2020-08-16T10:22:00Z)
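The SMPLpix idea above, converting a sparse set of 3D mesh vertices into an image, can be sketched as a z-buffered point splat whose output buffer would feed a pixel-space translation network. This is a hypothetical illustration under assumed resolutions and an orthographic camera, not the paper's implementation.

```python
import numpy as np

# Project sparse 3D vertices into an image-sized feature buffer
# (RGB + depth per pixel); a translation network would then convert
# this sparse buffer into a photorealistic image.
RES = 64
rng = np.random.default_rng(1)

verts = rng.uniform(-1, 1, size=(500, 3))   # sparse 3D vertices (x, y, z)
colors = rng.uniform(0, 1, size=(500, 3))   # per-vertex RGB

def splat(verts, colors, res=RES):
    """Z-buffered point splat: the nearest vertex wins each pixel."""
    img = np.zeros((res, res, 4))            # channels: R, G, B, depth
    depth = np.full((res, res), np.inf)
    # Orthographic projection of x, y from [-1, 1] into pixel coordinates.
    px = ((verts[:, 0] + 1) * 0.5 * (res - 1)).astype(int)
    py = ((verts[:, 1] + 1) * 0.5 * (res - 1)).astype(int)
    for i in range(len(verts)):
        z = verts[i, 2]
        if z < depth[py[i], px[i]]:          # keep the closest point only
            depth[py[i], px[i]] = z
            img[py[i], px[i], :3] = colors[i]
            img[py[i], px[i], 3] = z
    return img

features = splat(verts, colors)
print(features.shape)  # sparse RGB-D buffer: the network's input
```

The key design point is that rasterization stays trivial and non-differentiable-renderer-free; all the hard work of filling gaps and adding photorealistic detail is pushed onto the learned pixel-space network.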
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.