TANGLED: Generating 3D Hair Strands from Images with Arbitrary Styles and Viewpoints
- URL: http://arxiv.org/abs/2502.06392v1
- Date: Mon, 10 Feb 2025 12:26:02 GMT
- Title: TANGLED: Generating 3D Hair Strands from Images with Arbitrary Styles and Viewpoints
- Authors: Pengyu Long, Zijun Zhao, Min Ouyang, Qingcheng Zhao, Qixuan Zhang, Wei Yang, Lan Xu, Jingyi Yu
- Abstract summary: Existing text- or image-guided generation methods fail to handle the richness and complexity of diverse styles.
We present TANGLED, a novel approach for 3D hair strand generation that accommodates diverse image inputs across styles, viewpoints, and quantities of input views.
- Score: 38.95048174663582
- License:
- Abstract: Hairstyles are intricate and culturally significant, with varied geometries, textures, and structures. Existing text- or image-guided generation methods fail to handle the richness and complexity of diverse styles. We present TANGLED, a novel approach for 3D hair strand generation that accommodates diverse image inputs across styles, viewpoints, and numbers of input views. TANGLED employs a three-step pipeline. First, our MultiHair Dataset provides 457 diverse hairstyles annotated with 74 attributes, emphasizing complex and culturally significant styles to improve model generalization. Second, we propose a diffusion framework conditioned on multi-view linearts that captures topological cues (e.g., strand density and parting lines) while filtering out noise. By leveraging a latent diffusion model with cross-attention on lineart features, our method achieves flexible and robust 3D hair generation across diverse input conditions. Third, a parametric post-processing module enforces braid-specific constraints to maintain coherence in complex structures. This framework not only advances hairstyle realism and diversity but also enables culturally inclusive digital avatars and novel applications such as sketch-based 3D strand editing for animation and augmented reality.
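To illustrate the conditioning mechanism the abstract describes, the sketch below shows how noisy hair-strand latent tokens could attend to multi-view lineart features via cross-attention inside a latent diffusion denoiser. This is not the authors' released code; the module names, tensor shapes, and dimensions are assumptions made for the example.

```python
# Minimal sketch (assumed, not TANGLED's implementation) of cross-attention
# between hair-strand latent tokens and multi-view lineart features.
import torch
import torch.nn as nn

class LineartCrossAttention(nn.Module):
    """Cross-attention block: hair-latent tokens attend to lineart tokens."""

    def __init__(self, latent_dim: int = 256, cond_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(latent_dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=latent_dim, num_heads=num_heads,
            kdim=cond_dim, vdim=cond_dim, batch_first=True,
        )

    def forward(self, hair_latents: torch.Tensor, lineart_tokens: torch.Tensor) -> torch.Tensor:
        # hair_latents:   (B, N_latent, latent_dim)  noisy hair-strand latent tokens
        # lineart_tokens: (B, N_views * N_patches, cond_dim)  multi-view lineart features
        q = self.norm(hair_latents)
        attended, _ = self.attn(query=q, key=lineart_tokens, value=lineart_tokens)
        return hair_latents + attended  # residual update keeps the denoiser stable


if __name__ == "__main__":
    block = LineartCrossAttention()
    latents = torch.randn(2, 64, 256)         # e.g. 64 latent tokens per hairstyle
    linearts = torch.randn(2, 3 * 196, 256)   # e.g. 3 views, 196 patch tokens each
    print(block(latents, linearts).shape)      # torch.Size([2, 64, 256])
```

Keying and valuing on the lineart tokens is one way such a denoiser could accept an arbitrary number of input views, since each extra view simply contributes more conditioning tokens.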
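The abstract's third step, a parametric post-processing module with braid-specific constraints, is not detailed further; as a rough illustration of what a parametric braid constraint can look like, the sketch below plaits three strand-group centerlines around a guide curve using phase-offset sinusoidal displacements. The function, radii, and frame handling are all hypothetical.

```python
# Purely illustrative parametric braid sketch (not the paper's module):
# three strand groups follow 120-degree phase-offset paths around one guide curve.
import numpy as np

def parametric_braid(guide: np.ndarray, radius: float = 0.01,
                     turns: float = 6.0) -> list[np.ndarray]:
    """Return three plaited centerlines for a guide polyline of shape (N, 3)."""
    n = len(guide)
    t = np.linspace(0.0, 2.0 * np.pi * turns, n)
    # Fixed local axes for simplicity; a real system would transport a frame
    # along the guide curve instead of using constant world axes.
    side = np.array([1.0, 0.0, 0.0])
    up = np.array([0.0, 0.0, 1.0])
    strands = []
    for k in range(3):
        phase = 2.0 * np.pi * k / 3.0  # 120-degree offset per strand group
        offset = (np.sin(t + phase)[:, None] * side * radius
                  + np.cos(2.0 * (t + phase))[:, None] * up * 0.5 * radius)
        strands.append(guide + offset)
    return strands

# Usage: snap generated strands in a detected braid region toward these centerlines.
guide = np.stack([np.zeros(100), np.linspace(0.0, -0.3, 100), np.zeros(100)], axis=1)
print([c.shape for c in parametric_braid(guide)])  # [(100, 3), (100, 3), (100, 3)]
```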
Related papers
- Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and Texture Generation [58.77520205498394]
This report presents a comprehensive framework for generating high-quality 3D shapes and textures from diverse input prompts.
The framework consists of 3D shape generation and texture generation.
This report details the system architecture, experimental results, and potential future directions to improve and expand the framework.
arXiv Detail & Related papers (2025-02-20T04:22:30Z)
- Towards Unified 3D Hair Reconstruction from Single-View Portraits [27.404011546957104]
We propose a novel strategy to enable single-view 3D reconstruction for a variety of hair types via a unified pipeline.
Our experiments demonstrate that reconstructing braided and un-braided 3D hair from single-view images via a unified approach is possible.
arXiv Detail & Related papers (2024-09-25T12:21:31Z)
- MonoHair: High-Fidelity Hair Modeling from a Monocular Video [40.27026803872373]
MonoHair is a generic framework to achieve high-fidelity hair reconstruction from a monocular video.
Our approach bifurcates the hair modeling process into two main stages: precise exterior reconstruction and interior structure inference.
Our experiments demonstrate that our method exhibits robustness across diverse hairstyles and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-27T08:48:47Z)
- GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians [41.52673678183542]
This paper presents GaussianHair, a novel explicit hair representation.
It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities.
We further enhance this model with the "GaussianHair Scattering Model", adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color in uniform lighting.
arXiv Detail & Related papers (2024-02-16T07:13:24Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- A Local Appearance Model for Volumetric Capture of Diverse Hairstyle [15.122893482253069]
Hair plays a significant role in personal identity and appearance, making it an essential component of high-quality, photorealistic avatars.
Existing approaches either focus on modeling the facial region only or rely on personalized models, limiting their generalizability and scalability.
We present a novel method for creating high-fidelity avatars with diverse hairstyles.
arXiv Detail & Related papers (2023-12-14T06:29:59Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show better robustness of our methods than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)