CHARM: Control-point-based 3D Anime Hairstyle Auto-Regressive Modeling
- URL: http://arxiv.org/abs/2509.21114v1
- Date: Thu, 25 Sep 2025 13:00:38 GMT
- Title: CHARM: Control-point-based 3D Anime Hairstyle Auto-Regressive Modeling
- Authors: Yuze He, Yanning Zhou, Wang Zhao, Jingwen Ye, Yushi Bai, Kaiwen Xiao, Yong-Jin Liu, Zhongqian Sun, Wei Yang,
- Abstract summary: We present CHARM, a novel parametric representation and generative framework for anime hairstyle modeling. CHARM introduces a compact, invertible control-point-based parameterization, where a sequence of control points represents each hair card. Built upon this representation, CHARM introduces an autoregressive generative framework that effectively generates anime hairstyles from input images or point clouds.
- Score: 43.54249989719103
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present CHARM, a novel parametric representation and generative framework for anime hairstyle modeling. While traditional hair modeling methods focus on realistic hair using strand-based or volumetric representations, anime hairstyles exhibit highly stylized, piecewise-structured geometry that challenges existing techniques. Existing works often rely on dense mesh modeling or hand-crafted spline curves, making them inefficient for editing and unsuitable for scalable learning. CHARM introduces a compact, invertible control-point-based parameterization, where a sequence of control points represents each hair card, and each point is encoded with only five geometric parameters. This efficient and accurate representation supports both artist-friendly design and learning-based generation. Built upon this representation, CHARM introduces an autoregressive generative framework that effectively generates anime hairstyles from input images or point clouds. By interpreting anime hairstyles as a sequential "hair language", our autoregressive transformer captures both local geometry and global hairstyle topology, resulting in high-fidelity anime hairstyle creation. To facilitate both training and evaluation of anime hairstyle generation, we construct AnimeHair, a large-scale dataset of 37K high-quality anime hairstyles with separated hair cards and processed mesh data. Extensive experiments demonstrate state-of-the-art performance of CHARM in both reconstruction accuracy and generation quality, offering an expressive and scalable solution for anime hairstyle modeling. Project page: https://hyzcluster.github.io/charm/
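The abstract describes two concrete ideas: each hair card is a sequence of control points, each encoded with five geometric parameters, and a whole hairstyle is serialized into a flat "hair language" sequence for an autoregressive model. The sketch below illustrates that data layout in Python. It is a hypothetical illustration only: the paper does not specify which five parameters are used, so the fields chosen here (local offsets, width, twist) and the function names are assumptions, not CHARM's actual scheme.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ControlPoint:
    """One control point of a hair card.

    The paper encodes each point with five geometric parameters;
    the specific fields below are assumed for illustration.
    """
    dx: float      # offset along the card's local x-axis (assumed)
    dy: float      # offset along the local y-axis (assumed)
    dz: float      # offset along the local z-axis (assumed)
    width: float   # card width at this point (assumed)
    twist: float   # rotation about the card tangent (assumed)

    def to_vector(self) -> List[float]:
        """Return the 5-dimensional parameter vector for this point."""
        return [self.dx, self.dy, self.dz, self.width, self.twist]


@dataclass
class HairCard:
    """A hair card represented as an ordered sequence of control points."""
    points: List[ControlPoint]


def flatten_hairstyle(cards: List[HairCard]) -> List[float]:
    """Serialize a hairstyle into one flat sequence, card by card,
    point by point, as an autoregressive model would consume it."""
    seq: List[float] = []
    for card in cards:
        for p in card.points:
            seq.extend(p.to_vector())
    return seq
```

A hairstyle with C cards of P points each therefore flattens to a sequence of 5·C·P scalars, which is what makes the representation compact enough for transformer-style sequence modeling.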
Related papers
- Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars [60.99229760565975]
We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. We exploit this prior to create a Gaussian-splatting-based reconstruction method that creates hairstyles from one or more images.
arXiv Detail & Related papers (2025-09-01T13:38:08Z)
- DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models [53.08138861924767]
We propose DiffLocks, a novel framework that enables reconstruction of a wide variety of hairstyles directly from a single image. First, we address the lack of 3D hair data by automating the creation of the largest synthetic hair dataset to date, containing 40K hairstyles. By using a pretrained image backbone, our method generalizes to in-the-wild images despite being trained only on synthetic data.
arXiv Detail & Related papers (2025-05-09T16:16:42Z)
- TANGLED: Generating 3D Hair Strands from Images with Arbitrary Styles and Viewpoints [38.95048174663582]
Existing text or image-guided generation methods fail to handle the richness and complexity of diverse styles. We present TANGLED, a novel approach for 3D hair strand generation that accommodates diverse image inputs across styles, viewpoints, and quantities of input views.
arXiv Detail & Related papers (2025-02-10T12:26:02Z)
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z)
- Perm: A Parametric Representation for Multi-Style 3D Hair Modeling [22.790597419351528]
Perm is a learned parametric representation of human 3D hair designed to facilitate various hair-related applications. We leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures.
arXiv Detail & Related papers (2024-07-28T10:05:11Z)
- GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians [41.52673678183542]
This paper presents GaussianHair, a novel explicit hair representation.
It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities.
We further enhance this model with the "GaussianHair Scattering Model", adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color in uniform lighting.
arXiv Detail & Related papers (2024-02-16T07:13:24Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.