HairFormer: Transformer-Based Dynamic Neural Hair Simulation
- URL: http://arxiv.org/abs/2507.12600v1
- Date: Wed, 16 Jul 2025 19:42:08 GMT
- Title: HairFormer: Transformer-Based Dynamic Neural Hair Simulation
- Authors: Joy Xiaoji Zhang, Jingsen Zhu, Hanyu Chen, Steve Marschner
- Abstract summary: We propose a Transformer-powered static network that predicts static draped shapes for any hairstyle. A dynamic network with a novel cross-attention mechanism fuses static hair features with kinematic input to generate expressive dynamics. Our method demonstrates high-fidelity and generalizable dynamic hair across various styles, guided by physics-informed losses.
- Score: 3.1157179526391374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulating hair dynamics that generalize across arbitrary hairstyles, body shapes, and motions is a critical challenge. Our novel two-stage neural solution is the first to leverage Transformer-based architectures for such a broad generalization. We propose a Transformer-powered static network that predicts static draped shapes for any hairstyle, effectively resolving hair-body penetrations and preserving hair fidelity. Subsequently, a dynamic network with a novel cross-attention mechanism fuses static hair features with kinematic input to generate expressive dynamics and complex secondary motions. This dynamic network also allows for efficient fine-tuning of challenging motion sequences, such as abrupt head movements. Our method offers real-time inference for both static single-frame drapes and dynamic drapes over pose sequences. Our method demonstrates high-fidelity and generalizable dynamic hair across various styles, guided by physics-informed losses, and can resolve penetrations even for complex, unseen long hairstyles, highlighting its broad generalization.
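As a reading aid, the dynamic stage described in the abstract can be pictured as a cross-attention block in which per-strand static hair features act as queries over a short window of kinematic tokens. The sketch below is a minimal illustration under assumed names, shapes, and hyperparameters (DynamicHairBlock, a 256-wide feature space, a 3D offset head); it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a cross-attention dynamic stage:
# static hair features attend to kinematic tokens (pose/velocity history).
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class DynamicHairBlock(nn.Module):
    def __init__(self, feat_dim=256, kin_dim=64, num_heads=8):
        super().__init__()
        self.kin_proj = nn.Linear(kin_dim, feat_dim)          # lift kinematic input to feature width
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(feat_dim, feat_dim * 4), nn.GELU(),
                                 nn.Linear(feat_dim * 4, feat_dim))
        self.norm1 = nn.LayerNorm(feat_dim)
        self.norm2 = nn.LayerNorm(feat_dim)
        self.head = nn.Linear(feat_dim, 3)                    # per-point displacement from the static drape

    def forward(self, static_feat, kin_tokens):
        # static_feat: (B, S, feat_dim) features of the static draped strands
        # kin_tokens:  (B, T, kin_dim) recent pose/velocity history
        kv = self.kin_proj(kin_tokens)
        attn_out, _ = self.cross_attn(query=static_feat, key=kv, value=kv)
        x = self.norm1(static_feat + attn_out)
        x = self.norm2(x + self.ffn(x))
        return self.head(x)                                   # dynamic offsets added to the static shape

# Usage with dummy tensors:
block = DynamicHairBlock()
offsets = block(torch.randn(2, 1024, 256), torch.randn(2, 8, 64))
print(offsets.shape)  # torch.Size([2, 1024, 3])
```

Treating the output as an offset on top of the static drape mirrors the two-stage split in the abstract: the static network resolves penetrations and overall shape, while the dynamic network adds motion-dependent deformation.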
Related papers
- Neuralocks: Real-Time Dynamic Neural Hair Simulation [4.249827194545251]
The dynamic behavior of hair, such as bouncing or swaying in response to character movements like jumping or walking, plays a significant role in enhancing the overall realism and engagement of virtual experiences. Current methods for simulating hair have been constrained by two primary approaches: highly optimized physics-based systems and neural methods. This paper introduces a novel neural method that breaks through these limitations, achieving efficient and stable dynamic hair simulation.
arXiv Detail & Related papers (2025-07-07T16:49:19Z)
- Dynamic Concepts Personalization from Single Videos [92.62863918003575]
We introduce Set-and-Sequence, a novel framework for personalizing generative video models with dynamic concepts. Our approach imposes a spatio-temporal weight space within an architecture that does not explicitly separate spatial and temporal features. Our framework embeds dynamic concepts into the video model's output domain, enabling unprecedented editability and compositionality.
arXiv Detail & Related papers (2025-02-20T18:53:39Z)
- Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics [48.99021224773799]
We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections.
We also propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images.
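The "existing physical laws with learned corrections" idea above can be illustrated as a residual material model: an analytic prior plus a small network that corrects it. The linear elastic prior, network size, and names below are assumptions for illustration, not NeuMA's actual formulation.

```python
# Illustrative sketch of a "physics prior + learned correction" material model,
# in the spirit of the summary above (not NeuMA's actual code). The linear
# elastic prior and the tiny MLP correction are assumptions for illustration.
import torch
import torch.nn as nn

class NeuralMaterialAdaptor(nn.Module):
    def __init__(self, youngs_modulus=1.0e4, hidden=64):
        super().__init__()
        self.k = youngs_modulus                     # analytic prior: linear elasticity
        self.correction = nn.Sequential(            # learned residual on top of the prior
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, strain):
        # strain: (..., 1) scalar strain per sample point
        stress_prior = self.k * strain              # known physical law
        stress_resid = self.correction(strain)      # data-driven correction
        return stress_prior + stress_resid

model = NeuralMaterialAdaptor()
print(model(torch.full((4, 1), 0.01)).shape)        # torch.Size([4, 1])
```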
arXiv Detail & Related papers (2024-10-10T17:43:36Z)
- GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians [41.52673678183542]
This paper presents GaussianHair, a novel explicit hair representation.
It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities.
We further enhance this model with the "GaussianHair Scattering Model", adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color in uniform lighting.
arXiv Detail & Related papers (2024-02-16T07:13:24Z)
- NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation [23.625243364572867]
The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality.
We present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner.
Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal.
arXiv Detail & Related papers (2022-12-01T16:09:54Z)
- PAD-Net: An Efficient Framework for Dynamic Networks [72.85480289152719]
Common practice in implementing dynamic networks is to convert the given static layers into fully dynamic ones.
We propose a partially dynamic network, namely PAD-Net, to transform the redundant dynamic parameters into static ones.
Our method is comprehensively supported by large-scale experiments with two typical advanced dynamic architectures.
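A partially dynamic layer of this kind can be pictured as a split in which only some output rows receive input-conditioned weights while the rest stay static. The 50/50 split, the weight generator, and all sizes below are illustrative assumptions rather than PAD-Net's actual design.

```python
# Illustrative sketch of a partially dynamic linear layer: only a subset of
# output rows gets input-conditioned (dynamic) weights, the rest stay static.
# The 50/50 split and the weight generator are assumptions, not PAD-Net's design.
import torch
import torch.nn as nn

class PartiallyDynamicLinear(nn.Module):
    def __init__(self, in_dim=128, out_dim=128, dynamic_ratio=0.5):
        super().__init__()
        self.n_dyn = int(out_dim * dynamic_ratio)                  # rows with dynamic weights
        self.static = nn.Linear(in_dim, out_dim - self.n_dyn)      # rows with static weights
        self.weight_gen = nn.Linear(in_dim, self.n_dyn * in_dim)   # generates dynamic rows

    def forward(self, x):
        # x: (B, in_dim)
        B, d = x.shape
        w_dyn = self.weight_gen(x).view(B, self.n_dyn, d)          # per-sample dynamic weights
        y_dyn = torch.einsum('bod,bd->bo', w_dyn, x)               # dynamic half of the output
        y_sta = self.static(x)                                     # static half of the output
        return torch.cat([y_dyn, y_sta], dim=-1)

layer = PartiallyDynamicLinear()
print(layer(torch.randn(4, 128)).shape)                            # torch.Size([4, 128])
```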
arXiv Detail & Related papers (2022-11-10T12:42:43Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel, volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
- Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis [56.550999933048075]
We propose a video based synthesis method that tackles challenges and demonstrates high quality results for in-the-wild videos.
We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes.
We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
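Modulating generator weights with a conditioning vector (here a hypothetical motion signature) can be sketched in the style of modulated layers; the per-input-channel scaling and all dimensions below are assumptions, not the paper's exact mechanism.

```python
# Illustrative sketch of modulating generator weights with a motion signature,
# in the spirit of the summary above. Per-input-channel scaling and all sizes
# are assumptions; the paper's actual modulation may differ.
import torch
import torch.nn as nn

class MotionModulatedLinear(nn.Module):
    def __init__(self, in_dim=256, out_dim=256, motion_dim=32):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.to_scale = nn.Linear(motion_dim, in_dim)        # motion signature -> per-channel scales

    def forward(self, x, motion_sig):
        # x: (B, in_dim), motion_sig: (B, motion_dim)
        scale = self.to_scale(motion_sig) + 1.0               # centred around identity scaling
        w = self.weight.unsqueeze(0) * scale.unsqueeze(1)     # (B, out_dim, in_dim) modulated weights
        return torch.einsum('boi,bi->bo', w, x)

layer = MotionModulatedLinear()
out = layer(torch.randn(2, 256), torch.randn(2, 32))
print(out.shape)  # torch.Size([2, 256])
```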
arXiv Detail & Related papers (2021-11-10T20:18:57Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state of the art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
- Dynamic Neural Garments [45.833166320896716]
We present a solution that takes in body joint motion to directly produce realistic dynamic garment image sequences.
Specifically, given the target joint motion sequence of an avatar, we propose dynamic neural garments to jointly simulate and render plausible dynamic garment appearance.
arXiv Detail & Related papers (2021-02-23T17:21:21Z)