Quaffure: Real-Time Quasi-Static Neural Hair Simulation
- URL: http://arxiv.org/abs/2412.10061v1
- Date: Fri, 13 Dec 2024 11:44:56 GMT
- Title: Quaffure: Real-Time Quasi-Static Neural Hair Simulation
- Authors: Tuur Stuyck, Gene Wei-Chin Lin, Egor Larionov, Hsiao-yu Chen, Aljaz Bozic, Nikolaos Sarafianos, Doug Roble
- Abstract summary: We propose a novel neural approach to predict hair deformations that generalizes to various body poses, shapes, and hairstyles.
Our model is trained using a self-supervised loss, eliminating the need for expensive data generation and storage.
Our approach is highly suitable for real-time applications with an inference time of only a few milliseconds on consumer hardware.
- Score: 11.869362129320473
- Abstract: Realistic hair motion is crucial for high-quality avatars, but it is often limited by the computational resources available for real-time applications. To address this challenge, we propose a novel neural approach to predict physically plausible hair deformations that generalizes to various body poses, shapes, and hairstyles. Our model is trained using a self-supervised loss, eliminating the need for expensive data generation and storage. We demonstrate our method's effectiveness through numerous results across a wide range of pose and shape variations, showcasing its robust generalization capabilities and temporally smooth results. Our approach is highly suitable for real-time applications with an inference time of only a few milliseconds on consumer hardware and its ability to scale to predicting the drape of 1000 grooms in 0.3 seconds.
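The key point in the abstract is that the predicted drape is quasi-static, so the network can be supervised by physics alone: minimizing a potential energy of the predicted configuration removes the need to generate or store simulated ground-truth data. The sketch below illustrates only that idea; the architecture, the energy terms (segment stretch plus gravity, with roots pinned to the scalp), and all names and dimensions are assumptions for illustration, not the paper's actual formulation.
```python
import torch
import torch.nn as nn

# Assumed (illustrative) problem sizes; not taken from the paper.
N_STRANDS, N_VERTS = 256, 16          # strands per groom, vertices per strand
POSE_DIM, SHAPE_DIM = 72, 10          # body pose / shape encoding sizes

class HairDeformer(nn.Module):
    """Predicts deformed strand vertices from body pose and shape."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(POSE_DIM + SHAPE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_STRANDS * N_VERTS * 3),
        )

    def forward(self, pose, shape, rest_verts):
        offsets = self.mlp(torch.cat([pose, shape], dim=-1))
        verts = rest_verts + offsets.view(-1, N_STRANDS, N_VERTS, 3)
        # Pin strand roots to their rest (scalp) positions.
        roots = rest_verts[..., :1, :].expand(verts.shape[0], -1, -1, -1)
        return torch.cat([roots, verts[..., 1:, :]], dim=-2)

def quasi_static_energy(verts, rest_verts, k_stretch=1.0, g=9.81, mass=1e-3):
    """Toy potential energy: segment stretch + gravity (y-up assumed).
    Bending, collision, and body-penetration terms are omitted for brevity."""
    seg_len = (verts[..., 1:, :] - verts[..., :-1, :]).norm(dim=-1)
    rest_len = (rest_verts[..., 1:, :] - rest_verts[..., :-1, :]).norm(dim=-1)
    stretch = k_stretch * ((seg_len - rest_len) ** 2).sum(dim=(-1, -2))
    gravity = mass * g * verts[..., 1].sum(dim=(-1, -2))
    return (stretch + gravity).mean()

model = HairDeformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
rest_verts = torch.randn(1, N_STRANDS, N_VERTS, 3)   # placeholder rest groom

for step in range(1000):
    pose, shape = torch.randn(8, POSE_DIM), torch.randn(8, SHAPE_DIM)
    verts = model(pose, shape, rest_verts)
    loss = quasi_static_energy(verts, rest_verts)    # no ground-truth drapes needed
    opt.zero_grad(); loss.backward(); opt.step()
```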
Related papers
- Hairmony: Fairness-aware hairstyle classification [10.230933455074634]
We present a method for predicting a person's hairstyle from a single image.
We use only synthetic data to train our models.
We introduce a novel hairstyle taxonomy developed in collaboration with a diverse group of domain experts.
arXiv Detail & Related papers (2024-10-15T12:00:36Z)
- Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-training via Differentiable Rendering of Line Segments [23.71057752711745]
In the film and gaming industries, achieving a realistic hair appearance typically involves the use of strands originating from the scalp.
In this study, we propose an optimization-based approach that eliminates the need for pre-training.
Our method exhibits robust and accurate inverse rendering, surpassing the quality of existing methods and significantly improving processing speed.
arXiv Detail & Related papers (2024-03-26T08:53:25Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate significantly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- A Local Appearance Model for Volumetric Capture of Diverse Hairstyle [15.122893482253069]
Hair plays a significant role in personal identity and appearance, making it an essential component of high-quality, photorealistic avatars.
Existing approaches either focus on modeling the facial region only or rely on personalized models, limiting their generalizability and scalability.
We present a novel method for creating high-fidelity avatars with diverse hairstyles.
arXiv Detail & Related papers (2023-12-14T06:29:59Z)
- Real-time volumetric rendering of dynamic humans [83.08068677139822]
We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos.
Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h.
A novel local ray marching rendering allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality.
arXiv Detail & Related papers (2023-03-21T14:41:25Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, the complex physical interaction and its non-trivial visual appearance.
In this paper, we use a novel, volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images [60.56518548286836]
To generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs.
We propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images.
arXiv Detail & Related papers (2021-06-22T17:30:12Z)