Automatic Animation of Hair Blowing in Still Portrait Photos
- URL: http://arxiv.org/abs/2309.14207v1
- Date: Mon, 25 Sep 2023 15:11:40 GMT
- Title: Automatic Animation of Hair Blowing in Still Portrait Photos
- Authors: Wenpeng Xiao, Wentao Liu, Yitong Wang, Bernard Ghanem, Bing Li
- Abstract summary: We propose a novel approach to animate human hair in a still portrait photo.
Considering the complexity of hair structure, we innovatively treat hair wisp extraction as an instance segmentation problem.
We propose a wisp-aware animation module that animates hair wisps with pleasing motions without noticeable artifacts.
- Score: 61.54919805051212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel approach to animate human hair in a still portrait photo.
Existing work has largely studied the animation of fluid elements such as water
and fire. However, hair animation for a real image remains underexplored; it is
a challenging problem due to the high complexity of hair structure and
dynamics. Considering the complexity of hair structure, we innovatively treat
hair wisp extraction as an instance segmentation problem, where a hair wisp is
referred to as an instance. With advanced instance segmentation networks, our
method extracts meaningful and natural hair wisps. Furthermore, we propose a
wisp-aware animation module that animates hair wisps with pleasing motions
without noticeable artifacts. The extensive experiments show the superiority of
our method. Our method provides the most pleasing and compelling viewing
experience in the qualitative experiments and outperforms state-of-the-art
still-image animation methods by a large margin in the quantitative evaluation.
Project url: https://nevergiveu.github.io/AutomaticHairBlowing/
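The wisp-extraction step is easy to picture with an off-the-shelf instance segmentation network. The sketch below is illustrative only: the single "wisp" class and the checkpoint are our assumptions, not details released with the paper.

```python
# Illustrative sketch: hair-wisp extraction cast as instance segmentation.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

def extract_wisps(image: torch.Tensor, score_thresh: float = 0.5) -> torch.Tensor:
    """Return per-wisp boolean masks (N, H, W) for a (3, H, W) image in [0, 1]."""
    model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)  # background + "wisp" (assumed class)
    # model.load_state_dict(torch.load("wisp_maskrcnn.pth"))  # hypothetical fine-tuned weights
    model.eval()
    with torch.no_grad():
        pred = model([image])[0]
    keep = pred["scores"] > score_thresh
    return pred["masks"][keep, 0] > 0.5  # threshold soft masks to boolean instances
```

Each retained mask would then serve as one animatable wisp instance handed to the wisp-aware animation module.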
Related papers
- AnimateAnywhere: Rouse the Background in Human Image Animation [50.737139810172465]
AnimateAnywhere is a framework for rousing the background in human image animation without requiring camera trajectories.
We introduce a background motion learner (BML) to learn background motions from human pose sequences.
Experiments demonstrate that our AnimateAnywhere effectively learns the background motion from human pose sequences.
arXiv Detail & Related papers (2025-04-28T14:35:01Z) - Learning to Animate Images from A Few Videos to Portray Delicate Human Actions [80.61838364885482]
Video generative models still struggle to animate static images into videos that portray delicate human actions.
In this paper, we explore the task of learning to animate images to portray delicate human actions using a small number of videos.
We propose FLASH, which learns generalizable motion patterns by forcing the model to reconstruct a video using the motion features and cross-frame correspondences of another video.
arXiv Detail & Related papers (2025-03-01T01:09:45Z) - PhysAnimator: Physics-Guided Generative Cartoon Animation [19.124321553546242]
PhysAnimator is a novel approach for generating anime-stylized animation from static anime illustrations.
To capture the fluidity and exaggeration characteristic of anime, we perform image-space deformable body simulations on extracted mesh geometries.
We extract and warp sketches from the simulation sequence, generating a texture-agnostic representation, and employ a sketch-guided video diffusion model to synthesize high-quality animation frames.
arXiv Detail & Related papers (2025-01-27T22:48:36Z) - EgoAvatar: Egocentric View-Driven and Photorealistic Full-body Avatars [56.56236652774294]
We propose a person-specific egocentric telepresence approach, which jointly models the photoreal digital avatar while also driving it from a single egocentric video.
Our experiments demonstrate a clear step towards egocentric and photoreal telepresence as our method outperforms baselines as well as competing methods.
arXiv Detail & Related papers (2024-09-22T22:50:27Z) - GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians [41.52673678183542]
This paper presents GaussianHair, a novel explicit hair representation.
It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities.
We further enhance this model with the "GaussianHair Scattering Model", adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color in uniform lighting.
arXiv Detail & Related papers (2024-02-16T07:13:24Z) - AnimateAnything: Fine-Grained Open Domain Image Animation with Motion
Guidance [13.416296247896042]
We introduce an open-domain image animation method that leverages the motion prior of a video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z) - DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors [63.43133768897087]
We propose a method to convert open-domain images into animated videos.
The key idea is to utilize the motion prior of text-to-video diffusion models by incorporating the image into the generative process as guidance.
Our proposed method can produce visually convincing and more logical & natural motions, as well as higher conformity to the input image.
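The conditioning idea can be pictured with a toy denoiser in which the still image's latent accompanies the noisy video latent at every step; this is a minimal stand-in under our own assumptions, not DynamiCrafter's actual architecture.

```python
import torch
import torch.nn as nn

class ImageConditionedDenoiser(nn.Module):
    """Toy image-guided video denoiser: the image latent is broadcast across
    time and concatenated channel-wise, so every denoising step sees the input
    image. Channel counts and the single conv layer are arbitrary choices."""
    def __init__(self, latent_ch: int = 4):
        super().__init__()
        self.net = nn.Conv3d(2 * latent_ch, latent_ch, kernel_size=3, padding=1)

    def forward(self, noisy_video: torch.Tensor, image_latent: torch.Tensor) -> torch.Tensor:
        # noisy_video: (B, C, T, H, W); image_latent: (B, C, H, W)
        cond = image_latent.unsqueeze(2).expand(-1, -1, noisy_video.shape[2], -1, -1)
        return self.net(torch.cat([noisy_video, cond], dim=1))  # predicted noise
```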
arXiv Detail & Related papers (2023-10-18T14:42:16Z) - Compositional 3D Human-Object Neural Animation [93.38239238988719]
Human-object interactions (HOIs) are crucial for human-centric scene understanding applications such as human-centric visual generation, AR/VR, and robotics.
In this paper, we address this challenge in HOI animation from a compositional perspective.
We adopt neural human-object deformation to model and render HOI dynamics based on implicit neural representations.
arXiv Detail & Related papers (2023-04-27T10:04:56Z) - HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed as HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but can also feasibly be inferred from real images.
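The intermediate representation is straightforward to hold in code; the container below is a minimal sketch, with field names, shapes, and dtypes of our own choosing rather than the paper's.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HairStepMaps:
    """Minimal container for a HairStep-style representation: a per-pixel
    strand-direction map plus an aligned depth map (shapes are assumptions)."""
    strand_map: np.ndarray  # (H, W, 2) unit vectors along local strand growth
    depth_map: np.ndarray   # (H, W) relative depth of the visible hair surface

    def __post_init__(self):
        assert self.strand_map.shape[:2] == self.depth_map.shape
```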
arXiv Detail & Related papers (2023-03-05T15:28:13Z) - NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation [23.625243364572867]
The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality.
We present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner.
Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal.
arXiv Detail & Related papers (2022-12-01T16:09:54Z) - QS-Craft: Learning to Quantize, Scrabble and Craft for Conditional Human Motion Animation [66.97112599818507]
This paper studies the task of conditional Human Motion Animation (cHMA).
Given a source image and a driving video, the model should synthesize a new frame sequence that animates the source image according to the motion in the driving video.
The key novelties come from the newly introduced three key steps: quantize, scrabble and craft.
arXiv Detail & Related papers (2022-03-22T11:34:40Z) - Hair Color Digitization through Imaging and Deep Inverse Graphics [8.605763075773746]
We introduce a novel method for hair color digitization based on inverse graphics and deep neural networks.
Our proposed pipeline allows capturing the color appearance of a physical hair sample and rendering synthetic images of hair with a similar appearance.
Our method is based on the combination of a controlled imaging device, a path-tracing rendering, and an inverse graphics model based on self-supervised machine learning.
arXiv Detail & Related papers (2022-02-08T08:57:04Z) - HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
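The primitive-based idea behind HVH can be loosely pictured as a cloud of small, posed voxel grids; the layout below is a toy sketch in that spirit, with every name and size an assumption rather than the paper's implementation.

```python
import torch

class HairPrimitives:
    """Toy layout for a primitive-based volumetric hair representation:
    each primitive carries a pose plus a small RGBA voxel grid."""
    def __init__(self, num: int = 4096, voxels: int = 8):
        self.centers = torch.zeros(num, 3)               # world-space positions
        self.rotations = torch.eye(3).repeat(num, 1, 1)  # per-primitive orientation
        self.scales = torch.ones(num, 3)                 # per-axis extent
        self.rgba = torch.zeros(num, 4, voxels, voxels, voxels)  # color + opacity

    def drive(self, offsets: torch.Tensor) -> None:
        """Re-pose the primitives from new control signals (e.g., tracked motion)."""
        self.centers = self.centers + offsets
```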