Real-Time Cleaning and Refinement of Facial Animation Signals
- URL: http://arxiv.org/abs/2008.01332v1
- Date: Tue, 4 Aug 2020 05:21:02 GMT
- Title: Real-Time Cleaning and Refinement of Facial Animation Signals
- Authors: Eloïse Berson, Catherine Soladié, Nicolas Stoiber
- Abstract summary: We propose a real-time animation refining system that preserves -- or even restores -- the natural dynamics of facial motions.
We leverage an off-the-shelf recurrent neural network architecture that learns proper facial dynamics patterns on clean animation data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing demand for real-time animated 3D content in the
entertainment industry and beyond, performance-based animation has garnered
interest among both academic and industrial communities. While recent solutions
for motion-capture animation have achieved impressive results, handmade
post-processing is often needed, as the generated animations often contain
artifacts. Existing real-time motion capture solutions have opted for standard
signal processing methods to strengthen temporal coherence of the resulting
animations and remove inaccuracies. While these methods produce smooth results,
they inherently filter out part of the dynamics of facial motion, such as
high-frequency transient movements. In this work, we propose a real-time animation
refining system that preserves -- or even restores -- the natural dynamics of
facial motions. To do so, we leverage an off-the-shelf recurrent neural network
architecture that learns proper facial dynamics patterns on clean animation
data. We parametrize our system using the temporal derivatives of the signal,
enabling our network to process animations at any framerate. Qualitative
results show that our system is able to retrieve natural motion signals from
noisy or degraded input animation.
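As a rough illustration of the approach summarized above, the sketch below shows how a derivative-parametrized recurrent refiner might be wired up, assuming PyTorch and blendshape-coefficient animation signals; the layer sizes, names, and integration scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a derivative-parametrized recurrent refiner
# (illustrative only; not the paper's released code).
import torch
import torch.nn as nn

class DerivativeRefiner(nn.Module):
    def __init__(self, n_coeffs: int = 34, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_coeffs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_coeffs)

    def forward(self, anim: torch.Tensor, dt: float) -> torch.Tensor:
        # anim: (batch, frames, n_coeffs) noisy blendshape coefficients.
        # Parametrize by temporal derivatives so the model is framerate-agnostic.
        deriv = torch.diff(anim, dim=1) / dt              # (batch, frames-1, n_coeffs)
        refined_deriv, _ = self.rnn(deriv)
        refined_deriv = self.head(refined_deriv)
        # Re-integrate the cleaned derivatives from the first frame.
        first = anim[:, :1, :]
        return torch.cat([first, first + torch.cumsum(refined_deriv * dt, dim=1)], dim=1)

if __name__ == "__main__":
    model = DerivativeRefiner()
    noisy = torch.rand(1, 120, 34)        # 120 frames of 34 blendshape weights
    clean = model(noisy, dt=1.0 / 30.0)   # assuming a 30 fps capture
    print(clean.shape)                    # torch.Size([1, 120, 34])
```

Feeding finite differences scaled by the frame interval, rather than raw coefficient values, is what would let such a model process animations recorded at different framerates, as the abstract describes.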
Related papers
- PhysAnimator: Physics-Guided Generative Cartoon Animation [19.124321553546242]
We introduce PhysAnimator, a novel approach for generating anime-stylized animation from static anime illustrations.
To capture the fluidity and exaggeration characteristic of anime, we perform image-space deformable body simulations on extracted mesh geometries.
We extract and warp sketches from the simulation sequence, generating a texture-agnostic representation, and employ a sketch-guided video diffusion model to synthesize high-quality animation frames.
arXiv Detail & Related papers (2025-01-27T22:48:36Z)
- X-Dyna: Expressive Dynamic Human Image Animation [49.896933584815926]
X-Dyna is a zero-shot, diffusion-based pipeline for animating a single human image.
It generates realistic, context-aware dynamics for both the subject and the surrounding environment.
arXiv Detail & Related papers (2025-01-17T08:10:53Z)
- Motion Prompting: Controlling Video Generation with Motion Trajectories [57.049252242807874]
We train a video generation model conditioned on sparse or dense video trajectories.
We translate high-level user requests into detailed, semi-dense motion prompts.
We demonstrate our approach through various applications, including camera and object motion control, "interacting" with an image, motion transfer, and image editing.
arXiv Detail & Related papers (2024-12-03T18:59:56Z)
- MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model [74.84435399451573]
This paper studies the human image animation task, which aims to generate a video of a certain reference identity following a particular motion sequence.
Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.
We introduce MagicAnimate, a diffusion-based framework that aims at enhancing temporal consistency, preserving reference image faithfully, and improving animation fidelity.
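For context on the frame-warping baseline mentioned above, here is a minimal, hypothetical sketch of warping a reference image with a dense displacement field via PyTorch's grid_sample; it is not MagicAnimate's method, which is diffusion-based, but the prior technique the summary contrasts against, and the function and variable names are illustrative.

```python
# Hypothetical sketch of frame warping: deform a reference image toward a
# target pose using a dense per-pixel displacement field (illustrative only).
import torch
import torch.nn.functional as F

def warp_reference(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    # image: (1, 3, H, W) reference frame; flow: (1, 2, H, W) pixel displacements.
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + flow.permute(0, 2, 3, 1)
    # Normalize sampling coordinates to [-1, 1] as grid_sample expects.
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(image, grid, align_corners=True)

warped = warp_reference(torch.rand(1, 3, 64, 64), torch.zeros(1, 2, 64, 64))
```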
arXiv Detail & Related papers (2023-11-27T18:32:31Z)
- AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance [13.416296247896042]
We introduce an open domain image animation method that leverages the motion prior of video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z)
- CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior [27.989344587876964]
Speech-driven 3D facial animation has been widely studied, yet there is still a gap to achieving realism and vividness.
We propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook.
We demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively.
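To make the "code query in a finite proxy space" idea concrete, the following is a minimal, hypothetical sketch of a nearest-neighbour codebook lookup; the shapes and the distance-based selection are assumptions for illustration and do not reproduce CodeTalker's actual architecture.

```python
# Hypothetical sketch of querying a learned motion codebook: continuous features
# are snapped to their closest discrete code (illustrative only).
import torch

def query_codebook(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # features: (frames, dim) continuous motion features predicted from speech.
    # codebook: (n_codes, dim) learned discrete motion primitives.
    dists = torch.cdist(features, codebook)   # (frames, n_codes) pairwise distances
    indices = dists.argmin(dim=-1)            # pick the closest code per frame
    return codebook[indices]                  # quantized motion features

quantized = query_codebook(torch.randn(120, 64), torch.randn(256, 64))
```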
arXiv Detail & Related papers (2023-01-06T05:04:32Z)
- Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models [3.1733862899654652]
We apply pruning algorithms to compress a neural network in the context of interactive character animation.
This work demonstrates that, with the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model.
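As a concrete illustration, below is a minimal sketch of one common pruning criterion, global magnitude pruning over linear layers; the paper explores pruning algorithms for mixture-of-experts animation models, so the specific criterion and layer choice here are assumptions.

```python
# Hypothetical sketch of global magnitude pruning: zero out the smallest-magnitude
# weights across all linear layers (illustrative only).
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.8) -> None:
    # Gather all linear-layer weight magnitudes and find the sparsity threshold.
    weights = torch.cat([m.weight.abs().flatten()
                         for m in model.modules() if isinstance(m, nn.Linear)])
    threshold = torch.quantile(weights, sparsity)
    for m in model.modules():
        if isinstance(m, nn.Linear):
            m.weight.data.mul_((m.weight.abs() > threshold).float())

# Toy "expert" network standing in for one expert of a mixture-of-experts model.
expert = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
magnitude_prune(expert, sparsity=0.8)
```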
arXiv Detail & Related papers (2022-01-11T16:39:32Z)
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [142.9900055577252]
We propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.
Our approach ensures highly accurate lip motion while also producing plausible animation of the parts of the face that are uncorrelated with the audio signal, such as eye blinks and eyebrow motion.
arXiv Detail & Related papers (2021-04-16T17:05:40Z)
- High-Fidelity Neural Human Motion Transfer from Monocular Video [71.75576402562247]
Video-based human motion transfer creates video animations of humans following a source motion.
We present a new framework which performs high-fidelity and temporally-consistent human motion transfer with natural pose-dependent non-rigid deformations.
In the experimental results, we significantly outperform the state-of-the-art in terms of video realism.
arXiv Detail & Related papers (2020-12-20T16:54:38Z)
- Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We show a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
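A heavily simplified, hypothetical sketch of the forward/backward blending idea follows; the actual system advects and splats deep features, whereas this toy version warps the image itself with a nearest-neighbour lookup, so it is only meant to convey how a static (Eulerian) velocity field can be integrated from both ends of the loop and cross-faded.

```python
# Hypothetical toy sketch of looping with a static motion field (illustrative only).
import numpy as np

def warp(image: np.ndarray, disp: np.ndarray) -> np.ndarray:
    # image: (H, W, C); disp: (H, W, 2) per-pixel displacement (dx, dy).
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - disp[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - disp[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def looping_frames(image: np.ndarray, velocity: np.ndarray, n_frames: int):
    # velocity: (H, W, 2) static Eulerian motion field, in pixels per frame.
    for t in range(n_frames):
        forward = warp(image, velocity * t)                 # advected from the loop start
        backward = warp(image, velocity * (t - n_frames))   # advected back from the loop end
        alpha = t / n_frames                                # cross-fade weight
        yield (1.0 - alpha) * forward + alpha * backward
```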
arXiv Detail & Related papers (2020-11-30T18:59:06Z)
- A Robust Interactive Facial Animation Editing System [0.0]
We propose a new learning-based approach to easily edit a facial animation from a set of intuitive control parameters.
We use a resolution-preserving fully convolutional neural network that maps control parameters to blendshape coefficient sequences.
The proposed system is robust and can handle coarse, exaggerated edits from non-specialist users.
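To illustrate what a resolution-preserving fully convolutional mapping from control parameters to blendshape coefficients could look like, here is a minimal sketch; the channel counts, kernel sizes, and number of layers are assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a resolution-preserving 1-D fully convolutional editor:
# "same" padding keeps the temporal length of the sequence (illustrative only).
import torch
import torch.nn as nn

n_controls, n_blendshapes = 8, 34
editor = nn.Sequential(
    nn.Conv1d(n_controls, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(64, n_blendshapes, kernel_size=5, padding=2),
)

controls = torch.rand(1, n_controls, 240)   # 240 frames of intuitive control curves
coeffs = editor(controls)                   # (1, 34, 240): same temporal resolution
```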
arXiv Detail & Related papers (2020-07-18T08:31:02Z)