Real-Time Cleaning and Refinement of Facial Animation Signals
- URL: http://arxiv.org/abs/2008.01332v1
- Date: Tue, 4 Aug 2020 05:21:02 GMT
- Title: Real-Time Cleaning and Refinement of Facial Animation Signals
- Authors: Eloïse Berson, Catherine Soladié, Nicolas Stoiber
- Abstract summary: We propose a real-time animation refining system that preserves -- or even restores -- the natural dynamics of facial motions.
We leverage an off-the-shelf recurrent neural network architecture that learns proper facial dynamics patterns on clean animation data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing demand for real-time animated 3D content in the
entertainment industry and beyond, performance-based animation has garnered
interest among both academic and industrial communities. While recent solutions
for motion-capture animation have achieved impressive results, the generated
animations often contain artifacts and still require manual post-processing.
Existing real-time motion-capture solutions rely on standard signal-processing
methods to strengthen the temporal coherence of the resulting animations and to
remove inaccuracies. While these methods produce smooth results, they inherently
filter out part of the dynamics of facial motion, such as high-frequency
transient movements. In this work, we propose a real-time animation
refining system that preserves -- or even restores -- the natural dynamics of
facial motions. To do so, we leverage an off-the-shelf recurrent neural network
architecture that learns proper facial dynamics patterns on clean animation
data. We parametrize our system using the temporal derivatives of the signal,
enabling our network to process animations at any framerate. Qualitative
results show that our system is able to retrieve natural motion signals from
noisy or degraded input animation.
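As an illustration of the derivative parametrization described above, here is a minimal sketch, assuming PyTorch; `RefinementRNN` and its layer sizes are hypothetical stand-ins, not the authors' implementation. Expressing the signal as per-second derivatives before the recurrent network, then re-integrating, is one way to obtain the framerate independence the abstract describes.

```python
import torch
import torch.nn as nn

class RefinementRNN(nn.Module):
    """Hypothetical sketch of a derivative-domain animation refiner.

    Works on temporal derivatives of the animation signal so that the
    learned dynamics are expressed per unit time, not per frame.
    """

    def __init__(self, n_channels: int = 34, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, x: torch.Tensor, dt: float) -> torch.Tensor:
        # x: (batch, frames, channels) noisy blendshape coefficients.
        # Parametrize by temporal derivatives, scaled to per-second units.
        v = (x[:, 1:] - x[:, :-1]) / dt
        v_refined, _ = self.rnn(v)
        v_refined = self.head(v_refined)
        # Re-integrate the cleaned derivatives from the first frame.
        out = torch.cumsum(v_refined * dt, dim=1) + x[:, :1]
        return torch.cat([x[:, :1], out], dim=1)

model = RefinementRNN()
noisy = torch.rand(1, 120, 34)          # 120 frames of tracked coefficients
clean30 = model(noisy, dt=1.0 / 30.0)   # works at 30 fps ...
clean60 = model(noisy, dt=1.0 / 60.0)   # ... or any other framerate
```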
Related papers
- Spectral Motion Alignment for Video Motion Transfer using Diffusion Models [54.32923808964701]
Spectral Motion Alignment (SMA) is a framework that refines and aligns motion vectors using Fourier and wavelet transforms.
SMA learns motion patterns by incorporating frequency-domain regularization, facilitating the learning of whole-frame global motion dynamics.
Extensive experiments demonstrate SMA's efficacy in improving motion transfer while maintaining computational efficiency and compatibility across various video customization frameworks.
arXiv Detail & Related papers (2024-03-22T14:47:18Z)
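Under one plausible reading, the frequency-domain regularization mentioned in the SMA summary above amounts to matching Fourier magnitude spectra of motion-vector sequences; the sketch below (PyTorch assumed, `spectral_motion_loss` is a hypothetical name) illustrates that idea, omitting the wavelet branch.

```python
import torch

def spectral_motion_loss(pred_motion: torch.Tensor,
                         ref_motion: torch.Tensor) -> torch.Tensor:
    """Hypothetical frequency-domain regularizer in the spirit of SMA.

    Both inputs are motion-vector sequences of shape (frames, dims).
    Comparing FFT magnitudes along the time axis emphasizes global
    motion dynamics rather than per-frame differences.
    """
    pred_spec = torch.fft.rfft(pred_motion, dim=0).abs()
    ref_spec = torch.fft.rfft(ref_motion, dim=0).abs()
    return torch.mean((pred_spec - ref_spec) ** 2)
```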
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation [27.700371215886683]
Diffusion models have become the mainstream in visual generation research, owing to their robust generative capabilities.
In this paper, we propose a novel framework tailored for character animation.
By expanding the training data, our approach can animate arbitrary characters, yielding superior results in character animation compared to other image-to-video methods.
arXiv Detail & Related papers (2023-11-28T12:27:15Z)
- MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model [74.84435399451573]
This paper studies the human image animation task, which aims to generate a video of a certain reference identity following a particular motion sequence.
Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.
We introduce MagicAnimate, a diffusion-based framework that aims at enhancing temporal consistency, preserving reference image faithfully, and improving animation fidelity.
arXiv Detail & Related papers (2023-11-27T18:32:31Z)
- AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance [13.416296247896042]
We introduce an open-domain image animation method that leverages the motion prior of a video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z)
- CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior [27.989344587876964]
Speech-driven 3D facial animation has been widely studied, yet a gap remains in achieving realism and vividness.
We propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook.
We demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-01-06T05:04:32Z)
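The "code query task in a finite proxy space" in the CodeTalker summary above reads like a standard vector-quantization lookup; here is a generic sketch under that assumption (PyTorch; shapes and names are hypothetical, not CodeTalker's actual network).

```python
import torch

def query_codebook(features: torch.Tensor,
                   codebook: torch.Tensor) -> torch.Tensor:
    """Generic VQ-style lookup: replace each feature vector with its
    nearest codebook entry (the 'finite proxy space' of motions).

    features: (frames, dim), codebook: (n_codes, dim).
    """
    dists = torch.cdist(features, codebook)   # (frames, n_codes) distances
    indices = dists.argmin(dim=1)             # nearest code per frame
    return codebook[indices]                  # quantized motion codes

codebook = torch.randn(256, 64)      # learned offline on clean motion data
speech_feats = torch.randn(100, 64)  # hypothetical per-frame speech features
motion_codes = query_codebook(speech_feats, codebook)
```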
- Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models [3.1733862899654652]
We apply pruning algorithms to compress a neural network in the context of interactive character animation.
With the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model.
arXiv Detail & Related papers (2022-01-11T16:39:32Z)
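For a rough sense of the compression step in the entry above, the following sketch applies global magnitude pruning to a stand-in expert network using PyTorch's built-in pruning utilities; the paper explores several pruning algorithms, and this is merely one common variant.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A stand-in for one expert network of a mixture-of-experts animation model.
expert = nn.Sequential(nn.Linear(256, 512), nn.ELU(), nn.Linear(512, 256))

# Globally remove the 80% of weights with the smallest magnitude.
parameters = [(m, "weight") for m in expert if isinstance(m, nn.Linear)]
prune.global_unstructured(parameters,
                          pruning_method=prune.L1Unstructured,
                          amount=0.8)

# Make the pruning permanent (fold the masks into the weights).
for module, name in parameters:
    prune.remove(module, name)
```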
- Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis [56.550999933048075]
We propose a video-based synthesis method that tackles the challenges of in-the-wild videos and demonstrates high-quality results.
We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes.
We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-11-10T20:18:57Z)
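One plausible reading of a motion signature "modulating the generator weights" in the entry above is FiLM-style per-channel scaling; a minimal sketch under that assumption follows (PyTorch; `ModulatedConv` is a hypothetical name, not the paper's architecture).

```python
import torch
import torch.nn as nn

class ModulatedConv(nn.Module):
    """Hypothetical weight modulation: the motion signature predicts
    per-channel scales, equivalent to rescaling the convolution's
    weight columns (FiLM-style conditioning)."""

    def __init__(self, in_ch: int, out_ch: int, sig_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.to_scale = nn.Linear(sig_dim, in_ch)

    def forward(self, x: torch.Tensor, signature: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_ch, H, W); signature: (batch, sig_dim)
        scale = self.to_scale(signature).unsqueeze(-1).unsqueeze(-1)
        return self.conv(x * scale)

layer = ModulatedConv(16, 32, sig_dim=8)
out = layer(torch.rand(2, 16, 64, 64), torch.rand(2, 8))  # (2, 32, 64, 64)
```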
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [142.9900055577252]
We propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.
Our approach ensures highly accurate lip motion while also producing plausible animation of the parts of the face that are uncorrelated with the audio signal, such as eye blinks and eyebrow motion.
arXiv Detail & Related papers (2021-04-16T17:05:40Z)
- High-Fidelity Neural Human Motion Transfer from Monocular Video [71.75576402562247]
Video-based human motion transfer creates video animations of humans following a source motion.
We present a new framework which performs high-fidelity and temporally-consistent human motion transfer with natural pose-dependent non-rigid deformations.
In the experimental results, we significantly outperform the state-of-the-art in terms of video realism.
arXiv Detail & Related papers (2020-12-20T16:54:38Z)
- Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We present a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
arXiv Detail & Related papers (2020-11-30T18:59:06Z)
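The forward/backward blending step in the entry above can be sketched as a time-dependent cross-fade between features advected from the loop's start and from its end; this toy NumPy version is a simplified stand-in for the paper's feature-level splatting along an Eulerian motion field.

```python
import numpy as np

def blend_loop(forward_feats, backward_feats):
    """Toy forward/backward blend for a seamless loop.

    forward_feats:  frame t's features advected forward from frame 0.
    backward_feats: frame t's features advected backward from the last frame.
    The cross-fade weight hides the seam where the loop wraps around.
    """
    n = len(forward_feats)
    frames = []
    for t in range(n):
        alpha = t / max(n - 1, 1)  # 0 at loop start, 1 at loop end
        frames.append((1.0 - alpha) * forward_feats[t] + alpha * backward_feats[t])
    return frames

# Hypothetical per-frame feature maps of shape (H, W, C)
fwd = [np.random.rand(4, 4, 8) for _ in range(30)]
bwd = [np.random.rand(4, 4, 8) for _ in range(30)]
loop = blend_loop(fwd, bwd)
```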
- A Robust Interactive Facial Animation Editing System [0.0]
We propose a new learning-based approach to easily edit a facial animation from a set of intuitive control parameters.
We use a resolution-preserving, fully convolutional neural network that maps control parameters to sequences of blendshape coefficients.
The proposed system is robust and can handle coarse, exaggerated edits from non-specialist users.
arXiv Detail & Related papers (2020-07-18T08:31:02Z)
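A resolution-preserving fully convolutional mapping from control curves to blendshape coefficients, as described in the entry above, might look like the sketch below (PyTorch; 1-D convolutions with 'same' padding keep the sequence length; the channel counts are illustrative assumptions).

```python
import torch
import torch.nn as nn

# Sketch of a resolution-preserving temporal network: 1-D convolutions
# with matching padding keep the output sequence as long as the input,
# mapping control-parameter curves to blendshape-coefficient curves.
n_controls, n_blendshapes = 8, 34
editor = nn.Sequential(
    nn.Conv1d(n_controls, 64, kernel_size=5, padding=2),
    nn.ELU(),
    nn.Conv1d(64, 64, kernel_size=5, padding=2),
    nn.ELU(),
    nn.Conv1d(64, n_blendshapes, kernel_size=5, padding=2),
)

controls = torch.rand(1, n_controls, 240)  # 240 frames of edit curves
coeffs = editor(controls)                  # (1, n_blendshapes, 240)
```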
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.