A Robust Interactive Facial Animation Editing System
- URL: http://arxiv.org/abs/2007.09367v1
- Date: Sat, 18 Jul 2020 08:31:02 GMT
- Title: A Robust Interactive Facial Animation Editing System
- Authors: Eloïse Berson, Catherine Soladié, Vincent Barrielle, Nicolas Stoiber
- Abstract summary: We propose a new learning-based approach to easily edit a facial animation from a set of intuitive control parameters.
We use a resolution-preserving fully convolutional neural network that maps control parameters to blendshape coefficient sequences.
The proposed system is robust and can handle coarse, exaggerated edits from non-specialist users.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past few years, the automatic generation of facial animation for
virtual characters has garnered interest among the animation research and
industry communities. Recent research contributions leverage machine-learning
approaches to enable impressive capabilities at generating plausible facial
animation from audio and/or video signals. However, these approaches do not
address the problem of animation editing, that is, the need to correct an
unsatisfactory baseline animation or to modify the animation content itself. In
facial animation pipelines, the process of editing an existing animation is
just as important and time-consuming as producing a baseline. In this work, we
propose a new learning-based approach to easily edit a facial animation from a
set of intuitive control parameters. To cope with high-frequency components in
facial movements and preserve temporal coherency in the animation, we use a
resolution-preserving fully convolutional neural network that maps control
parameters to blendshape coefficient sequences. We stack an additional
resolution-preserving animation autoencoder after the regressor to ensure that
the system outputs natural-looking animation. The proposed system is robust and
can handle coarse, exaggerated edits from non-specialist users. It also retains
the high-frequency motion of the facial animation.
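To make the described pipeline concrete, below is a minimal sketch of how the two components could be wired together, assuming PyTorch, dilated temporal convolutions as the resolution-preserving mechanism, and illustrative dimensions (8 control parameters, 34 blendshape coefficients); the paper's actual layer counts, kernel sizes, and training losses are not reproduced here.
```python
# Minimal sketch (not the authors' implementation): a resolution-preserving
# 1D fully convolutional regressor that maps per-frame control parameters to
# blendshape coefficients, followed by a convolutional animation autoencoder.
# Dimensions, layer counts, and kernel sizes below are illustrative guesses.
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """Dilated 1D convolution that keeps the temporal resolution (no pooling or striding)."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):                    # x: (batch, channels, frames)
        return self.act(self.conv(x)) + x    # residual connection

class ControlToBlendshapeRegressor(nn.Module):
    """Maps control-parameter sequences to blendshape-coefficient sequences."""
    def __init__(self, n_controls=8, n_blendshapes=34, hidden=128):
        super().__init__()
        self.inp = nn.Conv1d(n_controls, hidden, kernel_size=1)
        self.blocks = nn.Sequential(*[TemporalConvBlock(hidden, d) for d in (1, 2, 4, 8)])
        self.out = nn.Conv1d(hidden, n_blendshapes, kernel_size=1)

    def forward(self, controls):             # controls: (batch, n_controls, frames)
        return self.out(self.blocks(self.inp(controls)))

class AnimationAutoencoder(nn.Module):
    """Resolution-preserving autoencoder stacked after the regressor to keep
    the output close to natural-looking animation."""
    def __init__(self, n_blendshapes=34, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv1d(n_blendshapes, hidden, 3, padding=1),
                                     TemporalConvBlock(hidden, 1),
                                     TemporalConvBlock(hidden, 2))
        self.decoder = nn.Sequential(TemporalConvBlock(hidden, 2),
                                     TemporalConvBlock(hidden, 1),
                                     nn.Conv1d(hidden, n_blendshapes, 3, padding=1))

    def forward(self, coeffs):               # coeffs: (batch, n_blendshapes, frames)
        return self.decoder(self.encoder(coeffs))

# Editing pass: coarse user edits expressed as control curves are turned into
# a full blendshape animation, then regularized by the autoencoder.
regressor, autoencoder = ControlToBlendshapeRegressor(), AnimationAutoencoder()
controls = torch.randn(1, 8, 240)               # 240 frames of edited control curves
animation = autoencoder(regressor(controls))    # (1, 34, 240) blendshape coefficients
```
In practice, the edited control curves would come from the interactive interface rather than random tensors, and the autoencoder would be trained on clean animation data before being stacked after the regressor.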
Related papers
- Audio2Rig: Artist-oriented deep learning tool for facial animation [0.0]
Audio2Rig is a new deep learning tool leveraging previously animated sequences of a show, to generate facial and lip sync rig animation from an audio file.
Based in Maya, it learns from any production rig without any adjustment and generates high-quality, stylized animations.
Our method shows excellent results, generating fine animation details while respecting the show style.
arXiv Detail & Related papers (2024-05-30T18:37:21Z)
- AnimateZoo: Zero-shot Video Generation of Cross-Species Animation via Subject Alignment [64.02822911038848]
We present AnimateZoo, a zero-shot diffusion-based video generator to produce animal animations.
The key technique used in AnimateZoo is subject alignment, which includes two steps.
Our model is capable of generating videos characterized by accurate movements, consistent appearance, and high-fidelity frames.
arXiv Detail & Related papers (2024-04-07T12:57:41Z)
- Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters [24.615066741391125]
We propose a holistic solution to automatically animate virtual human faces.
A deep learning model was first trained to retarget facial expressions from input face images to virtual human faces.
A practical toolkit was developed using Unity 3D, making it compatible with the most popular VR applications.
arXiv Detail & Related papers (2024-02-21T11:35:20Z)
- AnimateZero: Video Diffusion Models are Zero-Shot Image Animators [63.938509879469024]
We propose AnimateZero to unveil the pre-trained text-to-video diffusion model, i.e., AnimateDiff.
For appearance control, we borrow intermediate latents and their features from the text-to-image (T2I) generation.
For temporal control, we replace the global temporal attention of the original T2V model with our proposed positional-corrected window attention.
arXiv Detail & Related papers (2023-12-06T13:39:35Z)
- MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model [74.84435399451573]
This paper studies the human image animation task, which aims to generate a video of a certain reference identity following a particular motion sequence.
Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.
We introduce MagicAnimate, a diffusion-based framework that aims at enhancing temporal consistency, preserving reference image faithfully, and improving animation fidelity.
arXiv Detail & Related papers (2023-11-27T18:32:31Z)
- AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance [13.416296247896042]
We introduce an open domain image animation method that leverages the motion prior of video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z)
- Audio-Driven Talking Face Generation with Diverse yet Realistic Facial Animations [61.65012981435094]
DIRFA is a novel method that can generate talking faces with diverse yet realistic facial animations from the same driving audio.
To accommodate fair variation of plausible facial animations for the same audio, we design a transformer-based probabilistic mapping network.
We show that DIRFA can generate talking faces with realistic facial animations effectively.
arXiv Detail & Related papers (2023-04-18T12:36:15Z)
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [142.9900055577252]
We propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.
Our approach ensures highly accurate lip motion, while also producing plausible animation of the parts of the face that are uncorrelated to the audio signal, such as eye blinks and eyebrow motion.
arXiv Detail & Related papers (2021-04-16T17:05:40Z)
- Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z)
- Going beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances [7.7824496657259665]
We present an end-to-end pipeline for the creation of high-quality animatable volumetric video content of human performances.
Semantic enrichment and geometric animation ability are achieved by establishing temporal consistency in the 3D data.
For pose editing, we exploit the captured data as much as possible and kinematically deform the captured frames to fit a desired pose.
arXiv Detail & Related papers (2020-09-02T09:46:12Z)
- Real-Time Cleaning and Refinement of Facial Animation Signals [0.0]
We propose a real-time animation refining system that preserves -- or even restores -- the natural dynamics of facial motions.
We leverage an off-the-shelf recurrent neural network architecture that learns proper facial dynamics patterns on clean animation data; a minimal sketch of this kind of refinement network is given after this entry.
arXiv Detail & Related papers (2020-08-04T05:21:02Z)
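As a rough illustration of the refinement idea in the entry above, here is a minimal sketch of an off-the-shelf recurrent refiner, assuming PyTorch, a GRU backbone, and a hypothetical 34-dimensional blendshape vector per frame; the cited paper's exact architecture, feature set, and training procedure are not specified here.
```python
# Minimal sketch (assumptions, not the cited paper's implementation): an
# off-the-shelf recurrent network trained on clean animation data to refine
# noisy per-frame blendshape coefficients in a streaming, real-time fashion.
import torch
import torch.nn as nn

class RecurrentAnimationRefiner(nn.Module):
    def __init__(self, n_blendshapes=34, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_blendshapes, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_blendshapes)

    def forward(self, noisy, state=None):
        # noisy: (batch, frames, n_blendshapes); the network predicts a
        # correction term, so an untrained model stays close to the identity.
        features, state = self.gru(noisy, state)
        return noisy + self.proj(features), state

# Streaming usage: feed one captured frame at a time and carry the GRU state.
refiner = RecurrentAnimationRefiner()
state = None
frame = torch.randn(1, 1, 34)                  # one noisy tracked frame
clean_frame, state = refiner(frame, state)
```
Training such a refiner typically amounts to degrading clean animation sequences with synthetic noise and asking the network to recover the clean signal, which is what lets it restore natural dynamics at runtime.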