Wassersplines for Stylized Neural Animation
- URL: http://arxiv.org/abs/2201.11940v1
- Date: Fri, 28 Jan 2022 05:36:02 GMT
- Title: Wassersplines for Stylized Neural Animation
- Authors: Paul Zhang, Dmitriy Smirnov, Justin Solomon
- Abstract summary: Much of computer-generated animation is created by manipulating meshes with rigs.
We introduce Wassersplines, a novel trajectory inference method for animating unstructured densities.
We demonstrate our tool on various keyframe interpolation problems to produce temporally-coherent animations without meshing or rigging.
- Score: 36.43240177060714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of computer-generated animation is created by manipulating meshes with
rigs. While this approach works well for animating articulated objects like
animals, it has limited flexibility for animating less structured creatures
such as the Drunn in "Raya and the Last Dragon." We introduce Wassersplines, a
novel trajectory inference method for animating unstructured densities based on
recent advances in continuous normalizing flows and optimal transport. The key
idea is to train a neurally-parameterized velocity field that represents the
motion between keyframes. Trajectories are then computed by pushing keyframes
through the velocity field. We solve an additional Wasserstein barycenter
interpolation problem to guarantee strict adherence to keyframes. Our tool can
stylize trajectories through a variety of PDE-based regularizers to create
different visual effects. We demonstrate our tool on various keyframe
interpolation problems to produce temporally-coherent animations without
meshing or rigging.
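To make the abstract's key idea concrete, here is a minimal sketch of its two central ingredients: a neurally-parameterized velocity field and the pushforward of keyframe samples through that field. This is an illustrative assumption of how such a setup could look, not the authors' code; the network sizes and the explicit-Euler integrator are arbitrary choices, and the Wasserstein barycenter step and PDE-based regularizers from the paper are not shown.

```python
# Minimal sketch (assumed, not the authors' code): a neural velocity field
# v_theta(x, t) and an Euler pushforward of keyframe samples through it.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """v_theta(x, t): maps a batch of 2D points and a time to 2D velocities."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),      # input is (x, y, t)
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),                 # output is (vx, vy)
        )

    def forward(self, x, t):
        t = t.expand(x.shape[0], 1)               # broadcast time to the batch
        return self.net(torch.cat([x, t], dim=1))

def push_forward(v, x0, t0=0.0, t1=1.0, steps=100):
    """Advect samples x0 along dx/dt = v(x, t) from t0 to t1 (explicit Euler)."""
    x, dt = x0, (t1 - t0) / steps
    for k in range(steps):
        t = torch.full((1, 1), t0 + k * dt)
        x = x + dt * v(x, t)
    return x

# Usage: push samples from one keyframe density toward the next keyframe time.
v = VelocityField()
keyframe_samples = torch.rand(256, 2)             # stand-in for a keyframe density
advected = push_forward(v, keyframe_samples)      # shape (256, 2)
```

In the full method, such a field would be trained so that the advected samples match the next keyframe under a transport-based objective, with the barycenter step enforcing exact keyframe adherence and PDE regularizers on the field shaping the motion's style.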
Related papers
- Thin-Plate Spline-based Interpolation for Animation Line Inbetweening [54.69811179222127]
Chamfer Distance (CD) is commonly adopted for evaluating inbetweening performance; a minimal CD sketch appears after this list.
We propose a simple yet effective method for animation line inbetweening that adopts thin-plate spline-based transformation.
Our method outperforms existing approaches by delivering high-quality results with enhanced fluidity.
arXiv Detail & Related papers (2024-08-17T08:05:31Z) - Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics [67.97235923372035]
We present Puppet-Master, an interactive video generative model that can serve as a motion prior for part-level dynamics.
At test time, given a single image and a sparse set of motion trajectories, Puppet-Master can synthesize a video depicting realistic part-level motion faithful to the given drag interactions.
arXiv Detail & Related papers (2024-08-08T17:59:38Z) - AnimateZero: Video Diffusion Models are Zero-Shot Image Animators [63.938509879469024]
We propose AnimateZero to unveil the pre-trained text-to-video diffusion model, i.e., AnimateDiff.
For appearance control, we borrow intermediate latents and their features from the text-to-image (T2I) generation.
For temporal control, we replace the global temporal attention of the original T2V model with our proposed positional-corrected window attention.
arXiv Detail & Related papers (2023-12-06T13:39:35Z) - AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance [13.416296247896042]
We introduce an open domain image animation method that leverages the motion prior of video diffusion model.
Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed.
We validate the effectiveness of our method through rigorous experiments on an open-domain dataset.
arXiv Detail & Related papers (2023-11-21T03:47:54Z) - Image2Gif: Generating Continuous Realistic Animations with Warping NODEs [0.8218964199015377]
We propose a new framework, Warping Neural ODE, for generating a smooth animation (video frame interpolation) in a continuous manner.
This allows us to achieve the smoothness and the realism of an animation with infinitely small time steps between the frames.
We show the application of our work in generating an animation given two frames, in different training settings, including Generative Adversarial Network (GAN) and with $L_2$ loss.
arXiv Detail & Related papers (2022-05-09T18:39:47Z) - Improving the Perceptual Quality of 2D Animation Interpolation [37.04208600867858]
Traditional 2D animation is labor-intensive, often requiring animators to draw twelve illustrations per second of movement.
Lower framerates result in larger displacements and occlusions, and discrete perceptual elements (e.g., lines and solid-color regions) pose difficulties for texture-oriented convolutional networks.
Previous work tried addressing these issues, but used unscalable methods and focused on pixel-perfect performance.
We build a scalable system more appropriately centered on perceptual quality for this artistic domain.
arXiv Detail & Related papers (2021-11-24T20:51:29Z) - Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline only requires low-frame-rate videos and unpaired human motion data but does not require high-frame-rate videos for training.
arXiv Detail & Related papers (2021-11-01T15:32:51Z) - Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z) - A Robust Interactive Facial Animation Editing System [0.0]
We propose a new learning-based approach to easily edit a facial animation from a set of intuitive control parameters.
We use a resolution-preserving fully convolutional neural network that maps control parameters to blendshape coefficient sequences; a minimal sketch of this kind of network appears after this list.
The proposed system is robust and can handle coarse, exaggerated edits from non-specialist users.
arXiv Detail & Related papers (2020-07-18T08:31:02Z)
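Two of the entries above name concrete components that are easy to sketch. First, the Chamfer Distance from the thin-plate spline entry: a minimal, standalone definition of the common symmetric variant (averaged nearest-neighbor distances in both directions), which is an assumption about the exact form used there.

```python
# Minimal sketch of a symmetric Chamfer Distance between two point sets;
# the exact variant used in the cited paper may differ.
import torch

def chamfer_distance(a, b):
    """a: (N, D) points, b: (M, D) points; returns a scalar tensor."""
    d = torch.cdist(a, b)                         # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Example: compare two 2D point sets sampled from line drawings.
a, b = torch.rand(100, 2), torch.rand(120, 2)
print(chamfer_distance(a, b).item())
```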
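Second, the resolution-preserving fully convolutional network from the facial animation editing entry. The sketch below shows only the general pattern: same-padded 1D convolutions keep the output at the input's frame count, so control parameters map frame-for-frame to blendshape coefficients. Channel counts and kernel widths are illustrative assumptions, not the paper's.

```python
# Minimal sketch (assumed sizes): same-padded 1D convolutions preserve the
# temporal resolution, mapping control parameters to blendshape coefficients.
import torch
import torch.nn as nn

n_controls, n_blendshapes = 8, 32                 # illustrative, not from the paper

net = nn.Sequential(
    nn.Conv1d(n_controls, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, n_blendshapes, kernel_size=5, padding=2),
)

controls = torch.rand(1, n_controls, 120)         # (batch, channels, frames)
coeffs = net(controls)                            # still 120 frames: (1, 32, 120)
```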