Towards Lightweight Neural Animation : Exploration of Neural Network
Pruning in Mixture of Experts-based Animation Models
- URL: http://arxiv.org/abs/2201.04042v1
- Date: Tue, 11 Jan 2022 16:39:32 GMT
- Title: Towards Lightweight Neural Animation : Exploration of Neural Network
Pruning in Mixture of Experts-based Animation Models
- Authors: Antoine Maiorca, Nathan Hubens, Sohaib Laraba and Thierry Dutoit
- Abstract summary: We apply pruning algorithms to compress a neural network in the context of interactive character animation.
With the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model.
- Score: 3.1733862899654652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few years, neural character animation has emerged and offered an
automatic method for animating virtual characters. Their motion is synthesized
by a neural network. Controlling this movement in real time with a user-defined
control signal is also an important task, in video games for example. Solutions
based on fully-connected layers (MLPs) and Mixture-of-Experts (MoE) have given
impressive results in generating and controlling various movements with
close-range interactions between the environment and the virtual character.
However, a major shortcoming of fully-connected layers is their computational
and memory cost, which may lead to sub-optimal solutions. In this work, we
apply pruning algorithms to compress an MLP-MoE neural network in the context
of interactive character animation, which reduces its number of parameters and
accelerates its computation time, with a trade-off between this acceleration and
the synthesized motion quality. This work demonstrates that, with the same
number of experts and parameters, the pruned model produces fewer motion
artifacts than the dense model, and the learned high-level motion features are
similar for both models.
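The listing carries no code, so the sketch below is only a rough illustration of the idea: a small Mixture-of-Experts of MLP experts, in the spirit of MoE-based animation controllers, with unstructured magnitude pruning applied to the expert weights via PyTorch's torch.nn.utils.prune. The layer sizes, the number of experts, the gating scheme and the 80% sparsity target are assumptions for illustration, not the authors' configuration.
```python
# Minimal sketch (not the authors' implementation): a small Mixture-of-Experts
# of MLP experts with magnitude-based weight pruning, assuming PyTorch.
# Layer sizes, the number of experts and the 80% sparsity target are
# illustrative choices only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class ExpertMLP(nn.Module):
    """One fully-connected expert; the animation network blends several of these."""
    def __init__(self, in_dim=256, hidden=512, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class MoEAnimationNet(nn.Module):
    """A gating network predicts blend weights; the output is a weighted sum of experts."""
    def __init__(self, in_dim=256, out_dim=256, num_experts=8):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(in_dim, num_experts), nn.Softmax(dim=-1))
        self.experts = nn.ModuleList(
            [ExpertMLP(in_dim, out_dim=out_dim) for _ in range(num_experts)]
        )

    def forward(self, x):
        weights = self.gate(x)                                   # (batch, num_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, num_experts, out_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)         # (batch, out_dim)


def prune_experts(model, amount=0.8):
    """Zero out the smallest-magnitude weights in every expert's Linear layers."""
    for expert in model.experts:
        for module in expert.modules():
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=amount)
                prune.remove(module, "weight")  # bake the mask into the weights
    return model


if __name__ == "__main__":
    net = prune_experts(MoEAnimationNet())
    pose = net(torch.randn(4, 256))  # placeholder control/pose features
    zeros = sum((p == 0).sum().item() for p in net.parameters())
    total = sum(p.numel() for p in net.parameters())
    print(pose.shape, f"{zeros / total:.1%} of weights are zero")
```
Note that unstructured magnitude pruning by itself only zeroes weights; turning that sparsity into an actual runtime speed-up requires sparse or structured execution, which is part of the acceleration-versus-motion-quality trade-off the abstract refers to.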
Related papers
- Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics [67.97235923372035]
We present Puppet-Master, an interactive video generative model that can serve as a motion prior for part-level dynamics.
At test time, given a single image and a sparse set of motion trajectories, Puppet-Master can synthesize a video depicting realistic part-level motion faithful to the given drag interactions.
arXiv Detail & Related papers (2024-08-08T17:59:38Z)
- Shape Conditioned Human Motion Generation with Diffusion Model [0.0]
We propose a Shape-conditioned Motion Diffusion model (SMD), which enables the generation of motion sequences directly in mesh format.
We also propose a Spectral-Temporal Autoencoder (STAE) to leverage cross-temporal dependencies within the spectral domain.
arXiv Detail & Related papers (2024-05-10T19:06:41Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Single Motion Diffusion [33.81898532874481]
We present SinMDM, a model designed to learn the internal motifs of a single motion sequence with arbitrary topology and synthesize motions of arbitrary length that are faithful to them.
SinMDM can be applied in various contexts, including spatial and temporal in-betweening, motion expansion, style transfer, and crowd animation.
Our results show that SinMDM outperforms existing methods both in quality and time-space efficiency.
arXiv Detail & Related papers (2023-02-12T13:02:19Z)
- Diverse Dance Synthesis via Keyframes with Transformer Controllers [10.23813069057791]
We propose a novel keyframe-based motion generation network based on multiple constraints, which can achieve diverse dance synthesis via learned knowledge.
The backbone of our network is a hierarchical RNN module composed of two long short-term memory (LSTM) units, in which the first LSTM is utilized to embed the posture information of the historical frames into a latent space.
Our framework contains two Transformer-based controllers, which are used to model the constraints of the root trajectory and the velocity factor respectively.
arXiv Detail & Related papers (2022-07-13T00:56:46Z)
- Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline only requires low-frame-rate videos and unpaired human motion data but does not require high-frame-rate videos for training.
arXiv Detail & Related papers (2021-11-01T15:32:51Z)
- Unsupervised Motion Representation Learning with Capsule Autoencoders [54.81628825371412]
Motion Capsule Autoencoder (MCAE) models motion in a two-level hierarchy.
MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets.
arXiv Detail & Related papers (2021-10-01T16:52:03Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
- Neural Face Models for Example-Based Visual Speech Synthesis [2.2817442144155207]
We present a marker-less approach for facial motion capture based on multi-view video.
We learn a neural representation of facial expressions, which is used to seamlessly concatenate facial performances during the animation procedure.
arXiv Detail & Related papers (2020-09-22T07:35:33Z)
- Generative Tweening: Long-term Inbetweening of 3D Human Motions [40.16462039509098]
We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions.
Trained with 79 classes of captured motion data, our network performs robustly on a variety of highly complex motion styles.
arXiv Detail & Related papers (2020-05-18T17:04:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.