Learning-based pose edition for efficient and interactive design
- URL: http://arxiv.org/abs/2107.00397v1
- Date: Thu, 1 Jul 2021 12:15:02 GMT
- Title: Learning-based pose edition for efficient and interactive design
- Authors: Léon Victor (LIRIS, INSA Lyon), Alexandre Meyer (LIRIS, UCBL),
Saïda Bouakaz (LIRIS, UCBL)
- Abstract summary: In computer-aided animation, artists define the key poses of a character by manipulating its skeleton.
A character pose must respect many ill-defined constraints, so the resulting realism depends greatly on the animator's skill and knowledge.
We describe an efficient tool for pose design that allows users to intuitively manipulate a pose to create character animations.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Authoring an appealing animation for a virtual character is a
challenging task. In computer-aided keyframe animation, artists define the key
poses of a character by manipulating its underlying skeleton. To look
plausible, a character pose must respect many ill-defined constraints, so the
resulting realism depends greatly on the animator's skill and knowledge.
Animation software provides tools to help in this matter, relying on various
algorithms to automatically enforce some of these constraints. The increasing
availability of motion capture data has raised interest in data-driven
approaches to pose design, which have the potential to shift more of the task
of assessing realism from the artist to the computer and to provide easier
access for non-experts. In this article, we propose such a method, relying on
neural networks to automatically learn the constraints from the data. We
describe an efficient tool for pose design, allowing naïve users to
intuitively manipulate a pose to create character animations.
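To make this concrete, below is a minimal sketch (in PyTorch) of the kind of data-driven pose model the abstract describes: an autoencoder trained on motion-capture poses learns a manifold of plausible poses, and a user-edited pose is made plausible by projecting it through the model. The joint count, pose parameterization, architecture, and names (`PoseAutoencoder`, `train_step`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a data-driven pose manifold, in the spirit of the abstract:
# an autoencoder learns plausibility constraints from motion-capture poses, and
# an edited pose is "cleaned up" by projecting it through the model. Sizes and
# architecture are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

N_JOINTS = 24                       # assumed skeleton size
POSE_DIM = N_JOINTS * 3             # e.g. per-joint rotations as axis-angle

class PoseAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(POSE_DIM, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(pose))

def train_step(model, optimizer, mocap_batch):
    """One reconstruction step on a batch of mocap poses (batch, POSE_DIM)."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(mocap_batch), mocap_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

model = PoseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
fake_mocap = torch.randn(64, POSE_DIM)      # stand-in for real mocap data
train_step(model, optimizer, fake_mocap)

# Interactive editing: a raw user-manipulated pose may violate the learned
# constraints; decoding its latent code snaps it back onto the pose manifold.
edited_pose = torch.randn(1, POSE_DIM)      # pose after user manipulation
with torch.no_grad():
    plausible_pose = model(edited_pose)     # projected onto the manifold
```

In an interactive tool, this projection would run after each manipulation, so the artist always sees a pose that satisfies the constraints learned from the data.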
Related papers
- Dynamic Typography: Bringing Text to Life via Video Diffusion Prior [73.72522617586593]
We present an automated text animation scheme, termed "Dynamic Typography",
which deforms letters to convey semantic meaning and infuses them with vibrant movements based on user prompts.
Our technique harnesses vector graphics representations and an end-to-end optimization-based framework.
arXiv Detail & Related papers (2024-04-17T17:59:55Z)
- Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters [24.615066741391125]
We propose a holistic solution to automatically animate virtual human faces.
A deep learning model was first trained to retarget the facial expression from input face images to virtual human faces.
A practical toolkit was developed using Unity 3D, making it compatible with the most popular VR applications.
arXiv Detail & Related papers (2024-02-21T11:35:20Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation [27.700371215886683]
Diffusion models have become mainstream in visual generation research owing to their robust generative capabilities.
In this paper, we propose a novel framework tailored for character animation.
By expanding the training data, our approach can animate arbitrary characters, yielding superior results in character animation compared to other image-to-video methods.
arXiv Detail & Related papers (2023-11-28T12:27:15Z)
- Breathing Life Into Sketches Using Text-to-Video Priors [101.8236605955899]
A sketch is one of the most intuitive and versatile tools humans use to convey their ideas visually.
In this work, we present a method that automatically adds motion to a single-subject sketch.
The output is a short animation provided in vector representation, which can be easily edited.
arXiv Detail & Related papers (2023-11-21T18:09:30Z)
- Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior [48.104051952928465]
Current learning-based motion synthesis methods depend on extensive motion datasets.
Pose data, in contrast, is more accessible, since posed characters are easier to create and can even be extracted from images.
Our method generates plausible motions for characters that have only pose data by transferring motion from an existing motion capture dataset of another character.
arXiv Detail & Related papers (2023-10-31T08:13:00Z)
- Physics-based Motion Retargeting from Sparse Inputs [73.94570049637717]
Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
arXiv Detail & Related papers (2023-07-04T21:57:05Z)
- SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches [0.9645196221785693]
2D animation is a common factor in game development, used for characters, effects and background art.
Automated animation approaches exist, but are designed without animators in mind.
We propose a problem formulation that adheres more closely to the standard workflow of animation.
arXiv Detail & Related papers (2022-09-01T02:43:19Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end-to-end, which makes it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)