Markerless Body Motion Capturing for 3D Character Animation based on Multi-view Cameras
- URL: http://arxiv.org/abs/2212.05788v1
- Date: Mon, 12 Dec 2022 09:26:21 GMT
- Title: Markerless Body Motion Capturing for 3D Character Animation based on Multi-view Cameras
- Authors: Jinbao Wang, Ke Lu, Jian Xue
- Abstract summary: This paper proposes a novel application system for the generation of 3D character animation driven by markerless human body motion capturing.
The main objective of this study is to generate a 3D skeleton and animation for 3D characters using multi-view images captured by ordinary cameras.
The experimental results reveal that our system can effectively and efficiently capture human actions and use them to animate 3D cartoon characters in real time.
- Score: 13.91432217561254
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a novel application system for the generation of three-dimensional (3D) character animation driven by markerless human body motion capturing. The entire pipeline of the system consists of five stages: 1) capturing motion data using multiple cameras, 2) detecting the two-dimensional (2D) human body joints, 3) estimating the 3D joints, 4) calculating the bone transformation matrices, and 5) generating the character animation. The main objective of this study is to generate a 3D skeleton and animation for 3D characters using multi-view images captured by ordinary cameras. The computational complexity of the 3D-vision-based skeleton reconstruction is kept low enough to achieve frame-by-frame motion capture. The experimental results reveal that our system can effectively and efficiently capture human actions and use them to animate 3D cartoon characters in real time.
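Stages 3 and 4 of this pipeline correspond to standard multi-view geometry operations. The sketch below is an illustrative assumption rather than the paper's actual implementation: it triangulates a 3D joint from calibrated multi-view 2D detections via direct linear transformation (DLT), then derives a bone rotation by aligning a rest-pose bone direction with the captured one using Rodrigues' formula. The function names and calibration inputs are assumptions.

```python
import numpy as np

def triangulate_joint(projections, points_2d):
    """Triangulate one 3D joint from two or more calibrated views (DLT).

    projections : list of 3x4 camera projection matrices P_i = K_i [R_i | t_i]
    points_2d   : list of (u, v) pixel detections of the same joint
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view adds two linear constraints on the homogeneous point X:
        #   u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The best X is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

def bone_rotation(rest_dir, captured_dir):
    """Rotation matrix aligning a rest-pose bone direction with a captured one
    (Rodrigues' formula on the axis between the two unit vectors)."""
    a = rest_dir / np.linalg.norm(rest_dir)
    b = captured_dir / np.linalg.norm(captured_dir)
    v = np.cross(a, b)          # rotation axis scaled by sin(angle)
    c = float(np.dot(a, b))     # cos(angle)
    if np.linalg.norm(v) < 1e-8:
        if c > 0:
            return np.eye(3)    # bones already aligned
        # 180-degree flip: rotate about any axis perpendicular to the bone
        axis = np.cross(a, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, np.array([0.0, 1.0, 0.0]))
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + K + (K @ K) * (1.0 / (1.0 + c))
```

Each joint requires only one small SVD per frame, so the per-frame cost of such a reconstruction stays low, which is in the spirit of the abstract's point about reducing computational complexity to reach frame-by-frame capture.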
Related papers
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation [68.04052669266174]
We construct a large-scale multi-modal 3D facial animation dataset, MMHead.
MMHead consists of 49 hours of 3D facial motion sequences, speech audios, and rich hierarchical text annotations.
Based on the MMHead dataset, we establish benchmarks for two new tasks: text-induced 3D talking head animation and text-to-3D facial motion generation.
arXiv Detail & Related papers (2024-10-10T09:37:01Z)
- SpatialTracker: Tracking Any 2D Pixels in 3D Space [71.58016288648447]
We propose to estimate point trajectories in 3D space to mitigate the issues caused by image projection.
Our method, named SpatialTracker, lifts 2D pixels to 3D using monocular depth estimators (a sketch of this backprojection step appears after this list).
Tracking in 3D allows us to leverage as-rigid-as-possible (ARAP) constraints while simultaneously learning a rigidity embedding that clusters pixels into different rigid parts.
arXiv Detail & Related papers (2024-04-05T17:59:25Z)
- Unsupervised Volumetric Animation [54.52012366520807]
We propose a novel approach for unsupervised 3D animation of non-rigid deformable objects.
Our method learns the 3D structure and dynamics of objects solely from single-view RGB videos.
We show our model can obtain animatable 3D objects from a single volume or few images.
arXiv Detail & Related papers (2023-01-26T18:58:54Z)
- Physically Plausible Animation of Human Upper Body from a Single Image [41.027391105867345]
We present a new method for generating controllable, dynamically responsive, and photorealistic human animations.
Given an image of a person, our system allows the user to generate Physically plausible Upper Body Animation (PUBA) using interaction in the image space.
arXiv Detail & Related papers (2022-12-09T09:36:59Z)
- Gait Recognition in the Wild with Dense 3D Representations and A Benchmark [86.68648536257588]
Existing studies of gait recognition are dominated by 2D representations, such as the silhouette or skeleton of the human body, in constrained scenes.
This paper aims to explore dense 3D representations for gait recognition in the wild.
We build the first large-scale 3D representation-based gait recognition dataset, named Gait3D.
arXiv Detail & Related papers (2022-04-06T03:54:06Z)
- MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks [77.56526918859345]
We present a novel framework that brings the 3D motion task from controlled environments to in-the-wild scenarios.
It can transfer body motion from a character in a 2D monocular video to a 3D character without using any motion capture system or 3D reconstruction procedure.
arXiv Detail & Related papers (2021-12-19T07:52:05Z)
- Action2video: Generating Videos of Human 3D Actions [31.665831044217363]
We aim to tackle the interesting yet challenging problem of generating videos of diverse and natural human motions from prescribed action categories.
The key issue lies in the ability to synthesize multiple distinct motion sequences that are realistic in their visual appearances.
Action2motion generates plausible 3D pose sequences of a prescribed action category, which are then processed and rendered by motion2video to form 2D videos.
arXiv Detail & Related papers (2021-11-12T20:20:37Z)
- ZooBuilder: 2D and 3D Pose Estimation for Quadrupeds Using Synthetic Data [2.3661942553209236]
We train 2D and 3D pose estimation models with synthetic data, and put in place an end-to-end pipeline called ZooBuilder.
The pipeline takes as input a video of an animal in the wild, and generates the corresponding 2D and 3D coordinates for each joint of the animal's skeleton.
arXiv Detail & Related papers (2020-09-01T07:41:20Z)
- Kinematic 3D Object Detection in Monocular Video [123.7119180923524]
We propose a novel method for monocular video-based 3D object detection which carefully leverages kinematic motion to improve precision of 3D localization.
We achieve state-of-the-art performance on monocular 3D object detection and the Bird's Eye View tasks within the KITTI self-driving dataset.
arXiv Detail & Related papers (2020-07-19T01:15:12Z)
- AnimePose: Multi-person 3D pose estimation and animation [9.323689681059504]
3D animation of humans in action is quite challenging, as it traditionally requires a large setup with several motion trackers placed all over the person's body to track the movements of every limb.
This is time-consuming, and wearing exoskeleton body suits with motion sensors can cause the person discomfort.
We present a solution to generate 3D animation of multiple persons from a 2D video using deep learning.
arXiv Detail & Related papers (2020-02-06T11:11:56Z)
- Synergetic Reconstruction from 2D Pose and 3D Motion for Wide-Space Multi-Person Video Motion Capture in the Wild [3.0015034534260665]
We propose a markerless motion capture method that achieves accuracy and smoothness using multiple cameras.
The proposed method predicts each person's 3D pose and determines the corresponding bounding box in the multi-camera images.
We evaluated the proposed method using various datasets and a real sports field.
arXiv Detail & Related papers (2020-01-16T02:14:59Z)
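Several of the papers above (e.g., SpatialTracker, AnimePose) rely on lifting 2D pixels or joints into 3D. Below is a minimal sketch of that depth-based backprojection step, assuming a pinhole camera with known intrinsics; the intrinsics values, function name, and depths are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def backproject(pixels, depths, K):
    """Lift (u, v) pixels with per-pixel depth to 3D camera-space points.

    pixels : (N, 2) array of pixel coordinates
    depths : (N,) array of metric depths along the optical axis
    K      : 3x3 camera intrinsics matrix
    """
    ones = np.ones((pixels.shape[0], 1))
    homogeneous = np.hstack([pixels, ones]).T   # (3, N) homogeneous pixels
    rays = np.linalg.inv(K) @ homogeneous       # normalized rays with z = 1
    return (rays * depths).T                    # scale each ray by its depth

# Example with an assumed 640x480 pinhole camera:
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
points_3d = backproject(np.array([[320.0, 240.0], [100.0, 50.0]]),
                        np.array([2.0, 4.0]), K)
print(points_3d)  # 3D points in the camera coordinate frame
```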