Simultaneously Recovering Multi-Person Meshes and Multi-View Cameras with Human Semantics
- URL: http://arxiv.org/abs/2412.18785v1
- Date: Wed, 25 Dec 2024 05:35:30 GMT
- Title: Simultaneously Recovering Multi-Person Meshes and Multi-View Cameras with Human Semantics
- Authors: Buzhen Huang, Jingyi Ju, Yuan Shu, Yangang Wang
- Abstract summary: We focus on multi-person motion capture with uncalibrated cameras.
The key idea is to incorporate motion prior knowledge to simultaneously estimate camera parameters and human meshes.
Results show that accurate camera parameters and human motions can be obtained through a one-step reconstruction.
- Score: 14.385538705947782
- Abstract: Dynamic multi-person mesh recovery has broad applications in sports broadcasting, virtual reality, and video games. However, current multi-view frameworks rely on a time-consuming camera calibration procedure. In this work, we focus on multi-person motion capture with uncalibrated cameras, which mainly faces two challenges: one is that inter-person interactions and occlusions introduce inherent ambiguities for both camera calibration and motion capture; the other is the lack of dense correspondences that can be used to constrain sparse camera geometries in a dynamic multi-person scene. Our key idea is to incorporate motion prior knowledge to simultaneously estimate camera parameters and human meshes from noisy human semantics. We first utilize human information from 2D images to initialize intrinsic and extrinsic parameters. Thus, the approach does not rely on any other calibration tools or background features. Then, a pose-geometry consistency is introduced to associate the detected humans from different views. Finally, a latent motion prior is proposed to refine the camera parameters and human motions. Experimental results show that accurate camera parameters and human motions can be obtained through a one-step reconstruction. The code is publicly available at~\url{https://github.com/boycehbz/DMMR}.
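The joint estimation described in the abstract can be illustrated with a toy bundle-adjustment-style sketch: 2D "detections" of a few human joints in two views are used to optimize the second camera's translation and the 3D joint positions at the same time. This is only a minimal illustration of the idea, not the authors' implementation; the fixed intrinsics, identity rotations, fixed first joint (to pin down scale), and use of `scipy.optimize.least_squares` are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Shared toy intrinsics for both cameras.
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])

def project(X, t):
    """Pinhole projection of 3D points X (N,3) for a camera translated by t.
    Rotations are fixed to identity to keep the toy problem small."""
    Xc = X + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

rng = np.random.default_rng(0)
X_true = rng.uniform([-1., -1., 4.], [1., 1., 6.], size=(5, 3))  # 5 "joints"
t0 = np.zeros(3)                      # first camera anchors the world frame
t1_true = np.array([1.0, 0.2, 0.0])   # unknown second-camera translation
obs0 = project(X_true, t0)            # simulated 2D detections in each view
obs1 = project(X_true, t1_true)

def residuals(params):
    # Pack: second-camera translation + all joints except the first, which is
    # held fixed to remove the global scale ambiguity of this toy setup.
    t1 = params[:3]
    X = np.vstack([X_true[:1], params[3:].reshape(-1, 3)])
    return np.concatenate([(project(X, t0) - obs0).ravel(),
                           (project(X, t1) - obs1).ravel()])

# Perturbed initialization, standing in for a human-semantics-based init.
x0 = np.concatenate([t1_true + 0.3, (X_true[1:] + 0.3).ravel()])
sol = least_squares(residuals, x0)
t1_est = sol.x[:3]
print(np.round(t1_est, 3))  # close to t1_true
```

In the actual method the unknowns would also include rotations and SMPL parameters, and the reprojection term would be regularized by the latent motion prior; this sketch only shows why a shared set of 3D points lets one camera's pose and the points be recovered in a single optimization.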
Related papers
- EgoGaussian: Dynamic Scene Understanding from Egocentric Video with 3D Gaussian Splatting [95.44545809256473]
EgoGaussian is a method capable of simultaneously reconstructing 3D scenes and dynamically tracking 3D object motion from RGB egocentric input alone.
We show significant improvements in terms of both dynamic object and background reconstruction quality compared to the state-of-the-art.
arXiv Detail & Related papers (2024-06-28T10:39:36Z) - WHAC: World-grounded Humans and Cameras [37.877565981937586]
We aim to recover expressive parametric human models (i.e., SMPL-X) and corresponding camera poses jointly.
We introduce a novel framework, referred to as WHAC, to facilitate world-grounded expressive human pose and shape estimation.
We present a new synthetic dataset, WHAC-A-Mole, which includes accurately annotated humans and cameras.
arXiv Detail & Related papers (2024-03-19T17:58:02Z) - MUC: Mixture of Uncalibrated Cameras for Robust 3D Human Body Reconstruction [12.942635715952525]
Multiple cameras can provide comprehensive multi-view video coverage of a person.
Previous studies have overlooked the challenges posed by self-occlusion under multiple views.
We introduce a method to reconstruct the 3D human body from multiple uncalibrated camera views.
arXiv Detail & Related papers (2024-03-08T05:03:25Z) - Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z) - Decoupling Human and Camera Motion from Videos in the Wild [67.39432972193929]
We propose a method to reconstruct global human trajectories from videos in the wild.
Our method decouples the camera and human motion, which allows us to place people in the same world coordinate frame.
arXiv Detail & Related papers (2023-02-24T18:59:15Z) - Scene-Aware 3D Multi-Human Motion Capture from a Single Camera [83.06768487435818]
We consider the problem of estimating the 3D position of multiple humans in a scene as well as their body shape and articulation from a single RGB video recorded with a static camera.
We leverage recent advances in computer vision using large-scale pre-trained models for a variety of modalities, including 2D body joints, joint angles, normalized disparity maps, and human segmentation masks.
In particular, we estimate the scene depth and unique person scale from normalized disparity predictions using the 2D body joints and joint angles.
arXiv Detail & Related papers (2023-01-12T18:01:28Z) - SmartMocap: Joint Estimation of Human and Camera Motion using Uncalibrated RGB Cameras [49.110201064166915]
Markerless human motion capture (mocap) from multiple RGB cameras is a widely studied problem.
Existing methods either need calibrated cameras or calibrate them relative to a static camera, which acts as the reference frame for the mocap system.
We propose a mocap method which uses multiple static and moving extrinsically uncalibrated RGB cameras.
arXiv Detail & Related papers (2022-09-28T08:21:04Z) - Embodied Scene-aware Human Pose Estimation [25.094152307452]
We propose embodied scene-aware human pose estimation.
Our method is one stage, causal, and recovers global 3D human poses in a simulated environment.
arXiv Detail & Related papers (2022-06-18T03:50:19Z) - Dynamic Multi-Person Mesh Recovery From Uncalibrated Multi-View Cameras [11.225376081130849]
We introduce a physics-geometry consistency to reduce the low and high frequency noises of the detected human semantics.
Then a novel latent motion prior is proposed to simultaneously optimize extrinsic camera parameters and coherent human motions from slightly noisy inputs.
Experimental results show that accurate camera parameters and human motions can be obtained through one-stage optimization.
arXiv Detail & Related papers (2021-10-20T03:19:20Z) - FLEX: Parameter-free Multi-view 3D Human Motion Reconstruction [70.09086274139504]
Multi-view algorithms strongly depend on camera parameters, in particular, the relative positions among the cameras.
We introduce FLEX, an end-to-end parameter-free multi-view model.
We demonstrate results on the Human3.6M and KTH Multi-view Football II datasets.
arXiv Detail & Related papers (2021-05-05T09:08:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.