Physically Plausible 3D Human-Scene Reconstruction from Monocular RGB Image using an Adversarial Learning Approach
- URL: http://arxiv.org/abs/2307.14570v1
- Date: Thu, 27 Jul 2023 01:07:15 GMT
- Title: Physically Plausible 3D Human-Scene Reconstruction from Monocular RGB Image using an Adversarial Learning Approach
- Authors: Sandika Biswas, Kejie Li, Biplab Banerjee, Subhasis Chaudhuri, Hamid Rezatofighi
- Abstract summary: A key challenge in holistic 3D human-scene reconstruction is to generate a physically plausible 3D scene from a single monocular RGB image.
This paper proposes using an implicit feature representation of the scene elements to distinguish a physically plausible alignment of humans and objects from an implausible one.
Unlike the existing inference-time optimization-based approaches, we use this adversarially trained model to produce a per-frame 3D reconstruction of the scene.
- Score: 26.827712050966
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Holistic 3D human-scene reconstruction is a crucial and emerging research
area in robot perception. A key challenge in holistic 3D human-scene
reconstruction is to generate a physically plausible 3D scene from a single
monocular RGB image. The existing research mainly proposes optimization-based
approaches for reconstructing the scene from a sequence of RGB frames with
explicitly defined physical laws and constraints between different scene
elements (humans and objects). However, it is hard to explicitly define and
model every physical law in every scenario. This paper proposes using an
implicit feature representation of the scene elements to distinguish a
physically plausible alignment of humans and objects from an implausible one.
We propose using a graph-based holistic representation with an encoded physical
representation of the scene to analyze the human-object and object-object
interactions within the scene. Using this graphical representation, we
adversarially train our model to learn the feasible alignments of the scene
elements from the training data itself without explicitly defining the laws and
constraints between them. Unlike the existing inference-time optimization-based
approaches, we use this adversarially trained model to produce a per-frame 3D
reconstruction of the scene that abides by the physical laws and constraints.
Our learning-based method achieves comparable 3D reconstruction quality to
existing optimization-based holistic human-scene reconstruction methods and
does not need inference-time optimization. This makes it better suited than
existing methods for potential use in robotic applications, such as robot
navigation.
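No code accompanies this listing, but the core mechanism the abstract describes, a discriminator trained adversarially to tell physically plausible human-object alignments from implausible ones while operating on a graph of scene elements, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration; the module names, feature dimensions, and the hinge loss are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an adversarially trained
# discriminator scoring how physically plausible a set of scene elements
# (humans and objects) is, given per-element implicit features.
# All dimensions, layer choices, and the hinge loss are assumptions.
import torch
import torch.nn as nn

class SceneGraphDiscriminator(nn.Module):
    def __init__(self, node_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Edge function: looks at pairs of elements, i.e. human-object
        # and object-object interactions, as the abstract describes.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Graph-level readout producing a single plausibility score.
        self.readout = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1),
        )

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (N, node_dim) implicit features, one row per scene element
        # (e.g. encoding the pose, translation, and shape of a human or object).
        n = nodes.shape[0]
        pairs = torch.cat(
            [nodes.unsqueeze(0).expand(n, n, -1),
             nodes.unsqueeze(1).expand(n, n, -1)], dim=-1)
        messages = self.edge_mlp(pairs).mean(dim=(0, 1))  # aggregate all edges
        return self.readout(messages)  # scalar: higher = more plausible

# Hinge adversarial losses (a common choice; the paper may differ).
def d_loss(d, real_nodes, fake_nodes):
    return (torch.relu(1.0 - d(real_nodes)) +
            torch.relu(1.0 + d(fake_nodes))).mean()

def g_loss(d, fake_nodes):
    # The reconstruction network is pushed toward alignments the
    # discriminator considers physically plausible.
    return (-d(fake_nodes)).mean()

d = SceneGraphDiscriminator()
real = torch.randn(5, 64)  # features of a plausible ground-truth scene
fake = torch.randn(5, 64)  # features from the reconstruction network
print(d_loss(d, real, fake).item())
```

In this reading, the reconstruction network plays the generator: it is trained with `g_loss` so that its per-frame outputs score as plausible, which is what removes the need for an optimization loop at inference.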
Related papers
- Kinematics-based 3D Human-Object Interaction Reconstruction from Single View [10.684643503514849]
Existing methods predict body poses by relying merely on network training on indoor datasets.
We propose a kinematics-based method that can accurately drive the joints of the human body to the human-object contact regions.
arXiv Detail & Related papers (2024-07-19T05:44:35Z)
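A toy sketch of the kinematic idea in the entry above: driving body joints toward a human-object contact region via gradient-based inverse kinematics. The planar three-bone chain, target point, and optimizer are illustrative assumptions, not the paper's actual kinematic model.

```python
# Toy illustration (not the paper's implementation): drive a kinematic
# chain's end joint toward a given human-object contact point by
# optimizing joint angles with gradient descent.
import torch

bone_lengths = torch.tensor([0.5, 0.4, 0.3])  # a 3-bone planar arm
angles = torch.zeros(3, requires_grad=True)   # joint angles to solve for
contact_target = torch.tensor([0.6, 0.7])     # desired contact point

def forward_kinematics(theta):
    # Accumulate angles and positions along the chain (2D for clarity).
    pos, total = torch.zeros(2), torch.zeros(())
    for length, t in zip(bone_lengths, theta):
        total = total + t
        pos = pos + length * torch.stack([torch.cos(total), torch.sin(total)])
    return pos  # position of the end joint

opt = torch.optim.Adam([angles], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.sum((forward_kinematics(angles) - contact_target) ** 2)
    loss.backward()
    opt.step()

print(forward_kinematics(angles).detach(), "target:", contact_target)
```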
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- Visibility Aware Human-Object Interaction Tracking from Single RGB Camera [40.817960406002506]
We propose a novel method to track the 3D human, object, contacts between them, and their relative translation across frames from a single RGB camera.
We condition our neural field reconstructions for human and object on per-frame SMPL model estimates obtained by pre-fitting SMPL to a video sequence.
Human and object motion from visible frames provides valuable information to infer the occluded object.
arXiv Detail & Related papers (2023-03-29T06:23:44Z)
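The conditioning idea in the entry above can be sketched minimally: a neural field for the human takes a per-frame pose code alongside each query point, standing in for the pre-fitted SMPL estimates. The architecture below is an assumption; 72 matches SMPL's 24 joints with 3 axis-angle parameters each.

```python
# Minimal sketch (assumptions throughout): a neural field for the human
# conditioned on a per-frame pose code, standing in for the SMPL
# estimates the paper pre-fits to the video sequence.
import torch
import torch.nn as nn

class PoseConditionedField(nn.Module):
    def __init__(self, pose_dim: int = 72, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # occupancy logit at the query point
        )

    def forward(self, xyz, pose_code):
        # xyz: (B, 3) query points; pose_code: (pose_dim,) per-frame code.
        code = pose_code.unsqueeze(0).expand(xyz.shape[0], -1)
        return self.mlp(torch.cat([xyz, code], dim=-1))

field = PoseConditionedField()
occ = field(torch.rand(1024, 3), torch.zeros(72))  # (1024, 1) logits
```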
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme in which a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
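A hypothetical sketch of the hybrid inversion scheme described above: an encoder supplies a first guess of the generator latent, and a few gradient steps refine it, matching the entry's claim of de-rendering in as few as 10 steps. The generator and encoder here are flat placeholder networks, not the paper's models.

```python
# Hypothetical sketch: encoder initialization followed by a short
# latent-refinement loop (hybrid GAN inversion). Placeholder networks.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
encoder = nn.Sequential(nn.Linear(3 * 32 * 32, 64))

target = torch.rand(1, 3 * 32 * 32)                # flattened stand-in image
z = encoder(target).detach().requires_grad_(True)  # encoder's first guess

opt = torch.optim.Adam([z], lr=0.01)
for step in range(10):  # refine in as few as 10 steps, per the entry
    opt.zero_grad()
    loss = torch.mean((generator(z) - target) ** 2)
    loss.backward()
    opt.step()
print("final reconstruction error:", loss.item())
```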
- Neural Groundplans: Persistent Neural Scene Representations from a Single Image [90.04272671464238]
We present a method to map 2D image observations of a scene to a persistent 3D scene representation.
We propose conditional neural groundplans as persistent and memory-efficient scene representations.
arXiv Detail & Related papers (2022-07-22T17:41:24Z)
- Human-Aware Object Placement for Visual Environment Reconstruction [63.14733166375534]
We show that human-scene interactions can be leveraged to improve the 3D reconstruction of a scene from a monocular RGB video.
Our key idea is that, as a person moves through a scene and interacts with it, we accumulate human-scene interactions (HSIs) across multiple input images.
We show that our scene reconstruction can be used to refine the initial 3D human pose and shape estimation.
arXiv Detail & Related papers (2022-03-07T18:59:02Z)
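The accumulation idea above can be illustrated with a toy optimization: gather hypothetical contact points across frames, then solve for an object translation consistent with all of them. Everything below (contact data, mesh, loss) is an assumption for illustration, not the paper's pipeline.

```python
# Illustrative sketch (not the paper's method): accumulate human-scene
# contact points observed across frames, then optimize an object's
# translation so its surface stays consistent with those contacts.
import torch

# Hypothetical per-frame contact points in world coordinates, e.g. where
# the estimated body touched a chair in each of 5 frames.
accumulated_contacts = torch.cat([
    torch.rand(20, 3) + torch.tensor([1.0, 0.0, 0.5]) for _ in range(5)])

object_vertices = torch.rand(500, 3)              # canonical object mesh
translation = torch.zeros(3, requires_grad=True)  # placement to solve for

opt = torch.optim.Adam([translation], lr=0.05)
for step in range(100):
    opt.zero_grad()
    verts = object_vertices + translation
    # Pull each accumulated contact toward its nearest object vertex.
    dists = torch.cdist(accumulated_contacts, verts)  # (contacts, verts)
    loss = dists.min(dim=1).values.mean()
    loss.backward()
    opt.step()
print("solved translation:", translation.detach())
```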
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
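A minimal sketch of the disentangled design described above, assuming an SDF-style implicit body conditioned on separate shape and pose latent codes; the code sizes and MLP are illustrative, not LatentHuman's actual architecture.

```python
# Minimal sketch under stated assumptions: an implicit body model whose
# signed distance at a query point is conditioned on separate shape and
# pose latent codes, so the two can be optimized independently.
import torch
import torch.nn as nn

class DisentangledBodySDF(nn.Module):
    def __init__(self, shape_dim=16, pose_dim=32, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance at the query point
        )

    def forward(self, xyz, shape_code, pose_code):
        codes = torch.cat([shape_code, pose_code]).expand(xyz.shape[0], -1)
        return self.mlp(torch.cat([xyz, codes], dim=-1))

model = DisentangledBodySDF()
# Both codes are plain tensors, so they can be fine-tuned directly on raw
# (even non-watertight) scan data with gradient descent, as the entry notes.
shape_z = torch.zeros(16, requires_grad=True)
pose_z = torch.zeros(32, requires_grad=True)
sdf = model(torch.rand(2048, 3), shape_z, pose_z)  # (2048, 1)
```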
- Weakly Supervised Learning of Multi-Object 3D Scene Decompositions Using Deep Shape Priors [69.02332607843569]
PriSMONet is a novel approach for learning multi-object 3D scene decomposition and representations from single images.
A recurrent encoder regresses a latent representation of 3D shape, pose and texture of each object from an input RGB image.
We evaluate the accuracy of our model in inferring 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out benefits of the learned representation.
arXiv Detail & Related papers (2020-10-08T14:49:23Z)
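The recurrent-encoder idea above might look roughly like this: a GRU cell rolls over a fixed number of object slots, and separate heads split each hidden state into shape, pose, and texture latents. All sizes, heads, and the backbone are assumptions, not PriSMONet's published architecture.

```python
# Hypothetical sketch of the recurrent decomposition idea: a GRU emits
# one latent per object slot; heads split it into shape/pose/texture.
import torch
import torch.nn as nn

class RecurrentObjectEncoder(nn.Module):
    def __init__(self, feat_dim=256, latent=64, max_objects=4):
        super().__init__()
        self.max_objects = max_objects
        # Stand-in image backbone (a real model would use a CNN).
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.gru = nn.GRUCell(feat_dim, latent)
        self.shape_head = nn.Linear(latent, 32)
        self.pose_head = nn.Linear(latent, 7)  # translation + quaternion
        self.texture_head = nn.Linear(latent, 16)

    def forward(self, image):
        feat = self.backbone(image)  # (B, feat_dim)
        h = torch.zeros(image.shape[0], self.gru.hidden_size)
        objects = []
        for _ in range(self.max_objects):  # one object per recurrent step
            h = self.gru(feat, h)
            objects.append((self.shape_head(h),
                            self.pose_head(h),
                            self.texture_head(h)))
        return objects

enc = RecurrentObjectEncoder()
objs = enc(torch.rand(2, 3, 64, 64))  # list of (shape, pose, texture) tuples
```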