Towards Multimodal Multitask Scene Understanding Models for Indoor
Mobile Agents
- URL: http://arxiv.org/abs/2209.13156v1
- Date: Tue, 27 Sep 2022 04:49:19 GMT
- Title: Towards Multimodal Multitask Scene Understanding Models for Indoor
Mobile Agents
- Authors: Yao-Hung Hubert Tsai, Hanlin Goh, Ali Farhadi, Jian Zhang
- Abstract summary: In this paper, we discuss the main challenge: insufficient, or even no, labeled data for real-world indoor environments.
We describe MMISM (Multi-modality input Multi-task output Indoor Scene understanding Model) to tackle the above challenges.
MMISM considers RGB images as well as sparse Lidar points as inputs and 3D object detection, depth completion, human pose estimation, and semantic segmentation as output tasks.
We show that MMISM performs on par with or even better than single-task models.
- Score: 49.904531485843464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The perception system in personalized mobile agents requires developing
indoor scene understanding models, which can understand 3D geometries, capture
objectness, analyze human behaviors, etc. Nonetheless, this direction has
not been well-explored in comparison with models for outdoor environments
(e.g., the autonomous driving system that includes pedestrian prediction, car
detection, traffic sign recognition, etc.). In this paper, we first discuss the
main challenge: insufficient, or even no, labeled data for real-world indoor
environments, and other challenges such as fusion between heterogeneous sources
of information (e.g., RGB images and Lidar point clouds), modeling
relationships between a diverse set of outputs (e.g., 3D object locations,
depth estimation, and human poses), and computational efficiency. Then, we
describe MMISM (Multi-modality input Multi-task output Indoor Scene
understanding Model) to tackle the above challenges. MMISM considers RGB images
as well as sparse Lidar points as inputs and 3D object detection, depth
completion, human pose estimation, and semantic segmentation as output tasks.
We show that MMISM performs on par with or even better than single-task models;
e.g., we improve the baseline 3D object detection results by 11.7% on the
benchmark ARKitScenes dataset.
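As a rough illustration of the input/output structure described above, the sketch below wires an RGB branch and a sparse-depth (Lidar) branch into a shared feature map with one lightweight head per task. The module sizes, concatenation-based fusion, and head designs are illustrative assumptions, not the actual MMISM architecture.

```python
# Minimal sketch of a multi-modality-input, multi-task-output network.
# Module sizes, concatenation-based fusion, and head shapes are illustrative
# assumptions, not the MMISM architecture itself.
import torch
import torch.nn as nn

class MultiModalMultiTaskNet(nn.Module):
    def __init__(self, num_classes=20, num_joints=17):
        super().__init__()
        # RGB branch: image -> feature map
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Lidar branch: sparse depth rasterized onto the image plane -> feature map
        self.lidar_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        fused = 64 + 64
        # One lightweight head per output task
        self.det_head = nn.Conv2d(fused, 7, 1)            # e.g. per-cell 3D box parameters
        self.depth_head = nn.Conv2d(fused, 1, 1)          # dense depth completion
        self.pose_head = nn.Conv2d(fused, num_joints, 1)  # human joint heatmaps
        self.seg_head = nn.Conv2d(fused, num_classes, 1)  # semantic segmentation logits

    def forward(self, rgb, sparse_depth):
        feats = torch.cat([self.rgb_encoder(rgb), self.lidar_encoder(sparse_depth)], dim=1)
        return {
            "detection_3d": self.det_head(feats),
            "depth": self.depth_head(feats),
            "pose": self.pose_head(feats),
            "segmentation": self.seg_head(feats),
        }

# One RGB frame plus a sparse depth map rasterized from Lidar points.
model = MultiModalMultiTaskNet()
outputs = model(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
print({k: v.shape for k, v in outputs.items()})
```

The single shared representation feeding all four heads is what makes the model multi-task; how MMISM actually couples and supervises the tasks is described in the paper itself.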
Related papers
- LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image [72.14973729674995]
Current 3D perception methods, particularly small models, struggle with logical reasoning, question answering, and open scenario categories.
We propose solutions: Spatial-Enhanced Local Feature Mining for better spatial feature extraction, 3D Query Token-Derived Info Decoding for precise geometric regression, and Geometry Projection-Based 3D Reasoning for handling camera focal length variations.
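The summary above only names the components, so as a hedged illustration of why focal length must be handled explicitly in 2D-to-3D reasoning, the snippet below back-projects a pixel with a known depth through a generic pinhole camera model. It is not LLMI3D's actual decoding or reasoning module.

```python
# Generic pinhole back-projection; it only illustrates why focal length must
# enter 2D-to-3D reasoning and is not LLMI3D's actual formulation.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with known depth to camera-frame XYZ coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# The same pixel and depth map to different 3D points under different focal
# lengths, which is why focal length variation has to be modeled explicitly.
print(backproject(400, 300, 2.5, fx=600, fy=600, cx=320, cy=240))
print(backproject(400, 300, 2.5, fx=1200, fy=1200, cx=320, cy=240))
```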
arXiv Detail & Related papers (2024-08-14T10:00:16Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Towards Precise 3D Human Pose Estimation with Multi-Perspective Spatial-Temporal Relational Transformers [28.38686299271394]
We propose a framework for 3D sequence-to-sequence (seq2seq) human pose estimation.
The spatial module represents the human pose feature from intra-image content, while the frame-image relation module extracts temporal relationships.
Our method is evaluated on Human3.6M, a popular 3D human pose estimation dataset.
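As a loose sketch of the spatial-then-temporal structure mentioned above, the code below embeds each frame's 2D joints into one token and relates the tokens across frames with a small transformer encoder; the per-frame input format, dimensions, and module choices are assumptions rather than the paper's architecture.

```python
# Loose sketch of a spatial-then-temporal seq2seq pose model, assuming the
# per-frame input is a set of 2D joint coordinates; dimensions and modules
# are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class PoseSeq2Seq(nn.Module):
    def __init__(self, num_joints=17, d_model=128):
        super().__init__()
        # "Spatial" step: embed each frame's joints into a single token
        self.spatial = nn.Linear(num_joints * 2, d_model)
        # "Temporal" step: relate tokens across frames with self-attention
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        # Regress a 3D pose for every frame of the sequence
        self.head = nn.Linear(d_model, num_joints * 3)

    def forward(self, joints_2d):                      # (batch, frames, joints, 2)
        b, t, j, _ = joints_2d.shape
        tokens = self.spatial(joints_2d.reshape(b, t, j * 2))
        tokens = self.temporal(tokens)
        return self.head(tokens).reshape(b, t, j, 3)   # (batch, frames, joints, 3)

print(PoseSeq2Seq()(torch.rand(2, 27, 17, 2)).shape)   # torch.Size([2, 27, 17, 3])
```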
arXiv Detail & Related papers (2024-01-30T03:00:25Z)
- MMRDN: Consistent Representation for Multi-View Manipulation
Relationship Detection in Object-Stacked Scenes [62.20046129613934]
We propose a novel multi-view fusion framework, namely the multi-view MRD network (MMRDN).
We project the 2D data from different views into a common hidden space and fit the embeddings with a set of von Mises-Fisher distributions.
We select a set of $K$ Maximum Vertical Neighbors (KMVN) points from the point cloud of each object pair, which encodes the relative position of these two objects.
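To make the von Mises-Fisher step concrete, here is a minimal sketch that places embeddings on the unit sphere and scores them under a vMF density with an empirical mean direction and an assumed concentration. The summary does not specify MMRDN's fitting procedure or loss, so everything below is illustrative.

```python
# Sketch: score unit-normalized multi-view embeddings under a von Mises-Fisher
# density. The mean direction and concentration below are illustrative; MMRDN's
# actual fitting objective is not specified in the summary.
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function of the first kind

def vmf_log_pdf(x, mu, kappa):
    """log f(x; mu, kappa) for unit vectors x of shape (n, p) and mean direction mu of shape (p,)."""
    p = mu.shape[0]
    # log C_p(kappa); the exp(-kappa) scaling built into `ive` is undone by the +kappa term
    log_c = (p / 2 - 1) * np.log(kappa) - (p / 2) * np.log(2 * np.pi) \
            - (np.log(ive(p / 2 - 1, kappa)) + kappa)
    return log_c + kappa * (x @ mu)

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 64))                       # embeddings from different views
emb /= np.linalg.norm(emb, axis=1, keepdims=True)    # project onto the unit sphere
mu = emb.mean(axis=0)
mu /= np.linalg.norm(mu)                             # empirical mean direction
print(vmf_log_pdf(emb, mu, kappa=20.0))              # higher = better aligned with mu
```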
arXiv Detail & Related papers (2023-04-25T05:55:29Z)
- CROMOSim: A Deep Learning-based Cross-modality Inertial Measurement
Simulator [7.50015216403068]
Inertial measurement unit (IMU) data has been utilized in the monitoring and assessment of human mobility.
To mitigate the data scarcity problem, we design CROMOSim, a cross-modality sensor simulator.
It simulates high-fidelity virtual IMU sensor data from motion capture systems or monocular RGB cameras.
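For intuition about turning motion capture trajectories into virtual IMU readings, the toy sketch below double-differentiates a position trajectory and subtracts gravity to obtain accelerometer-like samples. CROMOSim itself learns this mapping with a neural model, so this is only the naive kinematic baseline, with sensor orientation handling omitted.

```python
# Toy kinematic baseline: derive virtual accelerometer samples from a motion
# capture position trajectory by double differentiation. CROMOSim learns this
# mapping with a neural model; sensor orientation handling is omitted here.
import numpy as np

def simulate_accelerometer(positions, dt, gravity=(0.0, 0.0, -9.81)):
    """positions: (T, 3) world-frame trajectory sampled every dt seconds.
    Returns (T-2, 3) specific-force readings for a sensor whose axes stay
    aligned with the world frame."""
    accel = np.diff(positions, n=2, axis=0) / dt**2  # linear acceleration
    return accel - np.asarray(gravity)               # accelerometer senses specific force

t = np.arange(0.0, 2.0, 0.01)
trajectory = np.stack([np.sin(t), np.zeros_like(t), 1.0 + 0.1 * t], axis=1)
print(simulate_accelerometer(trajectory, dt=0.01)[:3])
```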
arXiv Detail & Related papers (2022-02-21T22:30:43Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic
Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
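The summary does not define the layout-weighted metric, so the snippet below is a purely hypothetical illustration of the general idea of weighting per-scene scores by layout difficulty; the weighting scheme and function names are invented for illustration.

```python
# Purely hypothetical illustration of layout weighting: average per-scene
# scores with weights reflecting layout difficulty (not the paper's metric).
def layout_weighted_score(scene_scores, layout_weights):
    """scene_scores, layout_weights: dicts keyed by scene id."""
    total = sum(layout_weights[s] for s in scene_scores)
    return sum(scene_scores[s] * layout_weights[s] for s in scene_scores) / total

scores = {"scene_001": 0.82, "scene_002": 0.64}
weights = {"scene_001": 1.0, "scene_002": 2.0}  # e.g. heavier weight for cluttered layouts
print(layout_weighted_score(scores, weights))
```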
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
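As a small sketch of how a per-joint visibility indicator can enter the objective, the snippet below masks an L2 pose error with a visibility tensor; TRiPOD's exact indicator head and loss may differ.

```python
# Sketch of masking a pose forecasting loss with a per-joint visibility
# indicator; TRiPOD's exact indicator head and loss may differ.
import torch

def visibility_masked_l2(pred, target, visible):
    """pred, target: (frames, joints, 3); visible: (frames, joints) in {0, 1}.
    Only visible joints contribute to the error."""
    err = ((pred - target) ** 2).sum(dim=-1)                 # per-joint squared error
    return (err * visible).sum() / visible.sum().clamp(min=1)

pred, target = torch.rand(10, 14, 3), torch.rand(10, 14, 3)
visible = (torch.rand(10, 14) > 0.2).float()                 # 1 = joint visible in that frame
print(visibility_masked_l2(pred, target, visible))
```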
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Exploring the Capabilities and Limits of 3D Monocular Object Detection
-- A Study on Simulation and Real World Data [0.0]
3D object detection based on monocular camera data is a key enabler for autonomous driving.
Recent deep learning methods show promising results in recovering depth information from single images.
In this paper, we evaluate the performance of a 3D object detection pipeline which is parameterizable with different depth estimation configurations.
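A minimal, hypothetical harness for this kind of study is sketched below: the same evaluation routine is run once per named depth-estimation configuration. The configuration keys and the evaluate callable are placeholders, not the paper's code.

```python
# Hypothetical harness: run one detection pipeline under several named
# depth-estimation configurations and collect the resulting metrics.
# The configuration keys and the `evaluate` callable are placeholders.
from typing import Callable, Dict

def sweep_depth_configs(evaluate: Callable[[Dict], float],
                        configs: Dict[str, Dict]) -> Dict[str, float]:
    """Run the same evaluation once per depth-estimation configuration."""
    return {name: evaluate(cfg) for name, cfg in configs.items()}

results = sweep_depth_configs(
    evaluate=lambda cfg: 0.0,  # stand-in; a real study runs the full detector here
    configs={
        "lidar_depth": {"source": "lidar"},
        "monocular_depth_net": {"source": "learned"},
    },
)
print(results)
```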
arXiv Detail & Related papers (2020-05-15T09:05:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.