Situational Graphs for Robot Navigation in Structured Indoor
Environments
- URL: http://arxiv.org/abs/2202.12197v1
- Date: Thu, 24 Feb 2022 16:59:06 GMT
- Title: Situational Graphs for Robot Navigation in Structured Indoor
Environments
- Authors: Hriday Bavle, Jose Luis Sanchez-Lopez, Muhammad Shaheer, Javier
Civera, Holger Voos
- Abstract summary: We present Situational Graphs (S-Graphs), built online and in real time, composed of a single graph representing the environment.
Our method utilizes odometry readings and planar surfaces extracted from 3D LiDAR scans to construct and optimize, in real time, a three-layered S-Graph.
Our proposal not only demonstrates state-of-the-art results for robot pose estimation, but also contributes a metric-semantic-topological model of the environment.
- Score: 9.13466172688693
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous mobile robots should be aware of their situation, understood as a
comprehensive understanding of the environment along with an estimate of their
own state, to successfully make decisions and execute tasks in natural
environments. 3D scene graphs are an emerging field of research with great
potential to represent these situations in a joint model comprising geometric,
semantic and relational/topological dimensions. Although 3D scene graphs have
already been utilized for this, further research is still required to
effectively deploy them on board mobile robots.
To this end, we present in this paper Situational Graphs (S-Graphs), built
online and in real time, composed of a single graph that represents the
environment while simultaneously improving the robot's pose estimate. Our
method utilizes odometry readings and planar surfaces extracted from 3D LiDAR
scans to construct and optimize, in real time, a three-layered S-Graph that
includes a robot tracking layer where the robot poses are registered, a
metric-semantic layer with features such as planar walls, and a novel
topological layer constraining higher-level features such as corridors and
rooms. Our proposal not only demonstrates state-of-the-art results for robot
pose estimation, but also contributes a metric-semantic-topological model of
the environment.
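To make the layering concrete, below is a minimal Python sketch of the three-layer structure the abstract describes. All class and factor names (PoseNode, PlaneNode, RoomNode, SGraph) are illustrative assumptions, not the authors' API; a real system would hand these factors to a nonlinear least-squares back end (e.g. g2o or GTSAM) and optimize them jointly in real time, which the sketch does not reproduce.

```python
from dataclasses import dataclass, field

import numpy as np


# Layer 1: robot tracking -- pose nodes chained by odometry factors.
@dataclass
class PoseNode:
    id: int
    pose: np.ndarray  # (x, y, yaw), kept as a planar pose for brevity


# Layer 2: metric-semantic -- wall planes in Hessian normal form n . p = d.
@dataclass
class PlaneNode:
    id: int
    normal: np.ndarray  # unit normal, shape (3,)
    offset: float


# Layer 3: topological -- rooms/corridors constraining their bounding planes.
@dataclass
class RoomNode:
    id: int
    plane_ids: list[int]


@dataclass
class SGraph:
    poses: dict[int, PoseNode] = field(default_factory=dict)
    planes: dict[int, PlaneNode] = field(default_factory=dict)
    rooms: dict[int, RoomNode] = field(default_factory=dict)
    # Each factor is (type, node ids, measurement); a real implementation
    # would pass these to a nonlinear least-squares solver.
    factors: list[tuple] = field(default_factory=list)

    def add_odometry(self, prev_id: int, new_id: int, delta: np.ndarray) -> None:
        # Relative-motion constraint between consecutive poses.
        self.factors.append(("odom", (prev_id, new_id), delta))

    def add_plane_observation(self, pose_id: int, plane_id: int, meas) -> None:
        # A planar surface extracted from a LiDAR scan, seen from a pose.
        self.factors.append(("pose-plane", (pose_id, plane_id), meas))

    def add_room_constraint(self, room_id: int) -> None:
        # Topological factor tying a room to all of its bounding planes.
        room = self.rooms[room_id]
        self.factors.append(("room-planes", (room_id, *room.plane_ids), None))
```

The key design point, per the abstract, is that all three layers live in one jointly optimized graph, so a room-level constraint can propagate downward and tighten the wall-plane and pose estimates beneath it.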
Related papers
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- Care3D: An Active 3D Object Detection Dataset of Real Robotic-Care Environments [52.425280825457385]
This paper introduces an annotated dataset of real environments.
The captured environments represent areas which are already in use in the field of robotic health care research.
We also provide ground-truth data within one room for assessing SLAM algorithms running directly on a health care robot.
arXiv Detail & Related papers (2023-10-09T10:35:37Z)
- SG-Bot: Object Rearrangement via Coarse-to-Fine Robotic Imagination on Scene Graphs [81.15889805560333]
We present SG-Bot, a novel rearrangement framework.
SG-Bot exemplifies lightweight, real-time, and user-controllable characteristics.
Experimental results demonstrate that SG-Bot outperforms competitors by a large margin.
arXiv Detail & Related papers (2023-09-21T15:54:33Z)
- S-Graphs+: Real-time Localization and Mapping leveraging Hierarchical Representations [9.13466172688693]
S-Graphs+ is a novel four-layered factor graph that includes: (1) a pose layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level.
The above graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging high-level information about the environment.
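As a rough sketch of how these four layers nest (class names hypothetical, not the authors' code): each floor gathers rooms, each room gathers wall planes, and the pose layer attaches to the hierarchy through pose-to-wall observation factors.

```python
from dataclasses import dataclass, field


@dataclass
class Wall:
    id: int  # (2) walls layer: one planar wall surface


@dataclass
class Room:
    id: int
    walls: list[Wall] = field(default_factory=list)  # (3) rooms layer


@dataclass
class Floor:
    id: int
    rooms: list[Room] = field(default_factory=list)  # (4) floors layer


# (1) The pose layer (robot trajectory) connects to this hierarchy via
# pose-to-wall observation factors, as in the three-layer sketch above.
floor0 = Floor(id=0, rooms=[Room(id=0, walls=[Wall(id=0), Wall(id=1)])])
```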
arXiv Detail & Related papers (2022-12-22T15:06:21Z)
- Advanced Situational Graphs for Robot Navigation in Structured Indoor Environments [9.13466172688693]
We present an advanced version of Situational Graphs (S-Graphs+), consisting of a five-layered optimizable graph.
S-Graphs+ demonstrates improved performance over S-Graphs while efficiently extracting room information.
arXiv Detail & Related papers (2022-11-16T08:30:05Z)
- Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps [66.24554680709417]
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real-world applications.
We propose a non-invasive framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera.
arXiv Detail & Related papers (2022-07-06T08:52:12Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Reasoning with Scene Graphs for Robot Planning under Partial Observability [7.121002367542985]
We develop an algorithm called scene analysis for robot planning (SARP) that enables robots to reason with visual contextual information.
Experiments have been conducted using multiple 3D environments in simulation, and a dataset collected by a real robot.
arXiv Detail & Related papers (2022-02-21T18:45:56Z)
- OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics [124.08684545010664]
Scene graph generation from images is a task of great interest to applications such as robotics.
We propose an initial approximation to a framework called Ontology-Guided Scene Graph Generation (OG-SGG).
arXiv Detail & Related papers (2022-02-21T13:23:15Z)
- 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans [27.747241700017728]
We present a unified representation for actionable spatial perception: 3D Dynamic Scene Graphs.
3D Dynamic Scene Graphs can have a profound impact on planning and decision-making, human-robot interaction, long-term autonomy, and scene prediction.
arXiv Detail & Related papers (2020-02-15T00:46:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.