Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots
- URL: http://arxiv.org/abs/2409.15493v1
- Date: Mon, 23 Sep 2024 19:25:03 GMT
- Title: Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots
- Authors: Sai Haneesh Allu, Itay Kadosh, Tyler Summers, Yu Xiang
- Abstract summary: We introduce a new robotic system that enables a mobile robot to autonomously explore an unknown environment.
The robot can semantically map a 93m x 90m floor and update the semantic map once objects are moved in the environment.
- Score: 1.8791971592960612
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new robotic system that enables a mobile robot to autonomously explore an unknown environment, build a semantic map of the environment, and subsequently update the semantic map to reflect environment changes, such as location changes of objects. Our system leverages a LiDAR scanner for 2D occupancy grid mapping and an RGB-D camera for object perception. We introduce a semantic map representation that combines a 2D occupancy grid map for geometry, with a topological map for object semantics. This map representation enables us to effectively update the semantics by deleting or adding nodes to the topological map. Our system has been tested on a Fetch robot. The robot can semantically map a 93m x 90m floor and update the semantic map once objects are moved in the environment.
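The grid-plus-graph representation is easy to picture in code. Below is a minimal, hypothetical Python sketch (the class and method names are ours, not the authors' API): geometry lives in a NumPy occupancy grid, while object semantics live in topological nodes that can be deleted or re-added as objects move.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class ObjectNode:
    """One semantic object anchored on the 2D occupancy grid."""
    node_id: int
    label: str       # e.g. "chair", "table"
    position: tuple  # (x, y) in map coordinates


@dataclass
class SemanticMap:
    occupancy: np.ndarray                      # 0 = free, 1 = occupied, -1 = unknown
    nodes: dict = field(default_factory=dict)  # node_id -> ObjectNode
    next_id: int = 0

    def add_object(self, label, position):
        """Add a topological node for a newly perceived object."""
        node = ObjectNode(self.next_id, label, position)
        self.nodes[node.node_id] = node
        self.next_id += 1
        return node.node_id

    def remove_object(self, node_id):
        """Delete the node when the object is gone from its old location."""
        self.nodes.pop(node_id, None)

    def move_object(self, node_id, new_position):
        """An object moved: delete the stale node, add a fresh one."""
        label = self.nodes[node_id].label
        self.remove_object(node_id)
        return self.add_object(label, new_position)


# Usage: a 10 m x 10 m grid at 5 cm resolution; one chair that later moves.
smap = SemanticMap(occupancy=-np.ones((200, 200), dtype=np.int8))
chair = smap.add_object("chair", (3.2, 4.5))
chair = smap.move_object(chair, (6.0, 1.5))
```

With this split, updating semantics after an object moves never touches the occupancy grid; only the topological layer changes.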
Related papers
- Memorize What Matters: Emergent Scene Decomposition from Multitraverse [54.487589469432706]
We introduce 3D Gaussian Mapping (3DGM), a camera-only offline mapping framework grounded in 3D Gaussian Splatting.
3DGM converts multitraverse RGB videos from the same region into a Gaussian-based environmental map while concurrently performing 2D ephemeral object segmentation.
We build the Mapverse benchmark, sourced from the Ithaca365 and nuPlan datasets, to evaluate our method in unsupervised 2D segmentation, 3D reconstruction, and neural rendering.
arXiv Detail & Related papers (2024-05-27T14:11:17Z)
- Mapping High-level Semantic Regions in Indoor Environments without Object Recognition [50.624970503498226]
The present work proposes a method for semantic region mapping via embodied navigation in indoor environments.
To enable region identification, the method uses a vision-to-language model to provide scene information for mapping.
By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location.
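To make the distribution-over-region-labels idea concrete, here is a minimal sketch under assumed details: the grid size and label set are our choices, evidence is fused by simple accumulation, and the vision-to-language scoring is stubbed out as precomputed label probabilities.

```python
import numpy as np

REGIONS = ["kitchen", "bedroom", "hallway", "bathroom"]  # assumed label set

# Global map: one unnormalized score vector per grid cell.
H, W = 100, 100
region_scores = np.zeros((H, W, len(REGIONS)))

def project_observation(cell_xy, label_probs):
    """Fuse one egocentric observation, already projected into the
    global frame, into the running per-cell label scores."""
    x, y = cell_xy
    region_scores[y, x] += label_probs  # simple evidence accumulation

def region_posterior(cell_xy):
    """Normalized distribution over region labels at a map location."""
    x, y = cell_xy
    s = region_scores[y, x]
    return s / s.sum() if s.sum() > 0 else np.full(len(REGIONS), 1 / len(REGIONS))

# Usage: two observations of the same cell, then query the distribution.
project_observation((10, 20), np.array([0.7, 0.1, 0.1, 0.1]))  # looks like a kitchen
project_observation((10, 20), np.array([0.6, 0.2, 0.1, 0.1]))
print(REGIONS[int(np.argmax(region_posterior((10, 20))))])  # -> "kitchen"
```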
arXiv Detail & Related papers (2024-03-11T18:09:50Z)
- Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns a memory-efficient, dense 3D geometry, and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- Object Goal Navigation Based on Semantics and RGB Ego View [9.702784248870522]
This paper presents an architecture and methodology that enable a service robot to navigate an indoor environment with semantic decision making, given an RGB ego view.
The robot navigates using the GeoSem map, a relational combination of a geometric and a semantic map.
The presented approach was found to outperform human users in gamified evaluations with respect to average completion time.
arXiv Detail & Related papers (2022-10-20T19:23:08Z)
- Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation [87.52136927091712]
We address a practical yet challenging problem of training robot agents to navigate in an environment following a path described by some language instructions.
To achieve accurate and efficient navigation, it is critical to build a map that accurately represents both spatial location and the semantic information of the environment objects.
We propose a multi-granularity map, which contains both fine-grained object details (e.g., color, texture) and semantic classes, to represent objects more comprehensively.
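As a rough illustration of what a multi-granularity entry might look like as a data structure (all names and fields below are assumptions, not the paper's schema), coarse semantic classes and fine-grained attributes can live side by side so that an instruction like "the red fabric chair" resolves to a map location:

```python
from dataclasses import dataclass, field

@dataclass
class MapObject:
    semantic_class: str                             # coarse: "chair", "sofa", ...
    attributes: dict = field(default_factory=dict)  # fine: color, texture, ...

# Map cell -> objects observed there (cell indices are illustrative).
grid = {(12, 7): [MapObject("chair", {"color": "red", "texture": "fabric"})]}

def find(semantic_class, **attrs):
    """Resolve an instruction like 'go to the red fabric chair' to cells."""
    for cell, objects in grid.items():
        for obj in objects:
            if obj.semantic_class == semantic_class and \
               all(obj.attributes.get(k) == v for k, v in attrs.items()):
                yield cell

print(list(find("chair", color="red")))  # -> [(12, 7)]
```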
arXiv Detail & Related papers (2022-10-14T04:23:27Z)
- Efficient Placard Discovery for Semantic Mapping During Frontier Exploration [0.0]
This work introduces an Interruptable Frontier Exploration algorithm, enabling the robot to explore its environment to construct its SLAM map while pausing to inspect placards observed during this process.
This allows the robot to autonomously discover room placards without human intervention while completing exploration significantly faster than previous autonomous exploration methods.
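A minimal, self-contained sketch of such an interrupt-and-resume loop, with a toy grid map and a stubbed placard detector standing in for the real robot and SLAM stack:

```python
import numpy as np

FREE, UNKNOWN = 0, -1

def find_frontiers(grid):
    """Free cells adjacent to unknown space: candidate exploration goals."""
    frontiers = []
    h, w = grid.shape
    for y in range(h):
        for x in range(w):
            if grid[y, x] == FREE and \
               (grid[max(0, y - 1):y + 2, max(0, x - 1):x + 2] == UNKNOWN).any():
                frontiers.append((x, y))
    return frontiers

def explore(grid, sense, detect_placard, inspect):
    """Visit frontiers until none remain, pausing on placard sightings."""
    while (frontiers := find_frontiers(grid)):
        x, y = frontiers[0]        # toy policy; the paper would pick smarter
        image = sense(grid, x, y)  # drive there and reveal nearby cells
        placard = detect_placard(image)
        if placard is not None:
            inspect(placard)       # interruption: read the placard, then resume

# Toy demo: 5x5 map, known center; sensing reveals a 3x3 patch around the robot.
grid = np.full((5, 5), UNKNOWN)
grid[2, 2] = FREE

def sense(g, x, y):
    g[max(0, y - 1):y + 2, max(0, x - 1):x + 2] = FREE
    return None  # no camera frame in this toy demo

explore(grid, sense, detect_placard=lambda img: None, inspect=print)
assert (grid == FREE).all()  # the whole map has been explored
```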
arXiv Detail & Related papers (2021-10-27T20:00:07Z)
- Indoor Semantic Scene Understanding using Multi-modality Fusion [0.0]
We present a semantic scene understanding pipeline that fuses 2D and 3D detection branches to generate a semantic map of the environment.
Unlike previous works that were evaluated on collected datasets, we test our pipeline in an active, photo-realistic robotic environment.
Our contributions include rectifying 3D proposals using projected 2D detections and fusing modalities based on object size.
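The rectification idea, projecting each 3D proposal into the image and keeping it only where it overlaps a 2D detection, can be sketched as follows; the box formats, IoU threshold, and label handling are simplifying assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def project_to_image(corners_3d, K):
    """Project 3D box corners (N x 3, camera frame) to a 2D bounding box."""
    uv = (K @ corners_3d.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())

def rectify(proposals_3d, detections_2d, K, thresh=0.5):
    """Keep 3D proposals whose projection overlaps a 2D detection,
    adopting the 2D detector's label on a match."""
    kept = []
    for corners, _label_3d in proposals_3d:
        box = project_to_image(corners, K)
        for box_2d, label_2d in detections_2d:
            if iou_2d(box, box_2d) >= thresh:
                kept.append((corners, label_2d))
                break
    return kept

# Usage: one synthetic 1 m box 4 m in front of the camera, one 2D match.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5) for z in (3.5, 4.5)])
print(rectify([(corners, "chair?")], [((240, 170, 400, 310), "chair")], K))
```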
arXiv Detail & Related papers (2021-08-17T13:30:02Z)
- Lifelong update of semantic maps in dynamic environments [2.343080600040765]
A robot understands its world through the raw information it senses from its surroundings.
A semantic map, containing high-level information that both the robot and user understand, is better suited to be a shared representation.
We use the semantic map as the user-facing interface on our fleet of floor-cleaning robots.
arXiv Detail & Related papers (2020-10-17T18:44:33Z)
- Extending Maps with Semantic and Contextual Object Information for Robot Navigation: a Learning-Based Framework using Visual and Depth Cues [12.984393386954219]
This paper addresses the problem of building augmented metric representations of scenes with semantic information from RGB-D images.
We propose a complete framework to create an enhanced map representation of the environment with object-level information.
arXiv Detail & Related papers (2020-03-13T15:05:23Z)
- Visual Semantic SLAM with Landmarks for Large-Scale Outdoor Environment [47.96314050446863]
We build a system to create a semantic 3D map for large-scale environments by combining the 3D point cloud from ORB-SLAM with semantic segmentation information from PSPNet-101.
We find a way to associate real-world landmarks with the point cloud map and build a topological map on top of the semantic map.
arXiv Detail & Related papers (2020-01-04T03:34:23Z)