Lifelong update of semantic maps in dynamic environments
- URL: http://arxiv.org/abs/2010.08846v1
- Date: Sat, 17 Oct 2020 18:44:33 GMT
- Title: Lifelong update of semantic maps in dynamic environments
- Authors: Manjunath Narayana and Andreas Kolling and Lucio Nardelli and Phil Fong
- Abstract summary: A robot understands its world through the raw information it senses from its surroundings.
A semantic map, containing high-level information that both the robot and user understand, is better suited to be a shared representation.
We use the semantic map as the user-facing interface on our fleet of floor-cleaning robots.
- Score: 2.343080600040765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A robot understands its world through the raw information it senses from its
surroundings. This raw information is not suitable as a shared representation
between the robot and its user. A semantic map, containing high-level
information that both the robot and user understand, is better suited to be a
shared representation. We use the semantic map as the user-facing interface on
our fleet of floor-cleaning robots. Jitter in the robot's sensed raw map,
dynamic objects in the environment, and exploration of new space are common
challenges for robots. Solving these challenges effectively in the
context of semantic maps is key to enabling semantic maps for lifelong mapping.
First, as a robot senses new changes and alters its raw map in successive runs,
the semantics must be updated appropriately. We update the map using a spatial
transfer of semantics. Second, it is important to keep semantics and their
relative constraints consistent even in the presence of dynamic objects.
Inconsistencies are automatically determined and resolved through the
introduction of a map layer of meta-semantics. Finally, a discovery phase
allows the semantic map to be updated with new semantics whenever the robot
uncovers new information. Deployed commercially on thousands of floor-cleaning
robots in real homes, our user-facing semantic maps provide an intuitive user
experience through a lifelong mapping robot.
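The abstract names three mechanisms: a spatial transfer of semantics between successive raw maps, a meta-semantics layer that detects and resolves constraint inconsistencies, and a discovery phase for newly uncovered space. The paper does not publish code, so the following is only a minimal sketch of the first two ideas under assumed data structures (semantic regions as label-to-cell arrays, a precomputed rigid alignment between runs, and pairwise adjacency constraints); it is not the authors' implementation.

```python
import numpy as np

def transfer_semantics(regions, transform, new_grid):
    """Spatially transfer semantic regions onto a newly sensed raw map.

    regions:   dict label -> (N, 2) int array of (row, col) cells, old map frame
    transform: (R, t) rigid alignment from the old map frame to the new one
    new_grid:  2-D occupancy grid of the new raw map (0 = free, 1 = occupied)
    """
    R, t = transform
    transferred = {}
    for label, cells in regions.items():
        moved = np.rint(cells @ R.T + t).astype(int)
        rows, cols = moved[:, 0], moved[:, 1]
        # Keep only cells that land inside the new map and on free space,
        # so semantics never attach to walls or unexplored area.
        inside = (
            (rows >= 0) & (rows < new_grid.shape[0])
            & (cols >= 0) & (cols < new_grid.shape[1])
        )
        moved = moved[inside]
        moved = moved[new_grid[moved[:, 0], moved[:, 1]] == 0]
        if len(moved):
            transferred[label] = moved
    return transferred

def violated_constraints(regions, constraints):
    """Meta-semantics check: report relative constraints the map now breaks.

    constraints: list of (label_a, label_b, max_gap) adjacency constraints,
    where max_gap is the largest allowed cell distance between the regions.
    Violations are returned for resolution rather than silently accepted.
    """
    violations = []
    for a, b, max_gap in constraints:
        if a not in regions or b not in regions:
            violations.append((a, b, "region missing after transfer"))
            continue
        diff = regions[a][:, None, :] - regions[b][None, :, :]
        gap = np.sqrt((diff ** 2).sum(axis=-1)).min()
        if gap > max_gap:
            violations.append((a, b, f"gap {gap:.1f} exceeds {max_gap}"))
    return violations

# Toy run: a 20x20 grid with one wall, one labeled region, small drift.
grid = np.zeros((20, 20), dtype=int)
grid[:, 10] = 1
regions = {"kitchen": np.argwhere(np.zeros((6, 6)) == 0)}
drift = (np.eye(2), np.array([1.0, 0.0]))
new_regions = transfer_semantics(regions, drift, grid)
print(violated_constraints(new_regions, [("kitchen", "hallway", 5.0)]))
```

Restricting transferred semantics to free space and surfacing constraint violations, rather than silently overwriting labels, mirrors the abstract's emphasis on keeping the user-facing map consistent across runs.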
Related papers
- Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots [1.8791971592960612]
We introduce a new robotic system that enables a mobile robot to autonomously explore an unknown environment.
The robot can semantically map a 93m x 90m floor and update the semantic map once objects are moved in the environment.
arXiv Detail & Related papers (2024-09-23T19:25:03Z)
- Mapping High-level Semantic Regions in Indoor Environments without Object Recognition [50.624970503498226]
The present work proposes a method for semantic region mapping via embodied navigation in indoor environments.
To enable region identification, the method uses a vision-to-language model to provide scene information for mapping.
By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location.
arXiv Detail & Related papers (2024-03-11T18:09:50Z)
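The entry above reduces to a compact bookkeeping problem: hold a categorical distribution over region labels at every map cell and sharpen it as egocentric views, scored by a vision-to-language model, are projected into the global frame. A minimal sketch under that reading, with an assumed label set and a simple multiplicative update (the paper's actual model may differ):

```python
import numpy as np

LABELS = ["kitchen", "bedroom", "bathroom", "living_room"]  # assumed label set

def make_map(h, w, n_labels):
    # Start from a uniform label distribution at every cell.
    return np.full((h, w, n_labels), 1.0 / n_labels)

def update_cells(label_map, cells, scores):
    """Multiply per-cell priors by view-level label scores, then renormalize.

    cells:  (N, 2) global-frame cells covered by the current egocentric view
    scores: (n_labels,) soft label scores for that view (e.g. from a
            vision-to-language model, as in the summary above)
    """
    r, c = cells[:, 0], cells[:, 1]
    label_map[r, c] *= scores
    label_map[r, c] /= label_map[r, c].sum(axis=-1, keepdims=True)
    return label_map

region_map = make_map(50, 50, len(LABELS))
view = np.argwhere(np.zeros((5, 5)) == 0) + 10   # cells the camera sees
region_map = update_cells(region_map, view, np.array([0.7, 0.1, 0.1, 0.1]))
print(LABELS[region_map[12, 12].argmax()])       # -> "kitchen"
```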
- Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Object Goal Navigation Based on Semantics and RGB Ego View [9.702784248870522]
This paper presents an architecture and methodology to empower a service robot to navigate an indoor environment with semantic decision making, given an RGB ego view.
The robot navigates based on a GeoSem map, a relational combination of geometric and semantic maps.
The presented approach was found to outperform human users in gamified evaluations with respect to average completion time.
arXiv Detail & Related papers (2022-10-20T19:23:08Z)
- Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation [87.52136927091712]
We address the practical yet challenging problem of training robot agents to navigate in an environment following a path described by some language instructions.
To achieve accurate and efficient navigation, it is critical to build a map that accurately represents both spatial location and the semantic information of the environment objects.
We propose a multi-granularity map, which contains both object fine-grained details (e.g., color, texture) and semantic classes, to represent objects more comprehensively.
arXiv Detail & Related papers (2022-10-14T04:23:27Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach for four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Language Understanding for Field and Service Robots in a Priori Unknown Environments [29.16936249846063]
This paper provides a novel learning framework that allows field and service robots to interpret and execute natural language instructions.
We use language as a "sensor" -- inferring spatial, topological, and semantic information implicit in natural language utterances.
We incorporate this distribution in a probabilistic language grounding model and infer a distribution over a symbolic representation of the robot's action space.
arXiv Detail & Related papers (2021-05-21T15:13:05Z)
- Semantics for Robotic Mapping, Perception and Interaction: A Survey [93.93587844202534]
The study of semantics in robotics concerns what the world "means" to a robot.
With humans and robots increasingly operating in the same world, the prospects of human-robot interaction also bring semantics into the picture.
Driven by need, as well as by enablers like increasing availability of training data and computational resources, semantics is a rapidly growing research area in robotics.
arXiv Detail & Related papers (2021-01-02T12:34:39Z)
- Distributed Map Classification using Local Observations [17.225740154244942]
It is assumed that all robots have localized visual sensing capabilities and can exchange their information with neighboring robots.
We propose an offline learning structure that makes every robot capable of communicating with and fusing information from its neighbors.
arXiv Detail & Related papers (2020-12-18T19:35:10Z)
- Learning Topometric Semantic Maps from Occupancy Grids [2.5234065536725963]
We propose a new approach for deriving such instance-based semantic maps purely from occupancy grids.
We employ a combination of deep learning techniques to detect, segment and extract door hypotheses from a random-sized map.
We evaluate our approach on several publicly available real-world data sets.
arXiv Detail & Related papers (2020-01-10T22:06:10Z)
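For the last entry, the described pipeline (detect, segment, and extract door hypotheses from a random-sized occupancy grid) can be pictured as a fully convolutional segmenter followed by a hypothesis-extraction step. The toy network below is untrained and only illustrates the input/output shapes such a pipeline would use; it is not the authors' architecture or post-processing.

```python
import numpy as np
import torch
import torch.nn as nn

class DoorSegmenter(nn.Module):
    """Toy stand-in for the paper's deep segmentation stage."""
    def __init__(self):
        super().__init__()
        # Fully convolutional, so any occupancy-grid size is accepted.
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, grid):
        return self.net(grid)

def door_hypotheses(mask, threshold=0.5):
    """Turn a per-cell door probability mask into hypothesis coordinates."""
    return np.argwhere(mask > threshold)

grid = torch.zeros(1, 1, 64, 48)   # occupancy grid of arbitrary size
grid[0, 0, 30:34, 20] = 1.0        # a short wall fragment
with torch.no_grad():
    mask = DoorSegmenter()(grid)[0, 0].numpy()
print(door_hypotheses(mask).shape)  # (num_door_cells, 2)
```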