Generation of Human-aware Navigation Maps using Graph Neural Networks
- URL: http://arxiv.org/abs/2011.05180v1
- Date: Tue, 10 Nov 2020 15:32:14 GMT
- Title: Generation of Human-aware Navigation Maps using Graph Neural Networks
- Authors: Daniel Rodriguez-Criado and Pilar Bachiller and Luis J. Manso
- Abstract summary: The paper presents a machine learning-based framework that bootstraps existing one-dimensional datasets to generate a cost map dataset, and a model combining Graph Neural Network and Convolutional Neural Network layers to produce cost maps for human-aware navigation in real time.
The applications of the proposed framework are not limited to human-aware navigation; it could be applied to other fields where map generation is needed.
- Score: 1.4502611532302039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Minimising the discomfort caused by robots when navigating in social
situations is crucial for them to be accepted. The paper presents a machine
learning-based framework that bootstraps existing one-dimensional datasets to
generate a cost map dataset, and a model combining Graph Neural Network and
Convolutional Neural Network layers to produce cost maps for human-aware
navigation in real time. The proposed framework is evaluated against the
original one-dimensional dataset and in simulated navigation tasks. The results
outperform similar state-of-the-art methods in terms of accuracy on the
dataset and the navigation metrics used. The applications of the proposed
framework are not limited to human-aware navigation; it could be applied to
other fields where map generation is needed.
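The abstract describes the model only as a combination of Graph Neural Network and Convolutional Neural Network layers. The following is a minimal PyTorch sketch of that general pattern, not the authors' architecture: it assumes a scene graph whose nodes are the robot and nearby humans, and the layer sizes, the row-normalised graph convolution, and the scatter-to-grid projection are all illustrative assumptions.

```python
# Hypothetical GNN+CNN hybrid: graph layers aggregate node features over a
# scene graph, node embeddings are scattered onto a grid, and a CNN decodes
# the grid into a 2D cost map. Not the paper's code; a sketch of the pattern.
import torch
import torch.nn as nn


class GraphToCostMap(nn.Module):
    def __init__(self, in_feats: int = 8, hidden: int = 32, grid: int = 64):
        super().__init__()
        self.grid = grid
        # Two simple graph-convolution weight matrices (A_hat @ X @ W).
        self.g1 = nn.Linear(in_feats, hidden)
        self.g2 = nn.Linear(hidden, hidden)
        # CNN decoder from scattered node features to a per-cell cost map.
        self.cnn = nn.Sequential(
            nn.Conv2d(hidden, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),  # cost in [0, 1] per cell
        )

    def forward(self, x, adj, xy):
        # x: (N, in_feats) node features; adj: (N, N) adjacency with
        # self-loops; xy: (N, 2) node positions normalised to [0, 1].
        deg = adj.sum(1, keepdim=True).clamp(min=1.0)
        a_hat = adj / deg                      # row-normalised adjacency
        h = torch.relu(a_hat @ self.g1(x))     # message passing, layer 1
        h = torch.relu(a_hat @ self.g2(h))     # message passing, layer 2
        # Scatter each node's embedding into its grid cell (last write wins
        # if two nodes share a cell).
        grid = torch.zeros(h.shape[1], self.grid, self.grid)
        cols = (xy * (self.grid - 1)).long()
        grid[:, cols[:, 1], cols[:, 0]] = h.t()
        return self.cnn(grid.unsqueeze(0))     # (1, 1, grid, grid) cost map


# Toy scene: a robot and two people, fully connected with self-loops.
x = torch.randn(3, 8)
adj = torch.ones(3, 3)
xy = torch.tensor([[0.5, 0.5], [0.3, 0.7], [0.8, 0.2]])
cost_map = GraphToCostMap()(x, adj, xy)
print(cost_map.shape)  # torch.Size([1, 1, 64, 64])
```

Projecting node embeddings onto a grid before the CNN is one plausible way to bridge graph-structured input and an image-like cost map output; the actual bridging mechanism in the paper may differ.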
Related papers
- Map Imagination Like Blind Humans: Group Diffusion Model for Robotic Map Generation [0.9898607871253774]
We propose a group diffusion model (GDM) based architecture for robots to generate point cloud maps with very limited input information.
Inspired by blind humans' natural capability of imagining or generating mental maps, the proposed method can generate maps without visual perception data or depth data.
Experiments on public datasets are conducted, and the results indicate that our method can generate reasonable maps solely based on path data, and produce even more refined maps upon incorporating exiguous LiDAR data.
arXiv Detail & Related papers (2024-12-22T07:54:21Z)
- Radio Map Estimation -- An Open Dataset with Directive Transmitter Antennas and Initial Experiments [49.61405888107356]
We release a dataset of simulated path loss radio maps together with realistic city maps from real-world locations and aerial images from open data sources.
Initial experiments regarding model architectures, input feature design and estimation of radio maps from aerial images are presented.
arXiv Detail & Related papers (2024-01-12T14:56:45Z)
- Interactive Semantic Map Representation for Skill-based Visual Object Navigation [43.71312386938849]
This paper introduces a new representation of a scene semantic map formed during the embodied agent interaction with the indoor environment.
We have implemented this representation into a full-fledged navigation approach called SkillTron.
The proposed approach makes it possible to form both intermediate goals for robot exploration and the final goal for object navigation.
arXiv Detail & Related papers (2023-11-07T16:30:12Z)
- Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z)
- Learning Navigational Visual Representations with Semantic Map Supervision [85.91625020847358]
We propose a navigational-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps.
Ego²-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
arXiv Detail & Related papers (2023-07-23T14:01:05Z)
- Learning to Predict Navigational Patterns from Partial Observations [63.04492958425066]
This paper presents the first self-supervised learning (SSL) method for learning to infer navigational patterns in real-world environments from partial observations only.
We demonstrate how to infer global navigational patterns by fitting a maximum likelihood graph to the DSLP field.
Experiments show that our SSL model outperforms two SOTA supervised lane graph prediction models on the nuScenes dataset.
arXiv Detail & Related papers (2023-04-26T02:08:46Z)
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning [125.22462763376993]
We propose Potential functions for ObjectGoal Navigation with Interaction-free learning (PONI).
PONI disentangles the skills of 'where to look?' for an object and 'how to navigate to (x, y)?'
arXiv Detail & Related papers (2022-01-25T01:07:32Z)
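The split between 'where to look?' and 'how to navigate to (x, y)?' described above can be illustrated with a toy frontier-scoring function. This is a hedged sketch of the general idea only, not the PONI implementation; the potentials, the weighting, and the function name are invented for illustration.

```python
# Toy illustration: score candidate frontiers by combining an "area"
# potential (how much unexplored space lies beyond a frontier) with an
# "object" potential (how likely the target object is near it), then
# navigate to the best frontier. Numbers and weighting are made up.
from typing import List, Tuple

def select_frontier(
    frontiers: List[Tuple[float, float]],
    area_potential: List[float],    # "where is there more to see?"
    object_potential: List[float],  # "where is the target likely to be?"
    weight: float = 0.5,
) -> Tuple[float, float]:
    scores = [
        weight * a + (1.0 - weight) * o
        for a, o in zip(area_potential, object_potential)
    ]
    # Pick the frontier with the highest combined potential; reaching its
    # (x, y) is then left to a separate point-goal policy or planner.
    return frontiers[max(range(len(scores)), key=scores.__getitem__)]

print(select_frontier([(1.0, 2.0), (3.5, 0.5)], [0.8, 0.2], [0.1, 0.9]))
```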
- Lifelong Topological Visual Navigation [16.41858724205884]
We propose a learning-based visual navigation method with graph update strategies that improve lifelong navigation performance over time.
We take inspiration from sampling-based planning algorithms to build image-based topological graphs, resulting in sparser graphs yet with higher navigation performance compared to baseline methods.
Unlike controllers that learn from fixed training environments, we show that our model can be finetuned using a relatively small dataset from the real-world environment where the robot is deployed.
arXiv Detail & Related papers (2021-10-16T06:16:14Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data to support accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)