Autonomous Port Navigation With Ranging Sensors Using Model-Based
Reinforcement Learning
- URL: http://arxiv.org/abs/2312.05257v1
- Date: Fri, 17 Nov 2023 14:22:40 GMT
- Title: Autonomous Port Navigation With Ranging Sensors Using Model-Based
Reinforcement Learning
- Authors: Siemen Herremans, Ali Anwar, Arne Troch, Ian Ravijts, Maarten
Vangeneugden, Siegfried Mercelis, Peter Hellinckx
- Abstract summary: This research proposes a navigational algorithm that can navigate an inland vessel in a wide variety of complex port scenarios.
The proposed methodology is based on a machine learning approach that has recently set benchmark results in various domains.
Results show that our approach outperforms the commonly used dynamic window approach and a benchmark model-free reinforcement learning algorithm.
- Score: 2.3439981951927296
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autonomous shipping has recently gained much interest in the research
community. However, little research focuses on inland and port navigation,
even though countries such as Belgium and the Netherlands identify it as an
essential step towards a sustainable future. These environments pose unique
challenges, since they can contain dynamic obstacles that do not broadcast
their location, such as small vessels, kayaks or buoys. Therefore, this
research proposes a navigational algorithm that can navigate an inland vessel
in a wide variety of complex port scenarios using ranging sensors to observe
the environment. The proposed methodology is based on a machine learning
approach that has recently set benchmark results in various domains:
model-based reinforcement learning. By randomizing the port environments
during training, the trained model can navigate in scenarios it never
encountered before. Furthermore, results show that our approach outperforms
the commonly used dynamic window approach and a benchmark model-free
reinforcement learning algorithm. This work is therefore a significant step
towards vessels that can navigate autonomously in complex port scenarios.
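As a reading aid, the sketch below shows how the abstract's three ingredients can fit together: lidar-style ranging observations, domain randomization of the port layout, and model-based reinforcement learning (here reduced to fitting a one-step linear dynamics/reward model and planning over it by random shooting). This is a hedged sketch under strong assumptions, not the authors' implementation; every function name (sample_port, range_scan, vessel_step, collect, fit_model, plan), dimension, constant, and the simplified vessel dynamics are hypothetical.

```python
# Minimal model-based RL sketch for ranging-sensor navigation in randomized
# "ports". All names, dynamics and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_BEAMS, MAX_RANGE, DT = 16, 50.0, 1.0   # hypothetical ranging sensor and timestep


def sample_port(n_obstacles=10):
    """Domain randomization: every episode gets fresh circular obstacles and a goal."""
    centers = rng.uniform(-40.0, 40.0, size=(n_obstacles, 2))
    radii = rng.uniform(2.0, 6.0, size=n_obstacles)
    goal = rng.uniform(-40.0, 40.0, size=2)
    return centers, radii, goal


def range_scan(pos, heading, centers, radii):
    """Analytic ray/circle intersections for N_BEAMS beams; returns normalized ranges."""
    scan = np.full(N_BEAMS, MAX_RANGE)
    for i, a in enumerate(heading + np.linspace(-np.pi, np.pi, N_BEAMS, endpoint=False)):
        d = np.array([np.cos(a), np.sin(a)])
        for c, r in zip(centers, radii):
            t = np.dot(c - pos, d)                 # along-ray distance to closest approach
            miss = np.linalg.norm(c - (pos + t * d))
            if t > 0 and miss < r:
                scan[i] = min(scan[i], t - np.sqrt(r * r - miss * miss))
    return np.clip(scan, 0.0, MAX_RANGE) / MAX_RANGE


def vessel_step(state, action, centers, radii, goal):
    """Crude unicycle stand-in for vessel dynamics; action = (surge, yaw rate)."""
    x, y, h = state
    surge, yaw = np.clip(action, -1.0, 1.0)
    h += 0.3 * yaw * DT
    x += 2.0 * surge * np.cos(h) * DT
    y += 2.0 * surge * np.sin(h) * DT
    pos = np.array([x, y])
    obs = np.concatenate([range_scan(pos, h, centers, radii),
                          (goal - pos) / 100.0, [np.cos(h), np.sin(h)]])
    collided = bool((np.linalg.norm(centers - pos, axis=1) < radii).any())
    reward = -0.02 * np.linalg.norm(goal - pos) - (10.0 if collided else 0.0)
    return np.array([x, y, h]), obs, reward


def collect(n_episodes=30, horizon=40):
    """Gather transitions with random actions across many randomized ports."""
    X, Y = [], []
    for _ in range(n_episodes):
        centers, radii, goal = sample_port()
        state = np.array([0.0, 0.0, rng.uniform(-np.pi, np.pi)])
        _, obs, _ = vessel_step(state, np.zeros(2), centers, radii, goal)
        for _ in range(horizon):
            action = rng.uniform(-1.0, 1.0, size=2)
            state, next_obs, reward = vessel_step(state, action, centers, radii, goal)
            X.append(np.concatenate([obs, action]))
            Y.append(np.concatenate([next_obs, [reward]]))
            obs = next_obs
    return np.array(X), np.array(Y)


def fit_model(X, Y, lam=1e-2):
    """'World model' stand-in: one-step ridge regression from (obs, action) to (next obs, reward)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ Y)


def plan(W, obs, horizon=8, n_candidates=256):
    """Random-shooting MPC in the learned model: imagine rollouts, keep the best first action."""
    acts = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, 2))
    sim_obs = np.repeat(obs[None, :], n_candidates, axis=0)
    returns = np.zeros(n_candidates)
    for t in range(horizon):
        pred = np.hstack([sim_obs, acts[:, t], np.ones((n_candidates, 1))]) @ W
        sim_obs, returns = pred[:, :-1], returns + pred[:, -1]
    return acts[np.argmax(returns), 0]


# Train on randomized ports, then act in a port never seen during training.
X, Y = collect()
W = fit_model(X, Y)
centers, radii, goal = sample_port()
state = np.array([0.0, 0.0, 0.0])
_, obs, _ = vessel_step(state, np.zeros(2), centers, radii, goal)
for _ in range(40):
    state, obs, reward = vessel_step(state, plan(W, obs), centers, radii, goal)
```

The structural point the abstract makes is captured by sample_port being redrawn every episode: the model never sees the same layout twice, so whatever it learns has to generalize to unseen ports. A full model-based agent of the kind the abstract alludes to would replace the ridge regression with a learned world model and the random-shooting planner with a policy optimized on imagined rollouts; the overall collect-learn-plan loop stays the same.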
Related papers
- NavigateDiff: Visual Predictors are Zero-Shot Navigation Assistants [24.689242976554482]
Navigating unfamiliar environments presents significant challenges for household robots.
Existing reinforcement learning methods cannot be directly transferred to new environments.
We try to transfer the logical knowledge and the generalization ability of pre-trained foundation models to zero-shot navigation.
arXiv Detail & Related papers (2025-02-19T17:27:47Z)
- Evaluating Robustness of Reinforcement Learning Algorithms for Autonomous Shipping [2.9109581496560044]
This paper examines the robustness of benchmark deep reinforcement learning (RL) algorithms, implemented for inland waterway transport (IWT) within an autonomous shipping simulator.
We show that a model-free approach can achieve an adequate policy in the simulator, successfully navigating port environments never encountered during training.
arXiv Detail & Related papers (2024-11-07T17:55:07Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results; a minimal PPO setup of the kind used as a model-free baseline is sketched after this list.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning [101.56342075720588]
Vision-and-Language Navigation (VLN), as a crucial research problem of Embodied AI, requires an embodied agent to navigate through complex 3D environments following natural language instructions.
Recent research has highlighted the promising capacity of large language models (LLMs) in VLN by improving navigational reasoning accuracy and interpretability.
This paper introduces a novel strategy called Navigational Chain-of-Thought (NavCoT), where we perform parameter-efficient in-domain training to enable self-guided navigational decision-making.
arXiv Detail & Related papers (2024-03-12T07:27:02Z)
- Interactive Semantic Map Representation for Skill-based Visual Object Navigation [43.71312386938849]
This paper introduces a new representation of a scene semantic map formed during the embodied agent interaction with the indoor environment.
We have implemented this representation into a full-fledged navigation approach called SkillTron.
The proposed approach makes it possible to form both intermediate goals for robot exploration and the final goal for object navigation.
arXiv Detail & Related papers (2023-11-07T16:30:12Z)
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- Multi-agent navigation based on deep reinforcement learning and traditional pathfinding algorithm [0.0]
We develop a new framework for the multi-agent collision avoidance problem.
The framework combines a traditional pathfinding algorithm with reinforcement learning.
In our approach, the agents learn whether to follow the planned path or to take simple actions to avoid their partners.
arXiv Detail & Related papers (2020-12-05T08:56:58Z)
- Environment-agnostic Multitask Learning for Natural Language Grounded Navigation [88.69873520186017]
We introduce a multitask navigation model that can be seamlessly trained on Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks.
Experiments show that environment-agnostic multitask learning significantly reduces the performance gap between seen and unseen environments.
arXiv Detail & Related papers (2020-03-01T09:06:31Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
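For contrast with the model-based sketch above, the model-free baseline mentioned in the main abstract and the PPO focus of the Aquatic Navigation entry can be set up along the following generic lines. This uses Gymnasium and Stable-Baselines3 with a placeholder environment; the environment ID, hyperparameters, and save path are assumptions, not the configuration used in any paper listed here.

```python
# Generic model-free PPO baseline sketch (assumed setup, not from the papers).
import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder continuous-control task; a ranging-sensor vessel environment that
# follows the Gymnasium API could be swapped in here.
env = gym.make("Pendulum-v1")

model = PPO("MlpPolicy", env, verbose=1)   # MLP policy over the vector observation
model.learn(total_timesteps=100_000)       # on-policy training, no learned world model
model.save("ppo_baseline")                 # hypothetical output path
```

The difference from the model-based sketch is that PPO improves its policy purely from sampled environment interaction, without an explicit dynamics model to plan or imagine with; this is the axis along which the main paper reports its improvement.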