Realistic Curriculum Reinforcement Learning for Autonomous and Sustainable Marine Vessel Navigation
- URL: http://arxiv.org/abs/2601.10911v1
- Date: Thu, 15 Jan 2026 23:51:57 GMT
- Title: Realistic Curriculum Reinforcement Learning for Autonomous and Sustainable Marine Vessel Navigation
- Authors: Zhang Xiaocai, Xiao Zhe, Liang Maohan, Liu Tao, Li Haijiang, Zhang Wenbin
- Abstract summary: We propose a Curriculum Reinforcement Learning framework integrated with a realistic, data-driven marine simulation environment. Vessel fuel consumption is estimated using historical operational data and learning-based regression. We design a lightweight, policy-based CRL agent with a comprehensive reward mechanism that considers safety, emissions, timeliness, and goal completion.
- Score: 0.5439021491474986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sustainability is becoming increasingly critical in maritime transport, encompassing both environmental and social impacts, such as Greenhouse Gas (GHG) emissions and navigational safety. Traditional vessel navigation relies heavily on human experience, often lacking autonomy and emission awareness, and is prone to human errors that may compromise safety. In this paper, we propose a Curriculum Reinforcement Learning (CRL) framework integrated with a realistic, data-driven marine simulation environment and a machine learning-based fuel consumption prediction module. The simulation environment is constructed from real-world vessel movement data and enhanced with a Diffusion Model to simulate dynamic maritime conditions. Vessel fuel consumption is estimated using historical operational data and learning-based regression. The surrounding environment is represented as image-based inputs to capture spatial complexity. We design a lightweight, policy-based CRL agent with a comprehensive reward mechanism that considers safety, emissions, timeliness, and goal completion. This framework handles complex tasks progressively while ensuring stable and efficient learning in continuous action spaces. We validate the proposed approach in a sea area of the Indian Ocean, demonstrating its efficacy in enabling sustainable and safe vessel navigation.
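The abstract describes a reward mechanism combining safety, emissions, timeliness, and goal completion, but does not give its exact form. The following is a minimal, hypothetical sketch of such a composite per-step reward; all term definitions, weights, and thresholds below are illustrative assumptions, not the paper's actual formulation.

```python
def navigation_reward(dist_to_goal, prev_dist_to_goal,
                      min_obstacle_dist, fuel_rate, elapsed, time_budget,
                      goal_radius=0.5, safe_dist=1.0,
                      w_progress=1.0, w_safety=5.0, w_fuel=0.1, w_time=0.05):
    """Illustrative composite reward: safety, emissions, timeliness, goal completion."""
    # Goal completion: large terminal bonus once inside the goal radius.
    if dist_to_goal <= goal_radius:
        return 100.0
    # Progress toward the goal (positive when the vessel closes distance).
    r_progress = w_progress * (prev_dist_to_goal - dist_to_goal)
    # Safety: penalize separation from the nearest obstacle below a safe margin.
    r_safety = -w_safety * max(0.0, safe_dist - min_obstacle_dist)
    # Emissions: penalize instantaneous fuel consumption (a proxy for GHG output).
    r_fuel = -w_fuel * fuel_rate
    # Timeliness: small per-step penalty that grows once the time budget is exceeded.
    r_time = -w_time * (1.0 + max(0.0, elapsed - time_budget) / time_budget)
    return r_progress + r_safety + r_fuel + r_time
```

In a curriculum setting, weights like `w_safety` or thresholds like `safe_dist` could be tightened stage by stage as tasks grow harder, which is one common way CRL stabilizes training in continuous action spaces.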
Related papers
- Digital Twin Supervised Reinforcement Learning Framework for Autonomous Underwater Navigation [0.0]
This article investigates these issues through the case of the BlueROV2, an open platform widely used for scientific experimentation. We propose a deep reinforcement learning approach based on the Proximal Policy Optimization (PPO) algorithm. Results show that the PPO policy consistently outperforms DWA in highly cluttered environments.
arXiv Detail & Related papers (2025-12-11T18:52:42Z) - IndustryNav: Exploring Spatial Reasoning of Embodied Agents in Dynamic Industrial Navigation [56.43007596544299]
IndustryNav is the first dynamic industrial navigation benchmark for active spatial reasoning. A study of nine state-of-the-art Visual Large Language Models reveals that closed-source models maintain a consistent advantage.
arXiv Detail & Related papers (2025-11-21T16:48:49Z) - From Seeing to Experiencing: Scaling Navigation Foundation Models with Reinforcement Learning [59.88543114325153]
We introduce the Seeing-to-Experiencing framework to scale the capability of navigation foundation models with reinforcement learning. S2E combines the strengths of pre-training on videos and post-training through RL. We establish a comprehensive end-to-end evaluation benchmark, NavBench-GS, built on photorealistic 3DGS reconstructions of real-world scenes.
arXiv Detail & Related papers (2025-07-29T17:26:10Z) - Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation [55.02966123945644]
We propose a hierarchical control framework leveraging neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms. Our approach relies on probabilistic enumeration to identify unsafe regions of operation, which are then used to construct a safe CBF-based control layer. These experiments demonstrate the ability of the proposed solution to correct unsafe actions while preserving efficient navigation behavior.
arXiv Detail & Related papers (2025-04-30T13:47:25Z) - Depth-Constrained ASV Navigation with Deep RL and Limited Sensing [43.785833390490446]
We propose a reinforcement learning framework for ASV navigation under depth constraints. To enhance environmental awareness, we integrate GP regression into the RL framework. We demonstrate effective sim-to-real transfer, ensuring that trained policies generalize well to real-world aquatic conditions.
arXiv Detail & Related papers (2025-04-25T10:56:56Z) - Evaluating Robustness of Reinforcement Learning Algorithms for Autonomous Shipping [2.9109581496560044]
This paper examines the robustness of benchmark deep reinforcement learning (RL) algorithms, implemented for inland waterway transport (IWT) within an autonomous shipping simulator.
We show that a model-free approach can achieve an adequate policy in the simulator, successfully navigating port environments never encountered during training.
arXiv Detail & Related papers (2024-11-07T17:55:07Z) - Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z) - Aeolus Ocean -- A simulation environment for the autonomous COLREG-compliant navigation of Unmanned Surface Vehicles using Deep Reinforcement Learning and Maritime Object Detection [0.0]
Navigational autonomy in unmanned surface vehicles (USVs) in the maritime sector can lead to safer waters as well as reduced operating costs.
We describe the novel development of a COLREG-compliant, DRL-based collision-avoidance navigation system with CV-based awareness in a realistic ocean simulation environment.
arXiv Detail & Related papers (2023-07-13T11:20:18Z) - Vessel-following model for inland waterways based on deep reinforcement learning [0.0]
This study investigates the feasibility of RL-based vessel-following under complex vehicle dynamics and strong environmental disturbances.
We developed an inland waterways vessel-following model based on realistic vessel dynamics.
Our model demonstrated safe and comfortable driving in all scenarios, proving excellent generalization abilities.
arXiv Detail & Related papers (2022-07-07T12:19:03Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.