Variational Autoencoders for exteroceptive perception in reinforcement learning-based collision avoidance
- URL: http://arxiv.org/abs/2404.00623v1
- Date: Sun, 31 Mar 2024 09:25:28 GMT
- Title: Variational Autoencoders for exteroceptive perception in reinforcement learning-based collision avoidance
- Authors: Thomas Nakken Larsen, Eirik Runde Barlaug, Adil Rasheed
- Abstract summary: Deep Reinforcement Learning (DRL) has emerged as a promising control framework.
Current DRL algorithms require disproportionately large computational resources to find near-optimal policies.
This paper presents a comprehensive exploration of the proposed approach in maritime control systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern control systems are increasingly turning to machine learning algorithms to augment their performance and adaptability. Within this context, Deep Reinforcement Learning (DRL) has emerged as a promising control framework, particularly in the domain of marine transportation. Its potential for autonomous marine applications lies in its ability to seamlessly combine path-following and collision avoidance with an arbitrary number of obstacles. However, current DRL algorithms require disproportionately large computational resources to find near-optimal policies compared to the posed control problem when the searchable parameter space becomes large. To combat this, our work delves into the application of Variational AutoEncoders (VAEs) to acquire a generalized, low-dimensional latent encoding of a high-fidelity range-finding sensor, which serves as the exteroceptive input to a DRL agent. The agent's performance, encompassing path-following and collision avoidance, is systematically tested and evaluated within a stochastic simulation environment, presenting a comprehensive exploration of our proposed approach in maritime control systems.
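The pipeline described in the abstract compresses a high-dimensional range-finder scan into a compact latent vector with a VAE and uses that latent vector as the exteroceptive part of the DRL agent's observation. The following PyTorch code is a minimal sketch of that idea; the layer sizes, latent dimension, and the names RangeVAE and encode_observation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch): a VAE that compresses a 1-D range-finder scan into
# a low-dimensional latent vector used as exteroceptive input to a DRL agent.
# Sizes and names are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RangeVAE(nn.Module):
    def __init__(self, n_beams: int = 180, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_beams, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64, latent_dim)      # mean of q(z | x)
        self.fc_logvar = nn.Linear(64, latent_dim)  # log-variance of q(z | x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, n_beams),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar


def vae_loss(recon, x, mu, logvar, beta: float = 1.0):
    """Reconstruction term plus beta-weighted KL divergence to N(0, I)."""
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl


def encode_observation(vae: RangeVAE, scan: torch.Tensor) -> torch.Tensor:
    # At control time, only the encoder mean is typically used: the latent
    # vector replaces the raw scan in the agent's observation.
    with torch.no_grad():
        mu, _ = vae.encode(scan)
    return mu
```

In such a setup the latent vector would be concatenated with the agent's proprioceptive and navigation features to form the full observation; whether the encoder is frozen or fine-tuned during RL training is a design choice the abstract does not specify.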
Related papers
- Collision Avoidance Verification of Multiagent Systems with Learned Policies [9.550601011551024]
This paper presents a backward reachability-based approach for verifying the collision avoidance properties of Multi-Agent Neural Feedback Loops (MA-NFLs).
We account for many sources of uncertainty, making the approach well aligned with real-world scenarios.
We demonstrate that the proposed algorithm can verify collision-free properties of an MA-NFL with agents trained to imitate a collision avoidance algorithm.
arXiv Detail & Related papers (2024-03-05T20:36:26Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to adversarial perturbations.
Our analysis empirically demonstrates how such perturbations can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z) - RLaGA: A Reinforcement Learning Augmented Genetic Algorithm For Searching Real and Diverse Marker-Based Landing Violations [0.7709288517758135]
It is important to fully test auto-landing systems before deploying them in the real world to ensure safety.
This paper proposes RLaGA, a reinforcement learning (RL) augmented search-based testing framework.
Our method generates up to 22.19% more violation cases and nearly doubles the diversity of generated violation cases.
arXiv Detail & Related papers (2023-10-11T10:54:01Z) - DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety-critical nature of driving systems, no solution has been proposed for adapting MOT to domain shift under test-time conditions.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider both a value-based and a policy-gradient Deep Reinforcement Learning (DRL) algorithm.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z) - Anomaly Detection Based on Selection and Weighting in Latent Space [73.01328671569759]
We propose a novel selection-and-weighting-based anomaly detection framework called SWAD.
Experiments on both benchmark and real-world datasets have shown the effectiveness and superiority of SWAD.
arXiv Detail & Related papers (2021-03-08T10:56:38Z) - Collision-Free Flocking with a Dynamic Squad of Fixed-Wing UAVs Using Deep Reinforcement Learning [2.555094847583209]
We deal with the decentralized leader-follower flocking control problem through deep reinforcement learning (DRL).
We propose a novel reinforcement learning algorithm, CACER-II, for training a shared control policy for all the followers.
As a result, the variable-length system state can be encoded into a fixed-length embedding vector, which makes the learned DRL policies independent of the number and order of followers (see the sketch after this list).
arXiv Detail & Related papers (2021-01-20T11:23:35Z) - Deep Reinforcement Learning Controller for 3D Path-following and Collision Avoidance by Autonomous Underwater Vehicles [0.0]
In complex systems, such as autonomous underwater vehicles, decision making becomes non-trivial.
We propose a solution using state-of-the-art Deep Reinforcement Learning (DRL) techniques.
Our results demonstrate the viability of DRL in path-following and avoiding collisions toward achieving human-level decision making in autonomous vehicle systems.
arXiv Detail & Related papers (2020-06-17T11:54:53Z) - Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in optimal yet physically feasible robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
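Regarding the fixed-length embedding mentioned in the Collision-Free Flocking (CACER-II) entry above: one common way to obtain a permutation- and count-invariant encoding of a variable number of follower states is a shared per-follower encoder followed by pooling (a Deep-Sets-style construction). The PyTorch sketch below assumes that construction; the name FollowerSetEncoder, the layer sizes, and the mean-pooling choice are illustrative and not taken from the paper.

```python
# Illustrative sketch: encode a variable number of follower states into a
# fixed-length, permutation-invariant embedding via shared-encoder pooling.
# This shows the general technique, not CACER-II's actual architecture.
import torch
import torch.nn as nn


class FollowerSetEncoder(nn.Module):
    def __init__(self, follower_dim: int = 6, embed_dim: int = 32):
        super().__init__()
        self.phi = nn.Sequential(            # shared per-follower encoder
            nn.Linear(follower_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        self.rho = nn.Sequential(            # post-pooling transform
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
        )

    def forward(self, followers: torch.Tensor) -> torch.Tensor:
        # followers: (num_followers, follower_dim); num_followers may vary.
        pooled = self.phi(followers).mean(dim=0)  # order- and count-invariant
        return self.rho(pooled)                   # fixed-length embedding


encoder = FollowerSetEncoder()
three_followers = torch.randn(3, 6)
seven_followers = torch.randn(7, 6)
# Both produce embeddings of the same fixed size, regardless of squad size:
assert encoder(three_followers).shape == encoder(seven_followers).shape
```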