Comparing Active Learning Performance Driven by Gaussian Processes or
Bayesian Neural Networks for Constrained Trajectory Exploration
- URL: http://arxiv.org/abs/2309.16114v1
- Date: Thu, 28 Sep 2023 02:45:14 GMT
- Title: Comparing Active Learning Performance Driven by Gaussian Processes or
Bayesian Neural Networks for Constrained Trajectory Exploration
- Authors: Sapphira Akins, Frances Zhu
- Abstract summary: Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands may cause undue delays in mission fulfillment.
An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly.
Active learning algorithms offer this capability of intelligent exploration, but the choice of underlying model structure affects how accurately and efficiently the algorithm forms an understanding of the environment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robots with increasing autonomy progress our space exploration capabilities,
particularly for in-situ exploration and sampling to stand in for human
explorers. Currently, humans drive robots to meet scientific objectives, but
depending on the robot's location, the exchange of information and driving
commands between the human operator and robot may cause undue delays in mission
fulfillment. An autonomous robot encoded with a scientific objective and an
exploration strategy incurs no communication delays and can fulfill missions
more quickly. Active learning algorithms offer this capability of intelligent
exploration, but the choice of underlying model structure affects how accurately
the algorithm forms an understanding of the environment. In this paper, we
investigate the performance differences between
active learning algorithms driven by Gaussian processes or Bayesian neural
networks for exploration strategies encoded on agents that are constrained in
their trajectories, like planetary surface rovers. These two active learning
strategies were tested in a simulation environment against science-blind
strategies to predict the spatial distribution of a variable of interest along
multiple datasets. The performance metrics of interest are model accuracy in
root mean squared (RMS) error, training time, model convergence, total distance
traveled until convergence, and total samples until convergence. Active
learning strategies encoded with Gaussian processes require less computation to
train, converge to an accurate model more quickly, and propose trajectories of
shorter distance, except in a few complex environments in which Bayesian neural
networks achieve a more accurate model in the large data regime due to their
more expressive functional bases. The paper concludes with advice on when and
how to implement either exploration strategy for future space missions.
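To make the setup concrete, below is a minimal sketch (not the authors' implementation) of a Gaussian-process-driven active learning loop for a trajectory-constrained agent: the rover may only step to adjacent grid cells and, at each step, moves to the reachable cell with the highest predictive uncertainty. The 20x20 grid, the RBF kernel, the variance-maximization acquisition rule, and the synthetic `field` function are illustrative assumptions; scikit-learn's GaussianProcessRegressor stands in for whatever GP implementation the paper uses.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical variable of interest over a 20x20 field; in the paper's setting
# the environment is unknown to the rover and sampled in situ.
GRID_N = 20
grid = np.array([(i, j) for i in range(GRID_N) for j in range(GRID_N)], dtype=float)

def field(xy):
    # Stand-in ground truth, used here only to generate samples for the sketch.
    return np.sin(0.4 * xy[:, 0]) * np.cos(0.3 * xy[:, 1])

def neighbors(cell):
    # Trajectory constraint: only 4-connected moves from the current cell.
    i, j = cell
    steps = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(a, b) for a, b in steps if 0 <= a < GRID_N and 0 <= b < GRID_N]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1e-3),
                              normalize_y=True)

# Seed with a short initial transect so the GP has something to fit.
X = [[0, 0], [1, 0], [2, 0]]
y = [field(np.array([p], dtype=float))[0] for p in X]
current = (2, 0)

for _ in range(150):
    gp.fit(np.array(X, dtype=float), np.array(y))
    # Acquisition: among reachable cells, move to the one with the highest
    # predictive standard deviation (simple uncertainty-driven exploration).
    cand = np.array(neighbors(current), dtype=float)
    _, std = gp.predict(cand, return_std=True)
    current = tuple(int(v) for v in cand[np.argmax(std)])
    X.append(list(current))
    y.append(field(np.array([current], dtype=float))[0])

mu = gp.predict(grid)
rmse = np.sqrt(np.mean((mu - field(grid)) ** 2))  # RMS-error metric from the abstract
print(f"RMS error after {len(X)} samples: {rmse:.3f}")
```

Swapping the Gaussian process for a Bayesian neural network (for example, a Monte Carlo dropout network or a deep ensemble) changes only how the predictive mean and uncertainty are produced; the constrained acquisition loop stays the same, which is essentially the comparison the paper carries out.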
Related papers
- Multi-Agent Dynamic Relational Reasoning for Social Robot Navigation [50.01551945190676]
Social robot navigation can be helpful in various contexts of daily life but requires safe human-robot interactions and efficient trajectory planning.
We propose a systematic relational reasoning approach with explicit inference of the underlying dynamically evolving relational structures.
We demonstrate its effectiveness for multi-agent trajectory prediction and social robot navigation.
arXiv Detail & Related papers (2024-01-22T18:58:22Z)
- Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments [0.0]
The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals.
The paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method.
A task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication.
arXiv Detail & Related papers (2023-12-27T15:13:56Z)
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm, which has the capability to learn control policies at a notably faster rate compared to similar methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
arXiv Detail & Related papers (2023-05-20T17:20:12Z)
- Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5) is tested.
The paper discusses the path to implementing the feature recognition algorithms and towards integrating them into the spacecraft Guidance Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Graph Neural Networks for Decentralized Multi-Robot Submodular Action Selection [101.38634057635373]
We focus on applications where robots are required to jointly select actions to maximize team submodular objectives.
We propose a general-purpose learning architecture towards submodular maximization at scale, with decentralized communications.
We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.
arXiv Detail & Related papers (2021-05-18T15:32:07Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs [5.043563227694137]
We consider an autonomous exploration problem in which a range-sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time.
We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space.
arXiv Detail & Related papers (2020-07-24T16:50:41Z)