Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and
Task Allocation for Exploration and Navigation in Unknown Environments
- URL: http://arxiv.org/abs/2312.16606v1
- Date: Wed, 27 Dec 2023 15:13:56 GMT
- Title: Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and
Task Allocation for Exploration and Navigation in Unknown Environments
- Authors: Lavanya Ratnabala, Robinroy Peter, E.Y.A. Charles
- Abstract summary: The paper presents a method called the sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals.
The paper tackles the problem of inter-robot collisions (traffic congestion) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method.
A task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This research paper addresses the challenges of exploration and navigation in
unknown environments from an evolutionary swarm robotics perspective. Path
formation plays a crucial role in enabling cooperative swarm robots to
accomplish these tasks. The paper presents a method called the sub-goal-based
path formation, which establishes a path between two different locations by
exploiting visually connected sub-goals. Simulation experiments conducted in
the ARGoS simulator demonstrate the successful formation of paths in the
majority of trials.
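To make the sub-goal idea concrete, the sketch below builds a path as a chain of robots in which each sub-goal is visible from the previous one. It is a minimal illustration only: the visual range, the greedy selection rule, and all names are assumptions, not the controller evolved in ARGoS.

```python
# Illustrative sketch only: greedily chain "sub-goal" robots from a start
# location to a target so that consecutive sub-goals stay within an assumed
# visual range. This is NOT the paper's ARGoS controller; names and the
# range value are hypothetical.
import math

VISUAL_RANGE = 1.5  # assumed maximum distance at which a robot can see a sub-goal


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def form_subgoal_chain(start, target, robot_positions, visual_range=VISUAL_RANGE):
    """Greedily pick robots as sub-goals so consecutive sub-goals stay visually connected."""
    chain = [start]
    remaining = list(robot_positions)
    while dist(chain[-1], target) > visual_range:
        # candidate sub-goals visible from the last sub-goal
        visible = [p for p in remaining if dist(chain[-1], p) <= visual_range]
        if not visible:
            return None  # no visually connected path with the available robots
        # advance toward the target: pick the visible robot closest to it
        nxt = min(visible, key=lambda p: dist(p, target))
        chain.append(nxt)
        remaining.remove(nxt)
    chain.append(target)
    return chain


if __name__ == "__main__":
    robots = [(0.8, 0.1), (1.9, 0.3), (3.0, 0.2), (4.1, 0.0)]
    print(form_subgoal_chain((0.0, 0.0), (5.0, 0.0), robots))
```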
Furthermore, the paper tackles the problem of inter-robot collisions (traffic
congestion) among a large number of robots engaged in path formation, which negatively impacts
the performance of the sub-goal-based method. To mitigate this issue, a task
allocation strategy is proposed, leveraging local communication protocols and
light signal-based communication. The strategy evaluates the distance between
points and determines the required number of robots for the path formation
task, reducing unwanted exploration and traffic congestion. The performance of
the sub-goal-based path formation and task allocation strategy is evaluated by
comparing path length, time, and resource reduction against the A* algorithm.
The simulation experiments demonstrate promising results, showcasing the
scalability, robustness, and fault tolerance characteristics of the proposed
approach.
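The abstract does not give the exact allocation rule, so the following is a hedged sketch of the distance-based idea: estimate how many robots a path between two points needs, so the rest of the swarm can skip the task and traffic is reduced. The hop-count formula and safety margin are illustrative assumptions, not the paper's exact strategy.

```python
# Hedged sketch of distance-based task allocation: estimate the number of
# sub-goal robots required to span a given distance with visually connected
# hops. The formula and margin below are assumptions for illustration.
import math


def robots_needed(distance, visual_range=1.5, margin=1.2):
    """Estimate sub-goal robots required to cover `distance` with hops of at most `visual_range`."""
    hops = math.ceil(distance / visual_range)
    return math.ceil(max(hops - 1, 0) * margin)  # interior sub-goals, padded by a margin


if __name__ == "__main__":
    for d in (3.0, 7.5, 12.0):
        print(f"distance {d:4.1f} m -> allocate about {robots_needed(d)} robots")
```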
Related papers
- Online Context Learning for Socially-compliant Navigation [49.609656402450746]
This letter introduces an online context learning method that aims to empower robots to adapt to new social environments online.
Experiments using a community-wide simulator show that our method outperforms the state-of-the-art ones.
arXiv Detail & Related papers (2024-06-17T12:59:13Z)
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm, which has the capability to learn control policies at a notably faster rate compared to similar methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration [0.0]
Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands may cause undue delays in mission fulfillment.
An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly.
Active learning algorithms offer this capability of intelligent exploration, but the underlying model structure varies the performance of the active learning algorithm in accurately forming an understanding of the environment.
arXiv Detail & Related papers (2023-09-28T02:45:14Z)
- Multi-Robot Path Planning Combining Heuristics and Multi-Agent Reinforcement Learning [0.0]
In the movement process, robots need to avoid collisions with other moving robots while minimizing their travel distance.
Previous methods for this problem either continuously replan paths using search methods to avoid conflicts or choose appropriate collision avoidance strategies based on learning approaches.
We propose a path planning method, MAPPOHR, which combines heuristic search, empirical rules, and multi-agent reinforcement learning.
arXiv Detail & Related papers (2023-06-02T05:07:37Z)
- Robot path planning using deep reinforcement learning [0.0]
Reinforcement learning methods offer an alternative to map-free navigation tasks.
Deep reinforcement learning agents are implemented for both the obstacle avoidance and the goal-oriented navigation task.
An analysis of the changes in the behaviour and performance of the agents caused by modifications in the reward function is conducted.
arXiv Detail & Related papers (2023-02-17T20:08:59Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to exactly achieve the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- Language-guided Navigation via Cross-Modal Grounding and Alternate Adversarial Learning [66.9937776799536]
The emerging vision-and-language navigation (VLN) problem aims at learning to navigate an agent to the target location in unseen photo-realistic environments.
The main challenges of VLN arise mainly from two aspects: first, the agent needs to attend to the meaningful paragraphs of the language instruction corresponding to the dynamically-varying visual environments.
We propose a cross-modal grounding module to equip the agent with a better ability to track the correspondence between the textual and visual modalities.
arXiv Detail & Related papers (2020-11-22T09:13:46Z)
- Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization [4.559353193715442]
This paper studies reward-based policy exploration using a supervised learning approach.
We show that our method provides stable learning progress and significant sample efficiency compared to a number of state-of-the-art robotic reinforcement learning methods.
arXiv Detail & Related papers (2020-11-09T09:53:37Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)