Learning Emergent Behavior in Robot Swarms with NEAT
- URL: http://arxiv.org/abs/2309.14663v1
- Date: Tue, 26 Sep 2023 04:40:52 GMT
- Title: Learning Emergent Behavior in Robot Swarms with NEAT
- Authors: Pranav Rajbhandari, Donald Sofge
- Abstract summary: We present a method of training distributed robotic swarm algorithms to produce emergent behavior.
Inspired by the biological evolution of emergent behavior in animals, we use an evolutionary algorithm to train a 'population' of individual behaviors.
We perform experiments using simulations of the Georgia Tech Miniature Autonomous Blimps (GT-MABs) aerial robotics platforms conducted in the CoppeliaSim simulator.
- Score: 1.2315709793304113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When researching robot swarms, many studies observe complex group behavior
emerging from the individual agents' simple local actions. However, the task of
learning an individual policy to produce a desired emergent behavior remains a
challenging and largely unsolved problem. We present a method of training
distributed robotic swarm algorithms to produce emergent behavior. Inspired by
the biological evolution of emergent behavior in animals, we use an
evolutionary algorithm to train a 'population' of individual behaviors to
approximate a desired group behavior. We perform experiments using simulations
of the Georgia Tech Miniature Autonomous Blimps (GT-MABs) aerial robotics
platforms conducted in the CoppeliaSim simulator. Additionally, we test on
simulations of Anki Vector robots to display our algorithm's effectiveness on
various modes of actuation. We evaluate our algorithm on various tasks where a
somewhat complex group behavior is required for success. These tasks include an
Area Coverage task, a Surround Target task, and a Wall Climb task. We compare
behaviors evolved using our algorithm against 'designed policies', which we
create in order to exhibit the emergent behaviors we desire.
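
The abstract describes evolving a single per-agent controller with NEAT and scoring it by a group-level objective. Below is a minimal sketch (not the authors' code) of that idea using the neat-python library: one genome is copied onto every agent of a homogeneous swarm, and its fitness is a simple Area Coverage score. `ToySwarmSim`, its methods, and the config filename `neat_swarm.cfg` are hypothetical placeholders standing in for the paper's CoppeliaSim GT-MAB simulation; a standard neat-python config file with `num_inputs=4` and `num_outputs=2` is assumed.

```python
import random
import neat  # pip install neat-python

NUM_AGENTS = 5
EPISODE_STEPS = 300
ARENA = 10.0   # agents move in a [0, ARENA] x [0, ARENA] square
CELL = 1.0     # grid resolution for the coverage score

class ToySwarmSim:
    """Toy 2D point-agent world standing in for the CoppeliaSim GT-MAB scenes."""
    def __init__(self, num_agents):
        self.pos = [[random.uniform(0, ARENA), random.uniform(0, ARENA)]
                    for _ in range(num_agents)]
        self.visited = set()

    def observe(self, i):
        # Local observation: own position plus the swarm's mean position.
        cx = sum(p[0] for p in self.pos) / len(self.pos)
        cy = sum(p[1] for p in self.pos) / len(self.pos)
        x, y = self.pos[i]
        return [x / ARENA, y / ARENA, cx / ARENA, cy / ARENA]

    def act(self, i, command):
        # Interpret the first two network outputs as a velocity command clipped to [-1, 1].
        dx, dy = (max(-1.0, min(1.0, c)) for c in command[:2])
        self.pos[i][0] = min(ARENA, max(0.0, self.pos[i][0] + 0.1 * dx))
        self.pos[i][1] = min(ARENA, max(0.0, self.pos[i][1] + 0.1 * dy))
        self.visited.add((int(self.pos[i][0] // CELL), int(self.pos[i][1] // CELL)))

    def coverage(self):
        return len(self.visited)  # group-level Area Coverage score

def eval_genomes(genomes, config):
    for _, genome in genomes:
        # Homogeneous swarm: every agent runs a copy of the same network.
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        sim = ToySwarmSim(NUM_AGENTS)
        for _ in range(EPISODE_STEPS):
            for i in range(NUM_AGENTS):
                sim.act(i, net.activate(sim.observe(i)))
        # Fitness rewards the emergent group outcome, not any single agent's reward.
        genome.fitness = float(sim.coverage())

# Hypothetical config filename; a standard neat-python config file is assumed.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_swarm.cfg")
population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))
best_genome = population.run(eval_genomes, 50)  # evolve for 50 generations
```

Scoring the shared genome by a group metric, rather than per-agent rewards, is what allows simple local policies to be selected for the emergent behavior they produce.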
Related papers
- No-brainer: Morphological Computation driven Adaptive Behavior in Soft Robots [0.24554686192257422]
We show that intelligent behavior can be created without a separate and explicit brain for robot control.
Specifically, we show that adaptive and complex behavior can be created in voxel-based virtual soft robots by using simple reactive materials.
arXiv Detail & Related papers (2024-07-23T16:20:36Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning algorithm for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- Interactive Multi-Robot Flocking with Gesture Responsiveness and Musical Accompaniment [0.7659052547635159]
This work presents a compelling multi-robot task in which the main aim is to enthrall and interest a human participant.
In this task, the goal is for a human to be drawn to move alongside and participate in a dynamic, expressive robot flock.
Towards this aim, the research team created algorithms for robot movements and engaging interaction modes such as gestures and sound.
arXiv Detail & Related papers (2024-03-30T18:16:28Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Adapt On-the-Go: Behavior Modulation for Single-Life Robot Deployment [92.48012013825988]
We study the problem of adapting on-the-fly to novel scenarios during deployment.
Our approach, RObust Autonomous Modulation (ROAM), introduces a mechanism based on the perceived value of pre-trained behaviors.
We demonstrate that ROAM enables a robot to adapt rapidly to changes in dynamics both in simulation and on a real Go1 quadruped.
arXiv Detail & Related papers (2023-11-02T08:22:28Z)
- Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms [14.404339094377319]
We seek to leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system.
Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors.
We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work.
arXiv Detail & Related papers (2023-04-25T15:18:06Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- Investigation of Warrior Robots Behavior by Using Evolutionary Algorithms [0.09668407688201358]
These algorithms are inspired by nature, causing the robots' behaviors to resemble collective behavior.
For robots without any built-in intelligence, we can define such an algorithm and show the results in a simple simulation.
arXiv Detail & Related papers (2020-11-18T18:31:27Z)
- Learning Behavior Trees with Genetic Programming in Unpredictable Environments [7.839247285151348]
We show that genetic programming can be effectively used to learn the structure of a behavior tree.
We demonstrate that the learned BTs can solve the same task in a realistic simulator, reaching convergence without the need for task specifics.
arXiv Detail & Related papers (2020-11-06T09:28:23Z)
- Thinking While Moving: Deep Reinforcement Learning with Concurrent Control [122.49572467292293]
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system.
Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed.
arXiv Detail & Related papers (2020-04-13T17:49:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.