Understandable Controller Extraction from Video Observations of Swarms
- URL: http://arxiv.org/abs/2209.01118v1
- Date: Fri, 2 Sep 2022 15:28:28 GMT
- Title: Understandable Controller Extraction from Video Observations of Swarms
- Authors: Khulud Alharthi, Zahraa S Abdallah, Sabine Hauert
- Abstract summary: Swarm behavior emerges from the local interaction of agents and their environment, often encoded as simple rules.
We develop a method to automatically extract understandable swarm controllers from video demonstrations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Swarm behavior emerges from the local interaction of agents and their
environment, often encoded as simple rules. Extracting the rules by watching a
video of the overall swarm behavior could help us study and control swarm
behavior in nature, or artificial swarms that have been designed by external
actors. It could also serve as a new source of inspiration for swarm robotics.
Yet extracting such rules is challenging as there is often no visible link
between the emergent properties of the swarm and their local interactions. To
this end, we develop a method to automatically extract understandable swarm
controllers from video demonstrations. The method uses evolutionary algorithms
driven by a fitness function that compares eight high-level swarm metrics. The
method is able to extract many controllers (behavior trees) in a simple
collective movement task. We then provide a qualitative analysis of cases
where different extracted trees produced similar behaviors. This provides the
first steps toward the automatic extraction of swarm controllers from
observations.
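The extraction loop described in the abstract (an evolutionary algorithm whose fitness compares high-level swarm metrics between the demonstration and a candidate controller's simulation) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `swarm_metrics`, `simulate`, and the real-vector genome are hypothetical stand-ins for the paper's eight video-derived metrics and its behavior-tree controllers.

```python
import random

def swarm_metrics(positions):
    """Toy high-level metrics from agent positions: centroid and mean spread
    (the paper compares eight such metrics)."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    spread = sum(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in positions) / n
    return [cx, cy, spread]

def fitness(demo_metrics, candidate_metrics):
    """Negative squared distance between metric vectors: higher is better."""
    return -sum((a - b) ** 2 for a, b in zip(demo_metrics, candidate_metrics))

def evolve(demo_metrics, simulate, pop_size=20, generations=50, mut=0.1):
    """Minimal truncation-selection evolutionary loop over controller parameters.
    `simulate` maps a parameter vector to final agent positions."""
    pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    score = lambda c: fitness(demo_metrics, swarm_metrics(simulate(c)))
    for _ in range(generations):
        parents = sorted(pop, key=score, reverse=True)[: pop_size // 2]
        children = [[g + random.gauss(0, mut) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=score)
```

Any controller representation with a simulator and a metric extractor could be dropped into this loop; the key idea is that fitness never compares trajectories directly, only the emergent metrics.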
Related papers
- Spiking Neural Networks as a Controller for Emergent Swarm Agents [8.816729033097868]
Existing research explores the possible emergent behaviors in swarms of robots with only a binary sensor and a simple but hand-picked controller structure.
This paper investigates the feasibility of training spiking neural networks to find those local interaction rules that result in particular emergent behaviors.
arXiv Detail & Related papers (2024-10-21T16:41:35Z)
- Multistep Inverse Is Not All You Need [87.62730694973696]
In real-world control settings, the observation space is often unnecessarily high-dimensional and subject to time-correlated noise.
It is therefore desirable to learn an encoder to map the observation space to a simpler space of control-relevant variables.
We propose a new algorithm, ACDF, which combines multistep-inverse prediction with a latent forward model.
arXiv Detail & Related papers (2024-03-18T16:36:01Z)
- CALM: Conditional Adversarial Latent Models for Directable Virtual Characters [71.66218592749448]
We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
arXiv Detail & Related papers (2023-05-02T09:01:44Z)
- Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms [14.404339094377319]
We seek to leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system.
Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors.
We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work.
arXiv Detail & Related papers (2023-04-25T15:18:06Z)
- Program Generation from Diverse Video Demonstrations [49.202289347899836]
Generalising over multiple observations has historically been difficult for machines.
We propose a model that can extract general rules from video demonstrations by simultaneously performing summarisation and translation.
arXiv Detail & Related papers (2023-02-01T01:51:45Z)
- Contextually Aware Intelligent Control Agents for Heterogeneous Swarms [0.0]
An emerging challenge in swarm shepherding research is to design effective and efficient artificial intelligence algorithms.
We propose a methodology to design a context-aware swarm-control intelligent agent.
We demonstrate successful shepherding in both homogeneous and heterogeneous swarms.
arXiv Detail & Related papers (2022-11-22T20:25:59Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Collective motion emerging from evolving swarm controllers in different environments using gradient following task [2.7402733069181]
We consider a challenging task where robots with limited sensing and communication abilities must follow the gradient of an environmental feature.
We use Differential Evolution to evolve a neural network controller for simulated Thymio II robots.
Experiments confirm the feasibility of our approach: the evolved robot controllers induced swarm behaviour that solved the task.
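The Differential Evolution step used in this paper can be sketched as a generic DE/rand/1/bin loop. This is a hedged, minimal sketch: the `evaluate` callback is a hypothetical stand-in for running the Thymio II simulation with a candidate neural-network weight vector and returning a cost.

```python
import random

def differential_evolution(evaluate, dim, pop_size=20, F=0.5, CR=0.9, generations=100):
    """Generic DE/rand/1/bin minimizer over a real-valued genome
    (e.g. flattened neural-network controller weights)."""
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    costs = [evaluate(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals other than the target
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantee at least one mutated gene
            trial = [a[k] + F * (b[k] - c[k])
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            cost = evaluate(trial)
            if cost <= costs[i]:  # greedy replacement
                pop[i], costs[i] = trial, cost
    return pop[costs.index(min(costs))]
```

In the gradient-following setting, `evaluate` would deploy the candidate controller on the simulated swarm and return, for example, the negative of how well the robots tracked the environmental gradient.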
arXiv Detail & Related papers (2022-03-22T10:08:50Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- A model-based framework for learning transparent swarm behaviors [6.310689648471231]
This paper proposes a model-based framework to design understandable and verifiable behaviors for swarms of robots.
The framework is tested on four case studies, featuring aggregation and foraging tasks.
arXiv Detail & Related papers (2021-03-09T10:45:57Z)
- State-Only Imitation Learning for Dexterous Manipulation [63.03621861920732]
In this paper, we explore state-only imitation learning.
We train an inverse dynamics model and use it to predict actions for state-only demonstrations.
Our method performs on par with state-action approaches and considerably outperforms RL alone.
arXiv Detail & Related papers (2020-04-07T17:57:20Z)
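The inverse-dynamics idea in the entry above (fit a model of actions from state transitions, then use it to label state-only demonstrations) can be illustrated with a deliberately tiny stand-in. All names here are hypothetical, and a one-parameter linear model replaces the paper's learned network.

```python
def fit_inverse_dynamics(transitions):
    """Fit a(s, s') from (s, a, s') tuples; a least-squares linear model
    a ~ w * (s' - s) stands in for the learned inverse dynamics network."""
    num = sum(a * (s2 - s1) for s1, a, s2 in transitions)
    den = sum((s2 - s1) ** 2 for s1, a, s2 in transitions)
    w = num / den
    return lambda s1, s2: w * (s2 - s1)

def label_demonstration(states, inv_model):
    """Predict the missing actions for a state-only demonstration,
    one per consecutive state pair."""
    return [inv_model(states[i], states[i + 1]) for i in range(len(states) - 1)]
```

Once the demonstration is labeled this way, standard state-action imitation learning can be applied to it.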
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.