Contextually Aware Intelligent Control Agents for Heterogeneous Swarms
- URL: http://arxiv.org/abs/2211.12560v1
- Date: Tue, 22 Nov 2022 20:25:59 GMT
- Title: Contextually Aware Intelligent Control Agents for Heterogeneous Swarms
- Authors: Adam Hepworth, Aya Hussein, Darryn Reid, Hussein Abbass
- Abstract summary: An emerging challenge in swarm shepherding research is to design effective and efficient artificial intelligence algorithms.
We propose a methodology to design a context-aware swarm-control intelligent agent.
We demonstrate successful shepherding in both homogeneous and heterogeneous swarms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: An emerging challenge in swarm shepherding research is to design effective and efficient artificial intelligence algorithms that maintain a low computational ceiling while increasing the swarm's ability to operate in diverse contexts. We propose a methodology to design a context-aware swarm-control intelligent agent. The intelligent control agent (shepherd) first uses swarm metrics to recognise the type of swarm it interacts with, then selects a suitable parameterisation from its behavioural library for that particular swarm type. The design principle of our methodology is to increase the situation awareness (i.e., information content) of the control agent without sacrificing the low computational cost necessary for efficient swarm control. We demonstrate successful shepherding in both homogeneous and heterogeneous swarms.
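A minimal sketch of the recognise-then-select loop the abstract describes, assuming hypothetical metrics (mean nearest-neighbour distance, speed variance) and a two-entry behavioural library; the paper's actual metric set, swarm types, and parameterisations are not reproduced here:

```python
# Sketch of the abstract's recognise-then-select loop. All names, metrics,
# and the two swarm profiles below are illustrative assumptions.
import numpy as np

# Hypothetical behavioural library: swarm type -> shepherd parameterisation.
BEHAVIOUR_LIBRARY = {
    "cohesive":  {"collect_gain": 0.5, "drive_gain": 1.5},
    "dispersed": {"collect_gain": 1.5, "drive_gain": 0.8},
}

# Reference metric vectors (mean nearest-neighbour distance, speed variance)
# standing in for per-type profiles learned offline.
PROTOTYPES = {
    "cohesive":  np.array([1.0, 0.1]),
    "dispersed": np.array([4.0, 0.6]),
}

def swarm_metrics(positions, velocities):
    """Cheap summary statistics of the swarm's current state."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)
    mean_nn = dists.min(axis=1).mean()                     # cohesion proxy
    speed_var = np.linalg.norm(velocities, axis=1).var()   # agitation proxy
    return np.array([mean_nn, speed_var])

def select_behaviour(positions, velocities):
    """Recognise the swarm type, then pick parameters from the library."""
    m = swarm_metrics(positions, velocities)
    swarm_type = min(PROTOTYPES, key=lambda t: np.linalg.norm(m - PROTOTYPES[t]))
    return swarm_type, BEHAVIOUR_LIBRARY[swarm_type]

rng = np.random.default_rng(0)
pos, vel = rng.normal(0, 3, (30, 2)), rng.normal(0, 0.5, (30, 2))
print(select_behaviour(pos, vel))
```

The design point the sketch mirrors is that recognition costs only a few summary statistics per step, so switching behaviours adds little to the control loop's computational budget.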
Related papers
- AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases [73.04652687616286]
We propose AgentPoison, the first backdoor attack targeting generic and RAG-based LLM agents by poisoning their long-term memory or RAG knowledge base.
Unlike conventional backdoor attacks, AgentPoison requires no additional model training or fine-tuning.
On each agent, AgentPoison achieves an average attack success rate higher than 80% with minimal impact on benign performance.
arXiv Detail & Related papers (2024-07-17T17:59:47Z)
- Surprise-Adaptive Intrinsic Motivation for Unsupervised Reinforcement Learning [6.937243101289336]
Entropy-minimizing and entropy-maximizing objectives for unsupervised reinforcement learning (RL) have each been shown to be effective, but in different environments.
We propose an agent that adapts its objective online to the entropy conditions it encounters, by framing the choice as a multi-armed bandit problem (a toy sketch follows this entry).
We demonstrate that such agents can learn to control entropy and exhibit emergent behaviors in both high- and low-entropy regimes.
arXiv Detail & Related papers (2024-05-27T14:58:24Z)
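A toy sketch of the entry above's central idea, posing the choice between entropy-minimising and entropy-maximising objectives as a two-armed bandit; the UCB1 rule and the synthetic reward signal are illustrative assumptions, not the paper's formulation:

```python
# Two-armed bandit choosing between unsupervised RL objectives online.
import math, random

arms = ["minimise_entropy", "maximise_entropy"]
counts, values = [0, 0], [0.0, 0.0]

def reward(arm, env_is_noisy):
    # Stand-in signal: the entropy-minimising objective pays off in noisy
    # (high-entropy) environments, the maximising one in static environments.
    good = (arm == 0) == env_is_noisy
    return random.gauss(1.0 if good else 0.2, 0.1)

env_is_noisy = True  # flip to False to watch the other arm win
for t in range(1, 501):
    if 0 in counts:              # play each arm once first
        arm = counts.index(0)
    else:                        # UCB1: mean value + exploration bonus
        arm = max(range(2),
                  key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
    r = reward(arm, env_is_noisy)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]   # incremental mean

print(dict(zip(arms, counts)))   # the matching objective dominates play
```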
- Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution [92.84441068115517]
Investigate-Consolidate-Exploit (ICE) is a novel strategy for enhancing the adaptability and flexibility of AI agents.
ICE promotes the transfer of knowledge between tasks for genuine self-evolution.
Our experiments on the XAgent framework demonstrate ICE's effectiveness, reducing API calls by as much as 80%.
arXiv Detail & Related papers (2024-01-25T07:47:49Z)
- Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms [14.404339094377319]
We seek to leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system.
Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors.
We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work.
arXiv Detail & Related papers (2023-04-25T15:18:06Z)
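A rough sketch of one way to learn a similarity space over swarm behaviours from human preference judgements, in the spirit of the entry above; the linear embedding, triplet hinge loss, and synthetic oracle are assumptions for illustration only:

```python
# Learn a 2-D similarity embedding from triplet answers of the form
# "behaviour a is more like p than like n".
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))            # 20 recorded behaviours, 6 raw metrics
W = rng.normal(scale=0.1, size=(6, 2))  # learned 2-D similarity embedding

def triplet_grad(W, a, p, n, margin=1.0):
    """Gradient of the hinge loss: anchor a should sit nearer p than n."""
    za, zp, zn = X[a] @ W, X[p] @ W, X[n] @ W
    loss = np.sum((za - zp) ** 2) - np.sum((za - zn) ** 2) + margin
    if loss <= 0:
        return np.zeros_like(W)
    return (np.outer(X[a], 2 * (zn - zp))    # dL/dz routed through z = xW
            + np.outer(X[p], 2 * (zp - za))
            + np.outer(X[n], 2 * (za - zn)))

def oracle_triplet():
    """Synthetic 'human': similarity is secretly the first raw metric."""
    a, p, n = rng.choice(20, size=3, replace=False)
    if abs(X[a, 0] - X[p, 0]) > abs(X[a, 0] - X[n, 0]):
        p, n = n, p
    return a, p, n

for _ in range(2000):
    W -= 0.01 * triplet_grad(W, *oracle_triplet())
# The embedding now orders behaviours by the metric the "human" used.
```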
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
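A generic illustration of the backprop-free, predictive-coding flavour of learning that this entry (and the later NGC paper below) builds on: both the latent state and the weights are updated from purely local prediction errors. This is a textbook-style sketch, not the paper's ActPC architecture:

```python
# Predictive-coding-style local learning: no backprop, only local errors.
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(scale=0.1, size=(8, 4))   # generative weights: latent -> input

def settle_and_learn(W, x, steps=20, lr_z=0.1, lr_w=0.01):
    z = np.zeros(4)                      # latent state for this input
    for _ in range(steps):
        err = x - W @ z                  # local prediction-error units
        z += lr_z * (W.T @ err - z)      # inference: reduce error (+ decay)
    W += lr_w * np.outer(err, z)         # Hebbian-style local weight update
    return W, float(err @ err)

# Learn to predict a fixed pattern; squared error shrinks over presentations.
x = rng.normal(size=8)
for _ in range(200):
    W, e = settle_and_learn(W, x)
print(round(e, 4))
```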
- Scalable Task-Driven Robotic Swarm Control via Collision Avoidance and Learning Mean-Field Control [23.494528616672024]
We use state-of-the-art mean-field control techniques to convert many-agent swarm control into classical single-agent control of distributions.
Here, we combine collision avoidance and learning of mean-field control into a unified framework for tractably designing intelligent robotic swarm behavior.
arXiv Detail & Related papers (2022-09-15T16:15:04Z)
- Understandable Controller Extraction from Video Observations of Swarms [0.0]
Swarm behavior emerges from the local interaction of agents and their environment, often encoded as simple rules.
We develop a method to automatically extract understandable swarm controllers from video demonstrations.
arXiv Detail & Related papers (2022-09-02T15:28:28Z)
- Collective motion emerging from evolving swarm controllers in different environments using gradient following task [2.7402733069181]
We consider a challenging task where robots with limited sensing and communication abilities must follow the gradient of an environmental feature.
We use Differential Evolution to evolve a neural network controller for simulated Thymio II robots.
Experiments confirm the feasibility of our approach: the evolved robot controllers induced swarm behaviour that solved the task.
arXiv Detail & Related papers (2022-03-22T10:08:50Z)
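A compact sketch of the evolve-a-controller loop described above: standard DE/rand/1/bin over the weights of a tiny feedforward controller. The one-dimensional gradient-climbing fitness and all network sizes are stand-in assumptions; the paper evolves controllers for simulated Thymio II robots:

```python
# Differential Evolution over neural controller weights.
import numpy as np

rng = np.random.default_rng(2)
DIM = 2 * 4 + 4 * 1  # weights of a 2-input, 4-hidden, 1-output network

def controller(w, sensors):
    w1, w2 = w[:8].reshape(2, 4), w[8:].reshape(4, 1)
    return np.tanh(np.tanh(sensors @ w1) @ w2)  # motor command in [-1, 1]

def fitness(w):
    # Toy episode: the robot reads a local gradient and moves in 1-D;
    # fitness is how far up the gradient it gets.
    x = 0.0
    for _ in range(50):
        sensors = np.array([np.cos(x), np.sin(x)])  # fake gradient readings
        x += 0.1 * controller(w, sensors)[0]
    return x

POP, F, CR = 20, 0.8, 0.9  # DE/rand/1/bin hyperparameters
pop = rng.normal(size=(POP, DIM))
fit = np.array([fitness(ind) for ind in pop])
for gen in range(100):
    for i in range(POP):
        a, b, c = pop[rng.choice([j for j in range(POP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # differential mutation
        cross = rng.random(DIM) < CR             # binomial crossover mask
        cross[rng.integers(DIM)] = True          # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if (f := fitness(trial)) > fit[i]:       # greedy selection
            pop[i], fit[i] = trial, f
print(f"best fitness: {fit.max():.2f}")
```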
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning [92.05556163518999]
Cooperative multi-agent reinforcement learning (MARL) exacerbates the challenge of large action spaces by imposing various constraints on communication and observability.
For value-based methods, it poses challenges in accurately representing the optimal value function.
For policy gradient methods, it makes training the critic difficult and exacerbates the problem of the lagging critic.
We show that from a learning theory perspective, both problems can be addressed by accurately representing the associated action-value function.
arXiv Detail & Related papers (2021-05-31T23:08:05Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
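A simplified sketch of the robustness idea in the entry above: train against the worst-case loss over small input perturbations. RADIAL-RL itself derives certified output bounds; the worst-of-k random perturbations below are a much cruder stand-in assumption:

```python
# Train a linear action scorer against worst-of-k input perturbations.
import numpy as np

rng = np.random.default_rng(3)
w = np.zeros(4)  # linear "policy": preferred action is sign(w @ obs)

def loss(w, obs, act):
    # Logistic loss on the preferred action (act in {-1, +1}).
    return np.log1p(np.exp(-act * (w @ obs)))

def robust_grad(w, obs, act, eps=0.1, k=8):
    # Evaluate the loss under k random perturbations inside the eps-ball
    # and take the gradient at the worst one (crude adversarial surrogate).
    perturbed = obs + rng.uniform(-eps, eps, size=(k, obs.size))
    worst = perturbed[np.argmax([loss(w, o, act) for o in perturbed])]
    s = -act / (1 + np.exp(act * (w @ worst)))  # d loss / d (w @ worst)
    return s * worst

# Toy data: the correct action is the sign of the first observation feature.
for _ in range(3000):
    obs = rng.normal(size=4)
    act = np.sign(obs[0]) or 1.0
    w -= 0.05 * robust_grad(w, obs, act)
print(w.round(2))  # weight on feature 0 dominates; small noise can't flip it
```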
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.