Developing Decentralised Resilience to Malicious Influence in Collective
Perception Problem
- URL: http://arxiv.org/abs/2211.03063v1
- Date: Sun, 6 Nov 2022 08:53:33 GMT
- Title: Developing Decentralised Resilience to Malicious Influence in Collective
Perception Problem
- Authors: Chris Wise, Aya Hussein, Heba El-Fiqi
- Abstract summary: In collective decision-making, designing algorithms that use only local information to effect swarm-level behaviour is a non-trivial problem.
We used machine learning techniques to teach swarm members to map their local perceptions of the environment to an optimal action.
We extended previous approaches by creating a curriculum that taught agents resilience to malicious influence.
- Score: 0.7734726150561088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In collective decision-making, designing algorithms that use only local
information to effect swarm-level behaviour is a non-trivial problem. We used
machine learning techniques to teach swarm members to map their local
perceptions of the environment to an optimal action. A curriculum inspired by
Machine Education approaches was designed to facilitate this learning process
and teach the members the skills required for optimal performance in the
collective perception problem. We extended previous approaches by creating a
curriculum that taught agents resilience to malicious influence. The
experimental results show that well-designed rules-based algorithms can produce
effective agents. When performing opinion fusion, we implemented decentralised
resilience by having agents dynamically weight received opinions. We found no
statistically significant difference between constant and dynamic weights,
suggesting that momentum-based opinion fusion may already act as a resilience
mechanism.
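As a rough illustration, momentum-based opinion fusion with dynamic weighting of received opinions might be sketched as follows. The function name, update rule, and parameter values here are hypothetical, not the paper's exact formulation:

```python
def fuse_opinions(own_opinion, received, weights=None, momentum=0.9):
    """Blend an agent's opinion with its neighbours' reported opinions.

    own_opinion: float in [0, 1], the agent's current belief.
    received:    list of floats, opinions received from neighbours.
    weights:     optional per-neighbour trust weights (dynamic weighting);
                 uniform weights reproduce the constant-weight baseline.
    momentum:    how strongly the agent retains its own prior belief.
    """
    if not received:
        return own_opinion
    if weights is None:
        weights = [1.0] * len(received)  # constant-weight baseline
    total = sum(weights)
    consensus = sum(w * o for w, o in zip(weights, received)) / total
    # Momentum retains most of the prior belief, damping the influence
    # of any single (possibly malicious) incoming opinion.
    return momentum * own_opinion + (1.0 - momentum) * consensus
```

Under this sketch, a high momentum term already limits how far any one received opinion can pull an agent, which is consistent with the paper's observation that dynamic weighting added little on top of it.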
Related papers
- External Model Motivated Agents: Reinforcement Learning for Enhanced Environment Sampling [3.536024441537599]
Unlike reinforcement learning (RL) agents, humans remain capable multitaskers in changing environments.
We propose an agent influence framework for RL agents to improve the adaptation efficiency of external models in changing environments.
Our results show that our method outperforms the baselines in terms of external model adaptation on metrics that measure both efficiency and performance.
arXiv Detail & Related papers (2024-06-28T23:31:22Z)
- The Role of Learning Algorithms in Collective Action [8.955918346078935]
We show that the effective size and success of a collective are highly dependent on the properties of the learning algorithm.
This highlights the necessity of taking the learning algorithm into account when studying the impact of collective action in machine learning.
arXiv Detail & Related papers (2024-05-10T16:36:59Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
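The intervention-as-reward idea can be sketched minimally as a relabelling step. This is a hypothetical illustration of the concept, not RLIF's actual implementation; the transition format and function names are assumptions:

```python
# Hypothetical transition format: (state, action, intervened, next_state).

def intervention_reward(intervened: bool) -> float:
    """Derive reward from the intervention signal alone: the agent is
    penalised whenever the human expert felt the need to take over."""
    return -1.0 if intervened else 0.0

def relabel_transitions(transitions):
    """Relabel logged transitions with intervention-based rewards so a
    standard off-policy RL algorithm can train on them without any
    hand-designed task reward."""
    return [
        (state, action, intervention_reward(intervened), next_state)
        for (state, action, intervened, next_state) in transitions
    ]
```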
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Imitation Learning based Alternative Multi-Agent Proximal Policy Optimization for Well-Formed Swarm-Oriented Pursuit Avoidance [15.498559530889839]
In this paper, we put forward a decentralized-learning-based Alternative Multi-Agent Proximal Policy Optimization (IA-MAPPO) algorithm to execute the pursuit-avoidance task in a well-formed swarm.
We utilize imitation learning to decentralize the formation controller, so as to reduce the communication overheads and enhance the scalability.
The simulation results validate the effectiveness of IA-MAPPO, and extensive ablation experiments further show performance comparable to a centralized solution with a significant decrease in communication overheads.
arXiv Detail & Related papers (2023-11-06T06:58:16Z)
- Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to perturbations of varying strength.
arXiv Detail & Related papers (2023-03-23T15:05:16Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- Resilient robot teams: a review integrating decentralised control, change-detection, and learning [10.312968200748116]
This paper reviews opportunities and challenges for decentralised control, change-detection, and learning in the context of resilient robot teams.
Recent findings: Exogenous fault-detection methods can provide either generic fault detection or a specific diagnosis with a recovery solution.
Resilient methods for decentralised control have been developed in learning perception-action-communication loops, multi-agent reinforcement learning, embodied evolution, offline evolution with online adaptation, explicit task allocation, and stigmergy in swarm robotics.
arXiv Detail & Related papers (2022-04-21T12:51:27Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Distributed Bayesian Online Learning for Cooperative Manipulation [9.582645137247667]
We propose a novel distributed learning framework for the exemplary task of cooperative manipulation using Bayesian principles.
Using only local state information each agent obtains an estimate of the object dynamics and grasp kinematics.
Each estimate of the object dynamics and grasp kinematics is accompanied by a measure of uncertainty, which makes it possible to guarantee a bounded prediction error with high probability.
arXiv Detail & Related papers (2021-04-09T13:03:09Z)
- Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.