Resilient robot teams: a review integrating decentralised control,
change-detection, and learning
- URL: http://arxiv.org/abs/2204.10063v1
- Date: Thu, 21 Apr 2022 12:51:27 GMT
- Title: Resilient robot teams: a review integrating decentralised control,
change-detection, and learning
- Authors: David M. Bossens, Sarvapali Ramchurn, Danesh Tarapore
- Abstract summary: This paper reviews opportunities and challenges for decentralised control, change-detection, and learning in the context of resilient robot teams.
Recent findings: Exogenous fault detection methods can provide a generic detection or a specific diagnosis with a recovery solution.
Resilient methods for decentralised control have been developed in learning perception-action-communication loops, multi-agent reinforcement learning, embodied evolution, offline evolution with online adaptation, explicit task allocation, and stigmergy in swarm robotics.
- Score: 10.312968200748116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose of review: This paper reviews opportunities and challenges for
decentralised control, change-detection, and learning in the context of
resilient robot teams.
Recent findings: Exogenous fault detection methods can provide a generic
detection or a specific diagnosis with a recovery solution. Robot teams can
perform active and distributed sensing for detecting changes in the
environment, including identifying and tracking dynamic anomalies, as well as
collaboratively mapping dynamic environments. Resilient methods for
decentralised control have been developed in learning
perception-action-communication loops, multi-agent reinforcement learning,
embodied evolution, offline evolution with online adaptation, explicit task
allocation, and stigmergy in swarm robotics.
Summary: Remaining challenges for resilient robot teams are integrating
change-detection and trial-and-error learning methods, obtaining reliable
performance evaluations under constrained evaluation time, improving the safety
of resilient robot teams, theoretical results demonstrating rapid adaptation to
given environmental perturbations, and designing realistic and compelling case
studies.
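The abstract's notion of generic exogenous fault detection can be illustrated with a minimal sketch: flag a team member whose behaviour deviates strongly from its peers. The median-deviation rule, thresholds, and all names below are illustrative assumptions, not taken from the reviewed papers.

```python
import statistics

def detect_faulty_robots(displacements, k=3.0):
    """Generic exogenous fault detection sketch: flag robots whose
    per-cycle displacement deviates from the team median by more than
    k median-absolute-deviations (illustrative rule, not from the paper)."""
    med = statistics.median(displacements)
    mad = statistics.median(abs(d - med) for d in displacements) or 1e-9
    return [i for i, d in enumerate(displacements)
            if abs(d - med) / mad > k]

# Robots 0-4 each moved about 1 m this cycle; robot 5 is stalled
# (a possible actuator fault).
moves = [1.02, 0.98, 1.05, 0.97, 1.01, 0.03]
print(detect_faulty_robots(moves))  # → [5]
```

This yields a generic detection (something is wrong with robot 5) without a specific diagnosis, matching the distinction the abstract draws.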
Related papers
- A Survey of Self-Evolving Agents: On Path to Artificial Super Intelligence [87.08051686357206]
Large Language Models (LLMs) have demonstrated strong capabilities but remain fundamentally static. As LLMs are increasingly deployed in open-ended, interactive environments, this static nature has become a critical bottleneck. This survey provides the first systematic and comprehensive review of self-evolving agents.
arXiv Detail & Related papers (2025-07-28T17:59:05Z)
- Situationally-Aware Dynamics Learning [57.698553219660376]
We propose a novel framework for online learning of hidden state representations. Our approach explicitly models the influence of unobserved parameters on both transition dynamics and reward structures. Experiments in both simulation and the real world reveal significant improvements in data efficiency, policy performance, and the emergence of safer, adaptive navigation strategies.
arXiv Detail & Related papers (2025-05-26T06:40:11Z)
- Intelligent Sensing-to-Action for Robust Autonomy at the Edge: Opportunities and Challenges [19.390215975410406]
Autonomous edge computing in robotics, smart cities, and autonomous vehicles relies on seamless integration of sensing, processing, and actuation.
At its core is the sensing-to-action loop, which iteratively aligns sensor inputs with computational models to drive adaptive control strategies.
This article explores how proactive, context-aware sensing-to-action and action-to-sensing adaptations can enhance efficiency.
arXiv Detail & Related papers (2025-02-04T20:13:58Z)
- Cooperative Resilience in Artificial Intelligence Multiagent Systems [2.0608564715600273]
This paper proposes a clear definition of 'cooperative resilience' and a methodology for its quantitative measurement.
The results highlight the crucial role of resilience metrics in analyzing how the collective system prepares for, resists, recovers from, sustains well-being, and transforms in the face of disruptions.
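The kind of quantitative resilience measurement described above can be sketched as a performance-integral metric: how much performance the collective retains after a disruption, relative to an undisrupted baseline. This specific formula is an illustrative assumption, not the paper's definition.

```python
def resilience_index(performance, t_disrupt, baseline=1.0):
    """Illustrative resilience metric (not the paper's exact definition):
    post-disruption performance integrated over time, normalised by what
    an unaffected system would have accumulated over the same window."""
    post = performance[t_disrupt:]
    return sum(post) / (baseline * len(post))

# The system drops to 0.2 at the disruption, then recovers to baseline.
perf = [1.0, 1.0, 1.0, 0.2, 0.4, 0.7, 0.9, 1.0, 1.0, 1.0]
print(round(resilience_index(perf, t_disrupt=3), 3))  # → 0.743
```

A fast, shallow dip scores close to 1.0; a deep or slow recovery scores lower, capturing the resist/recover aspects the entry mentions.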
arXiv Detail & Related papers (2024-09-20T03:28:48Z)
- Network bottlenecks and task structure control the evolution of interpretable learning rules in a foraging agent [0.0]
We study meta-learning via evolutionary optimization of simple reward-modulated plasticity rules in embodied agents.
We show that unconstrained meta-learning leads to the emergence of diverse plasticity rules.
Our findings indicate that the meta-learning of plasticity rules is very sensitive to various parameters, with this sensitivity possibly reflected in the learning rules found in biological networks.
arXiv Detail & Related papers (2024-03-20T14:57:02Z)
- Adaptive Control Strategy for Quadruped Robots in Actuator Degradation Scenarios [16.148061952978246]
This paper introduces a teacher-student framework rooted in reinforcement learning, named Actuator Degradation Adaptation Transformer (ADAPT).
ADAPT produces a unified control strategy, enabling the robot to sustain its locomotion and perform tasks despite sudden joint actuator faults.
Empirical evaluations on the Unitree A1 platform validate the deployability and effectiveness of ADAPT on real-world quadruped robots.
arXiv Detail & Related papers (2023-12-29T14:04:45Z)
- Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
arXiv Detail & Related papers (2023-05-20T17:20:12Z)
- Developing Decentralised Resilience to Malicious Influence in Collective Perception Problem [0.7734726150561088]
In collective decision-making, designing algorithms that use only local information to effect swarm-level behaviour is a non-trivial problem.
We used machine learning techniques to teach swarm members to map their local perceptions of the environment to an optimal action.
We extended previous approaches by creating a curriculum that taught agents resilience to malicious influence.
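The collective perception setting with malicious influence can be illustrated with a toy vote: honest agents sense a binary environment feature with some noise, malicious agents always report the opposite, and the swarm aggregates by majority. This hand-coded vote is an illustrative assumption; the paper instead learns the perception-to-action mapping.

```python
import random

def swarm_decision(n_honest, n_malicious, truth=1, noise=0.1, seed=42):
    """Toy collective-perception majority vote (illustrative, not the
    paper's learned policy): honest agents report the true feature with
    probability 1 - noise, malicious agents always report its opposite."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_honest):
        correct = rng.random() >= noise
        votes.append(truth if correct else 1 - truth)
    votes += [1 - truth] * n_malicious  # adversaries invert the truth
    return int(sum(votes) * 2 > len(votes))  # majority of 1-votes wins

# 20 honest agents outvote 5 malicious ones despite 10% sensing noise.
print(swarm_decision(n_honest=20, n_malicious=5))
```

The vote stays correct only while honest agents comfortably outnumber adversaries, which is the failure mode the curriculum-based training targets.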
arXiv Detail & Related papers (2022-11-06T08:53:33Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- L2Explorer: A Lifelong Reinforcement Learning Assessment Environment [49.40779372040652]
Reinforcement learning solutions tend to generalize poorly when exposed to new tasks outside of the data distribution they are trained on.
We introduce a framework for continual reinforcement-learning development and assessment using Lifelong Learning Explorer (L2Explorer).
L2Explorer is a new, Unity-based, first-person 3D exploration environment that can be continuously reconfigured to generate a range of tasks and task variants structured into complex evaluation curricula.
arXiv Detail & Related papers (2022-03-14T19:20:26Z)
- Learning Compliance Adaptation in Contact-Rich Manipulation [81.40695846555955]
We propose a novel approach for learning predictive models of force profiles required for contact-rich tasks.
The approach combines an anomaly detection based on Bidirectional Gated Recurrent Units (Bi-GRU) and an adaptive force/impedance controller.
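The prediction-error anomaly detection this entry describes can be sketched without a recurrent network: predict the next force reading from a recent window and raise an alarm when the measurement deviates too far. A moving-average predictor stands in for the paper's Bi-GRU here; the window and threshold are illustrative assumptions.

```python
def detect_contact_anomaly(forces, window=5, threshold=3.0):
    """Prediction-error anomaly detection sketch (a moving average stands
    in for the paper's Bi-GRU force model): flag timesteps where the
    measured force deviates from the recent average by > threshold."""
    alarms = []
    for t in range(window, len(forces)):
        predicted = sum(forces[t - window:t]) / window
        if abs(forces[t] - predicted) > threshold:
            alarms.append(t)
    return alarms

# Steady ~2 N contact force, then a spike at t=8 (e.g. an unexpected
# collision during a contact-rich task).
f = [2.0, 2.1, 1.9, 2.0, 2.1, 2.0, 1.9, 2.0, 9.5, 2.1]
print(detect_contact_anomaly(f))  # → [8]
```

In the full approach such an alarm would trigger the adaptive force/impedance controller to soften the interaction rather than merely log the event.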
arXiv Detail & Related papers (2020-05-01T05:23:34Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming in physical noise patterns on the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.