Resilient robot teams: a review integrating decentralised control,
change-detection, and learning
- URL: http://arxiv.org/abs/2204.10063v1
- Date: Thu, 21 Apr 2022 12:51:27 GMT
- Title: Resilient robot teams: a review integrating decentralised control,
change-detection, and learning
- Authors: David M. Bossens, Sarvapali Ramchurn, Danesh Tarapore
- Score: 10.312968200748116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose of review: This paper reviews opportunities and challenges for
decentralised control, change-detection, and learning in the context of
resilient robot teams.
Recent findings: Exogenous fault detection methods can provide a generic
detection or a specific diagnosis with a recovery solution. Robot teams can
perform active and distributed sensing for detecting changes in the
environment, including identifying and tracking dynamic anomalies, as well as
collaboratively mapping dynamic environments. Resilient methods for
decentralised control have been developed in learning
perception-action-communication loops, multi-agent reinforcement learning,
embodied evolution, offline evolution with online adaptation, explicit task
allocation, and stigmergy in swarm robotics.
Summary: Remaining challenges for resilient robot teams are integrating
change-detection and trial-and-error learning methods, obtaining reliable
performance evaluations under constrained evaluation time, improving the safety
of resilient robot teams, theoretical results demonstrating rapid adaptation to
given environmental perturbations, and designing realistic and compelling case
studies.
Related papers
- Network bottlenecks and task structure control the evolution of interpretable learning rules in a foraging agent [0.0]
We study meta-learning via evolutionary optimization of simple reward-modulated plasticity rules in embodied agents.
We show that unconstrained meta-learning leads to the emergence of diverse plasticity rules.
Our findings indicate that the meta-learning of plasticity rules is very sensitive to various parameters, with this sensitivity possibly reflected in the learning rules found in biological networks.
arXiv Detail & Related papers (2024-03-20T14:57:02Z)
- Adaptive Control Strategy for Quadruped Robots in Actuator Degradation Scenarios [16.148061952978246]
This paper introduces a teacher-student framework rooted in reinforcement learning, named the Actuator Degradation Adaptation Transformer (ADAPT).
ADAPT produces a unified control strategy, enabling the robot to sustain its locomotion and perform tasks despite sudden joint actuator faults.
Empirical evaluations on the Unitree A1 platform validate the deployability and effectiveness of ADAPT on real-world quadruped robots.
arXiv Detail & Related papers (2023-12-29T14:04:45Z)
- Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
arXiv Detail & Related papers (2023-05-20T17:20:12Z)
- Developing Decentralised Resilience to Malicious Influence in Collective Perception Problem [0.7734726150561088]
In collective decision-making, designing algorithms that use only local information to effect swarm-level behaviour is a non-trivial problem.
We used machine learning techniques to teach swarm members to map their local perceptions of the environment to an optimal action.
We extended previous approaches by creating a curriculum that taught agents resilience to malicious influence.
arXiv Detail & Related papers (2022-11-06T08:53:33Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not yield a favourable robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- L2Explorer: A Lifelong Reinforcement Learning Assessment Environment [49.40779372040652]
Reinforcement learning solutions tend to generalize poorly when exposed to new tasks outside of the data distribution they are trained on.
We introduce a framework for continual reinforcement-learning development and assessment using the Lifelong Learning Explorer (L2Explorer).
L2Explorer is a new, Unity-based, first-person 3D exploration environment that can be continuously reconfigured to generate a range of tasks and task variants structured into complex evaluation curricula.
arXiv Detail & Related papers (2022-03-14T19:20:26Z)
- Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments [5.677685109155077]
Modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision.
We propose a deep learning model, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments.
Our experiments on real field robot data demonstrate superior failure identification performance compared to baseline methods, and show that our model learns interpretable representations.
arXiv Detail & Related papers (2020-12-15T21:59:58Z)
- Learning Compliance Adaptation in Contact-Rich Manipulation [81.40695846555955]
We propose a novel approach for learning predictive models of force profiles required for contact-rich tasks.
The approach combines an anomaly detection based on Bidirectional Gated Recurrent Units (Bi-GRU) and an adaptive force/impedance controller.
arXiv Detail & Related papers (2020-05-01T05:23:34Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by injecting physical noise patterns at selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.