Explainable AI-Enhanced Supervisory Control for Robust Multi-Agent Robotic Systems
- URL: http://arxiv.org/abs/2509.15491v1
- Date: Thu, 18 Sep 2025 23:59:13 GMT
- Title: Explainable AI-Enhanced Supervisory Control for Robust Multi-Agent Robotic Systems
- Authors: Reza Pirayeshshirazinezhad, Nima Fathi
- Abstract summary: We present an explainable AI-enhanced supervisory control framework for multi-agent robotics. We validated the approach in two contrasting domains: spacecraft formation flying and autonomous underwater vehicles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an explainable AI-enhanced supervisory control framework for multi-agent robotics that combines (i) a timed-automata supervisor for safe, auditable mode switching, (ii) robust continuous control (Lyapunov-based controller for large-angle maneuvers; sliding-mode controller (SMC) with boundary layers for precision and disturbance rejection), and (iii) an explainable predictor that maps mission context to gains and expected performance (energy, error). Monte Carlo-driven optimization provides the training data, enabling transparent real-time trade-offs. We validated the approach in two contrasting domains, spacecraft formation flying and autonomous underwater vehicles (AUVs). Despite different environments (gravity/actuator bias vs. hydrodynamic drag/currents), both share uncertain six degrees of freedom (6-DOF) rigid-body dynamics, relative motion, and tight tracking needs, making them representative of general robotic systems. In the space mission, the supervisory logic selects parameters that meet mission criteria. In AUV leader-follower tests, the same SMC structure maintains a fixed offset under stochastic currents with bounded steady error. In spacecraft validation, the SMC controller achieved submillimeter alignment with 21.7% lower tracking error and 81.4% lower energy consumption compared to Proportional-Derivative (PD) controller baselines. At the same time, in AUV tests, SMC maintained bounded errors under stochastic currents. These results highlight both the portability and the interpretability of the approach for safety-critical, resource-constrained multi-agent robotics.
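The abstract's key chattering-suppression mechanism, a sliding-mode controller with a boundary layer, can be illustrated with a minimal one-axis sketch. This is not the paper's 6-DOF implementation; the double-integrator model, the gains `lam`, `k`, and the boundary-layer width `phi` are illustrative placeholders.

```python
import numpy as np

def smc_boundary_layer(x, x_dot, x_ref, x_ref_dot, x_ref_ddot,
                       lam=2.0, k=5.0, phi=0.05):
    """One-axis sliding-mode control law with a boundary layer.

    The sliding surface is s = e_dot + lam * e. Inside the boundary
    layer |s| < phi, the discontinuous sign(s) switching term is
    replaced by the saturated linear term s/phi to suppress chattering.
    """
    e = x - x_ref
    e_dot = x_dot - x_ref_dot
    s = e_dot + lam * e
    u_eq = x_ref_ddot - lam * e_dot               # equivalent control for x_ddot = u
    u_sw = -k * np.clip(s / phi, -1.0, 1.0)       # sat(s/phi) instead of sign(s)
    return u_eq + u_sw

# Track a sinusoidal reference under a constant unmodeled disturbance.
dt, T = 1e-3, 5.0
x, x_dot = 1.0, 0.0                               # start offset from the reference
for i in range(int(T / dt)):
    t = i * dt
    x_ref, x_ref_dot, x_ref_ddot = np.sin(t), np.cos(t), -np.sin(t)
    u = smc_boundary_layer(x, x_dot, x_ref, x_ref_dot, x_ref_ddot)
    x_ddot = u + 0.3                              # disturbance, bounded by k
    x_dot += x_ddot * dt
    x += x_dot * dt

print(abs(x - np.sin(T)))                         # steady error stays small and bounded
```

As long as the switching gain `k` dominates the disturbance bound, the state reaches the boundary layer and the tracking error remains bounded, which is the behavior the abstract reports for the AUV tests under stochastic currents.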
Related papers
- STAR-RIS-assisted Collaborative Beamforming for Low-altitude Wireless Networks [58.13757830013997]
Wireless networks based on uncrewed aerial vehicles (UAVs) offer high mobility, flexibility, and coverage for urban communications. They face severe signal attenuation in dense environments due to obstructions. To address this critical issue, we consider STAR-RIS-assisted collaborative beamforming of UAVs for low-altitude wireless networks.
arXiv Detail & Related papers (2025-10-25T01:28:37Z) - ASTREA: Introducing Agentic Intelligence for Orbital Thermal Autonomy [51.56484100374058]
ASTREA is the first agentic system deployed on flight-heritage hardware (TRL 9) for autonomous spacecraft operations. We integrate a resource-constrained Large Language Model (LLM) agent with a reinforcement learning controller in an asynchronous architecture tailored for space-qualified platforms.
arXiv Detail & Related papers (2025-09-16T08:52:13Z) - Decentralized Aerial Manipulation of a Cable-Suspended Load using Multi-Agent Reinforcement Learning [16.195474619148793]
This paper presents the first decentralized method to enable real-world 6-DoF manipulation of a cable-suspended load using a team of Micro-Aerial Vehicles (MAVs). Our method leverages multi-agent reinforcement learning (MARL) to train an outer-loop control policy for each MAV. We validate our method in various real-world experiments, including full-pose control under load model uncertainties.
arXiv Detail & Related papers (2025-08-02T23:52:33Z) - AI/ML Life Cycle Management for Interoperable AI Native RAN [50.61227317567369]
Artificial intelligence (AI) and machine learning (ML) models are rapidly permeating the 5G Radio Access Network (RAN). These developments lay the foundation for AI-native transceivers as a key enabler for 6G.
arXiv Detail & Related papers (2025-07-24T16:04:59Z) - LLM Meets the Sky: Heuristic Multi-Agent Reinforcement Learning for Secure Heterogeneous UAV Networks [57.27815890269697]
This work focuses on maximizing the secrecy rate in heterogeneous UAV networks (HetUAVNs) under energy constraints. We introduce a Large Language Model (LLM)-guided multi-agent learning approach. Results show that our method outperforms existing baselines in secrecy and energy efficiency.
arXiv Detail & Related papers (2025-07-23T04:22:57Z) - Regulation Compliant AI for Fusion: Real-Time Image Analysis-Based Control of Divertor Detachment in Tokamaks [0.981937495272719]
This study implements and validates a real-time, AI-enabled, linear and interpretable control system for successful divertor detachment control. We demonstrate feedback divertor detachment control with a mean absolute difference of 2% from the target for both detachment and reattachment.
arXiv Detail & Related papers (2025-06-21T22:21:26Z) - Motion Control in Multi-Rotor Aerial Robots Using Deep Reinforcement Learning [0.0]
This paper investigates the application of Deep Reinforcement Learning (DRL) to address motion control challenges in drones for additive manufacturing (AM). We propose a DRL framework that learns adaptable control policies for multi-rotor drones performing waypoint navigation in AM tasks.
arXiv Detail & Related papers (2025-02-09T19:00:16Z) - Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WEC) must handle multiple simultaneous waves coming from different directions, called spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z) - Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Towards Safe Control of Continuum Manipulator Using Shielded Multiagent Reinforcement Learning [1.2647816797166165]
The control of the robot is formulated as a one-DoF, one agent problem in the MADQN framework to improve the learning efficiency.
Shielded MADQN enabled the robot to perform point and trajectory tracking with submillimeter root mean square errors under external loads.
arXiv Detail & Related papers (2021-06-15T05:55:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.