Enhancing Robustness of LLM-Driven Multi-Agent Systems through Randomized Smoothing
- URL: http://arxiv.org/abs/2507.04105v1
- Date: Sat, 05 Jul 2025 17:26:08 GMT
- Title: Enhancing Robustness of LLM-Driven Multi-Agent Systems through Randomized Smoothing
- Authors: Jinwei Hu, Yi Dong, Zhengtao Ding, Xiaowei Huang
- Abstract summary: This paper presents a framework for enhancing the safety of large language model (LLM) empowered multi-agent systems (MAS) in safety-critical domains such as aerospace. We apply randomized smoothing, a statistical robustness certification technique, to the MAS consensus context, enabling probabilistic guarantees on agent decisions under adversarial influence.
- Score: 13.997409139696556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a defense framework for enhancing the safety of large language model (LLM) empowered multi-agent systems (MAS) in safety-critical domains such as aerospace. We apply randomized smoothing, a statistical robustness certification technique, to the MAS consensus context, enabling probabilistic guarantees on agent decisions under adversarial influence. Unlike traditional verification methods, our approach operates in black-box settings and employs a two-stage adaptive sampling mechanism to balance robustness and computational efficiency. Simulation results demonstrate that our method effectively prevents the propagation of adversarial behaviors and hallucinations while maintaining consensus performance. This work provides a practical and scalable path toward safe deployment of LLM-based MAS in real-world, high-stakes environments.
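Since the approach is black-box, its core loop can be illustrated with a thin wrapper around any agent decision function. The sketch below is a minimal, hypothetical illustration of randomized smoothing with two-stage adaptive sampling, not the authors' released code: the names `agent_decide`, `sigma`, `n0`, `n1`, the consensus threshold `tau`, and the Hoeffding-style confidence bound are all assumptions made for the example.

```python
import math
from collections import Counter

import numpy as np


def hoeffding_lower_bound(k: int, n: int, alpha: float) -> float:
    """One-sided (1 - alpha) lower confidence bound on a Bernoulli mean."""
    return k / n - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))


def sample_votes(agent_decide, state, n, sigma, rng):
    """Query the black-box agent under isotropic Gaussian input perturbations."""
    votes = Counter()
    for _ in range(n):
        noisy_state = state + rng.normal(0.0, sigma, size=state.shape)
        votes[agent_decide(noisy_state)] += 1
    return votes


def smoothed_decision(agent_decide, state, sigma=0.25, n0=64, n1=1024,
                      alpha=0.01, tau=0.5, seed=0):
    """Two-stage smoothed decision: cheap first pass, larger pass only if needed.

    Returns (decision, certified), where `certified` means the lower confidence
    bound on the top vote's probability exceeds the consensus threshold `tau`.
    """
    rng = np.random.default_rng(seed)

    # Stage 1: small sample; accept early if the top decision is clearly dominant.
    votes = sample_votes(agent_decide, state, n0, sigma, rng)
    top, k = votes.most_common(1)[0]
    if hoeffding_lower_bound(k, n0, alpha) > tau:
        return top, True

    # Stage 2: spend the larger sampling budget only on uncertain inputs.
    votes.update(sample_votes(agent_decide, state, n1, sigma, rng))
    top, k = votes.most_common(1)[0]
    return top, hoeffding_lower_bound(k, n0 + n1, alpha) > tau


# Example use with a toy decision rule standing in for an LLM-driven agent:
# decision, certified = smoothed_decision(
#     lambda s: "safe" if s.mean() > 0 else "abort", np.zeros(4))
```

In a MAS consensus step, each agent's raw (possibly adversarially influenced) decision would play the role of `agent_decide`, and only decisions whose smoothed vote clears the threshold would propagate to the group consensus; the exact sampling schedule and certification bound used in the paper may differ from this sketch.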
Related papers
- Efficient Solution and Learning of Robust Factored MDPs [57.2416302384766]
Learning r-MDPs from interactions with an unknown environment enables the synthesis of robust policies with provable guarantees on performance. We propose novel methods for solving and learning r-MDPs based on factored state representations.
arXiv Detail & Related papers (2025-08-01T15:23:15Z) - PRM-Free Security Alignment of Large Models via Red Teaming and Adversarial Training [0.5439020425819]
Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse applications, yet they pose significant security risks that threaten their safe deployment in critical domains. This paper presents a novel PRM-free security alignment framework that leverages automated red teaming and adversarial training to achieve robust security guarantees while maintaining computational efficiency.
arXiv Detail & Related papers (2025-07-14T17:41:12Z) - ICLShield: Exploring and Mitigating In-Context Learning Backdoor Attacks [61.06621533874629]
In-context learning (ICL) has demonstrated remarkable success in large language models (LLMs). In this paper, we propose, for the first time, the dual-learning hypothesis, which posits that LLMs simultaneously learn both the task-relevant latent concepts and backdoor latent concepts. Motivated by these findings, we propose ICLShield, a defense mechanism that dynamically adjusts the concept preference ratio.
arXiv Detail & Related papers (2025-07-02T03:09:20Z) - Learning Efficient Flocking Control based on Gibbs Random Fields [8.715391538937707]
A multi-agent reinforcement learning framework built on Gibbs Random Fields (GRFs) is proposed. An action attention module is introduced to implicitly anticipate the motion intentions of neighboring robots. The proposed framework enables learning an efficient distributed control policy for multi-robot systems in challenging environments, with a success rate of around 99%.
arXiv Detail & Related papers (2025-02-05T08:27:58Z) - Scalable Safe Multi-Agent Reinforcement Learning for Multi-Agent System [1.0124625066746598]
Existing Multi-Agent Reinforcement Learning (MARL) algorithms that rely solely on reward shaping are ineffective in ensuring safety. We propose a novel framework, Scalable Safe MARL (SS-MARL), to enhance the safety and scalability of MARL methods. We show that SS-MARL achieves a better trade-off between optimality and safety compared to baselines, and its scalability significantly outperforms the latest methods in scenarios with a large number of agents.
arXiv Detail & Related papers (2025-01-23T15:01:19Z) - Self-Supervised Inference of Agents in Trustless Environments [44.99833362998488]
We propose a novel approach where agents can form swarms to produce high-quality responses effectively.
This is accomplished by utilizing agents capable of data inference and ranking.
We show that our approach is an order of magnitude faster than other trustless inference strategies, reaching less than 125 ms validation latency.
arXiv Detail & Related papers (2024-09-12T20:32:07Z) - COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks [24.37030085306459]
We propose COMMIT, the first robustness certification framework to certify the robustness of multi-sensor fusion systems against semantic attacks.
In particular, we propose a practical anisotropic noise mechanism that leverages randomized smoothing with multi-modal data (an illustrative sketch of this idea appears after this list).
We show that the certification for MSF models is at most 48.39% higher than that of single-modal models, which validates the advantages of MSF models.
arXiv Detail & Related papers (2024-03-04T18:57:11Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - Constrained Policy Optimization via Bayesian World Models [79.0077602277004]
LAMBDA is a model-based approach for policy optimization in safety-critical tasks modeled via constrained Markov decision processes.
We demonstrate LAMBDA's state-of-the-art performance on the Safety-Gym benchmark suite in terms of sample efficiency and constraint violation.
arXiv Detail & Related papers (2022-01-24T17:02:22Z) - SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
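As a companion to the COMMIT entry above, the following is a hypothetical sketch of what per-modality (anisotropic) noise in randomized smoothing can look like for a two-sensor fusion model; `fusion_model`, `sigma_cam`, and `sigma_lidar` are illustrative names and not part of COMMIT's actual API.

```python
from collections import Counter

import numpy as np


def smooth_fusion_predict(fusion_model, camera, lidar,
                          sigma_cam=0.1, sigma_lidar=0.02, n=200, seed=0):
    """Majority vote of a fused model under per-modality Gaussian noise."""
    rng = np.random.default_rng(seed)
    votes = Counter()
    for _ in range(n):
        cam_noisy = camera + rng.normal(0.0, sigma_cam, size=camera.shape)
        lidar_noisy = lidar + rng.normal(0.0, sigma_lidar, size=lidar.shape)
        votes[fusion_model(cam_noisy, lidar_noisy)] += 1
    return votes.most_common(1)[0]  # (predicted label, vote count)
```

Using a smaller noise scale for geometrically precise modalities (here LiDAR) than for appearance modalities (here camera) is one plausible reading of the anisotropic mechanism; a certified guarantee would additionally require a confidence bound on the top vote, as in the main sketch above.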
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.