RSFuzz: A Robustness-Guided Swarm Fuzzing Framework Based on Behavioral Constraints
- URL: http://arxiv.org/abs/2409.04736v2
- Date: Fri, 03 Oct 2025 12:14:56 GMT
- Title: RSFuzz: A Robustness-Guided Swarm Fuzzing Framework Based on Behavioral Constraints
- Authors: Ruoyu Zhou, Zhiwei Zhang, Haocheng Han, Xiaodong Zhang, Zehan Chen, Jun Sun, Yulong Shen, Dehai Xu
- Abstract summary: RSFuzz is a robustness-guided swarm fuzzing framework designed to detect logical vulnerabilities in multi-robot systems. We construct two swarm fuzzing schemes, Single Attacker Fuzzing (SA-Fuzzing) and Multiple Attacker Fuzzing (MA-Fuzzing). Results show RSFuzz outperforms the state-of-the-art with an average improvement of 17.75% in effectiveness and a 38.4% increase in efficiency.
- Score: 19.659469020494022
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-robot swarms play an essential role in complex missions including battlefield reconnaissance, agricultural pest monitoring, and disaster search and rescue. Unfortunately, given the complexity of swarm algorithms, logical vulnerabilities are inevitable and often lead to severe safety and security consequences. Although various methods have been presented for detecting logical vulnerabilities through software testing, these techniques face significant challenges when used in swarm environments: 1) due to the swarm's vast composable parameter space, it is extremely difficult to generate failure-triggering scenarios, which is crucial to effectively exposing logical vulnerabilities; 2) because of the swarm's high flexibility and dynamism, it is challenging to model and evaluate the global swarm state, particularly in terms of cooperative behaviors, which makes it difficult to detect logical vulnerabilities. In this work, we propose RSFuzz, a robustness-guided swarm fuzzing framework designed to detect logical vulnerabilities in multi-robot systems. It leverages the robustness of behavioral constraints to quantitatively evaluate the swarm state and guide the generation of failure-triggering scenarios. In addition, RSFuzz identifies and targets key swarm nodes for perturbation, effectively reducing the input space. On top of the RSFuzz framework, we construct two swarm fuzzing schemes, Single Attacker Fuzzing (SA-Fuzzing) and Multiple Attacker Fuzzing (MA-Fuzzing), which employ single and multiple attackers, respectively, during fuzzing to disturb swarm mission execution. We evaluated RSFuzz's performance with three popular swarm algorithms in simulated environments. The results show that RSFuzz outperforms the state-of-the-art with an average improvement of 17.75% in effectiveness and a 38.4% increase in efficiency. We validated some of the vulnerabilities in the real world.
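As a concrete illustration of the guidance idea, the sketch below scores a scenario by the signed robustness of one behavioral constraint (minimum inter-robot spacing) and greedily keeps attacker inputs that drive that robustness down. The one-dimensional swarm, the constraint, and the mutation scheme are hypothetical stand-ins chosen for illustration, not RSFuzz's actual components:

```python
import math
import random

D_SAFE = 1.0  # hypothetical safety distance for the collision-avoidance constraint


def robustness(positions):
    """STL-style robustness of 'all pairwise distances >= D_SAFE'.
    Positive: satisfied with margin; negative: violated (failure found)."""
    dists = [math.dist(p, q)
             for i, p in enumerate(positions) for q in positions[i + 1:]]
    return min(dists) - D_SAFE


def simulate(attacker_offset, n=5):
    """Stand-in for the swarm simulator: robots on a line, with the
    attacker-controlled node displaced by attacker_offset."""
    positions = [(2.0 * i, 0.0) for i in range(n)]
    x, y = positions[0]
    positions[0] = (x + attacker_offset, y)
    return positions


def sa_fuzz(rounds=200, seed=0):
    """Single-attacker fuzzing loop: mutate the attacker's input and keep
    whichever scenario drives the constraint's robustness lowest."""
    rng = random.Random(seed)
    best_input = 0.0
    best_rob = robustness(simulate(best_input))
    for _ in range(rounds):
        candidate = best_input + rng.uniform(-0.5, 0.5)
        rob = robustness(simulate(candidate))
        if rob < best_rob:  # robustness guides scenario generation
            best_input, best_rob = candidate, rob
    return best_input, best_rob
```

A negative robustness value marks a failure-triggering scenario; MA-Fuzzing would run the same loop over a vector of attacker inputs rather than a single one.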
Related papers
- EmoRAG: Evaluating RAG Robustness to Symbolic Perturbations [57.97838850473147]
Retrieval-Augmented Generation (RAG) systems are increasingly central to robust AI. Our study unveils a critical, overlooked vulnerability: their susceptibility to subtle symbolic perturbations. We demonstrate that injecting a single emoticon into a query makes it nearly 100% likely to retrieve semantically unrelated texts.
arXiv Detail & Related papers (2025-12-01T06:53:49Z)
- When UAV Swarm Meets IRS: Collaborative Secure Communications in Low-altitude Wireless Networks [68.45202147860537]
Low-altitude wireless networks (LAWNs) provide enhanced coverage, reliability, and throughput for diverse applications. These networks face significant security vulnerabilities from both known and potential unknown eavesdroppers. We propose a novel secure communication framework for LAWNs where the selected UAVs within a swarm function as a virtual antenna array.
arXiv Detail & Related papers (2025-10-25T02:02:14Z)
- DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models [50.21378052667732]
We conduct an in-depth analysis of dLLM vulnerabilities to jailbreak attacks across two distinct dimensions: intra-step and inter-step dynamics. We propose DiffuGuard, a training-free defense framework that addresses vulnerabilities through a dual-stage approach.
arXiv Detail & Related papers (2025-09-29T05:17:10Z)
- Agent4FaceForgery: Multi-Agent LLM Framework for Realistic Face Forgery Detection [108.5042835056188]
This work introduces Agent4FaceForgery to address two fundamental problems: how to capture the diverse intents and iterative processes of human forgery creation, and how to model the complex, often adversarial, text-image interactions that accompany forgeries in social media.
arXiv Detail & Related papers (2025-09-16T01:05:01Z)
- LLAMA: Multi-Feedback Smart Contract Fuzzing Framework with LLM-Guided Seed Generation [56.84049855266145]
We propose a Multi-feedback Smart Contract Fuzzing framework (LLAMA) that integrates evolutionary mutation strategies and hybrid testing techniques. LLAMA achieves 91% instruction coverage and 90% branch coverage, while detecting 132 out of 148 known vulnerabilities. These results highlight LLAMA's effectiveness, adaptability, and practicality in real-world smart contract security testing scenarios.
arXiv Detail & Related papers (2025-07-16T09:46:58Z)
- Contrastive-KAN: A Semi-Supervised Intrusion Detection Framework for Cybersecurity with Scarce Labeled Data [0.0]
We propose a real-time intrusion detection system based on a semi-supervised contrastive learning framework. Our method leverages abundant unlabeled data to effectively distinguish between normal and attack behaviors. Experimental results show that our method outperforms existing contrastive learning-based approaches.
arXiv Detail & Related papers (2025-07-14T21:02:34Z)
- Hybrid Approach to Directed Fuzzing [0.0]
We propose a hybrid approach to directed fuzzing with a novel seed scheduling algorithm. We implement our approach in the Sydr-Fuzz tool, using LibAFL-DiFuzz as the directed fuzzer and Sydr as the dynamic symbolic executor.
arXiv Detail & Related papers (2025-07-07T10:29:16Z)
- Expert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection [38.083049237330826]
This study explores the use of Large Language Models (LLMs) in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs). Our results indicate that while zero-shot prompting performs poorly, few-shot prompting significantly enhances classification performance. Challenges such as model reliability, interpretability, and adversarial robustness remain critical areas for future research.
arXiv Detail & Related papers (2025-06-11T18:43:51Z)
- Directed Greybox Fuzzing via Large Language Model [5.667013605202579]
HGFuzzer is an automatic framework that transforms path constraint problems into targeted code generation tasks. We evaluate HGFuzzer on 20 real-world vulnerabilities, successfully triggering 17, including 11 within the first minute. HGFuzzer discovered 9 previously unknown vulnerabilities, all of which were assigned CVE IDs.
arXiv Detail & Related papers (2025-05-06T11:04:07Z)
- Runtime Anomaly Detection for Drones: An Integrated Rule-Mining and Unsupervised-Learning Approach [6.924083445159127]
UAVs depend on multiple sensor inputs, with faults potentially leading to physical instability and serious safety concerns. Recent anomaly detection methods based on LSTM neural networks have shown promising results, but three challenges persist. Motivated by these challenges, this paper introduces RADD, an integrated approach to anomaly detection in drones.
arXiv Detail & Related papers (2025-05-03T23:48:50Z)
- Reasoning-Augmented Conversation for Multi-Turn Jailbreak Attacks on Large Language Models [53.580928907886324]
Reasoning-Augmented Conversation is a novel multi-turn jailbreak framework.
It reformulates harmful queries into benign reasoning tasks.
We show that RACE achieves state-of-the-art attack effectiveness in complex conversational scenarios.
arXiv Detail & Related papers (2025-02-16T09:27:44Z)
- WILT: A Multi-Turn, Memorization-Robust Inductive Logic Benchmark for LLMs [0.8883751685905831]
We introduce the Wason Inductive Logic Test (WILT), a simple yet challenging multi-turn reasoning benchmark designed to resist memorization.
Our findings reveal that LLMs struggle with this task, exhibiting distinct strengths and weaknesses.
Despite these variations, the best-performing model achieves only 28% accuracy, highlighting a significant gap in LLM performance on complex multi-turn reasoning tasks.
arXiv Detail & Related papers (2024-10-14T18:29:13Z)
- FFAA: Multimodal Large Language Model based Explainable Open-World Face Forgery Analysis Assistant [59.2438504610849]
We introduce FFAA: Face Forgery Analysis Assistant, consisting of a fine-tuned Multimodal Large Language Model (MLLM) and a Multi-answer Intelligent Decision System (MIDS).
Our method not only provides user-friendly and explainable results but also significantly boosts accuracy and robustness compared to previous methods.
arXiv Detail & Related papers (2024-08-19T15:15:20Z)
- High-Dimensional Fault Tolerance Testing of Highly Automated Vehicles Based on Low-Rank Models [39.139025989575686]
Fault Injection (FI) testing is conducted to evaluate the safety level of HAVs.
To fully cover test cases, various driving scenarios and fault settings should be considered.
We propose to accelerate FI testing under the low-rank Smoothness Regularized Matrix Factorization framework.
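The low-rank idea can be illustrated with a plain matrix-factorization sketch: run only a few (scenario, fault) injections, fit the outcome matrix as M ≈ U Vᵀ, and predict the untested combinations. The toy matrix and the unregularized SGD factorization below are assumptions for illustration; the paper's Smoothness Regularized Matrix Factorization adds a smoothness term not shown here:

```python
import random


def low_rank_complete(samples, n_rows, n_cols, rank=1, iters=3000, lr=0.02, seed=0):
    """Recover a scenario-by-fault outcome matrix from a few tested entries
    via SGD on a rank-`rank` factorization M ~= U @ V.T."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_rows)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_cols)]
    for _ in range(iters):
        for i, j, v in samples:  # only the entries actually tested
            pred = sum(U[i][k] * V[j][k] for k in range(rank))
            err = pred - v
            for k in range(rank):
                u, w = U[i][k], V[j][k]  # use pre-update values for both factors
                U[i][k] -= lr * err * w
                V[j][k] -= lr * err * u
    # predict every entry, including the untested ones
    return [[sum(U[i][k] * V[j][k] for k in range(rank)) for j in range(n_cols)]
            for i in range(n_rows)]
```

With a genuinely low-rank outcome structure, the untested entries are recovered from the tested ones, which is what lets FI testing skip most of the scenario-fault grid.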
arXiv Detail & Related papers (2024-07-28T14:27:13Z)
- Real-Time Anomaly Detection and Reactive Planning with Large Language Models [18.57162998677491]
Foundation models, e.g., large language models (LLMs), trained on internet-scale data possess zero-shot capabilities.
We present a two-stage reasoning framework that incorporates the judgement regarding potential anomalies into a safe control framework.
This enables our monitor to improve the trustworthiness of dynamic robotic systems, such as quadrotors or autonomous vehicles.
arXiv Detail & Related papers (2024-07-11T17:59:22Z)
- Corpus Poisoning via Approximate Greedy Gradient Descent [48.5847914481222]
We propose Approximate Greedy Gradient Descent, a new attack on dense retrieval systems based on the widely used HotFlip method for generating adversarial passages.
We show that our method achieves a high attack success rate on several datasets and using several retrievers, and can generalize to unseen queries and new domains.
arXiv Detail & Related papers (2024-06-07T17:02:35Z)
- Secure Hierarchical Federated Learning in Vehicular Networks Using Dynamic Client Selection and Anomaly Detection [10.177917426690701]
Hierarchical Federated Learning (HFL) faces the challenge of adversarial or unreliable vehicles in vehicular networks.
Our study introduces a novel framework that integrates dynamic vehicle selection and robust anomaly detection mechanisms.
Our proposed algorithm demonstrates remarkable resilience even under intense attack conditions.
arXiv Detail & Related papers (2024-05-25T18:31:20Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Faith and Fate: Limits of Transformers on Compositionality [109.79516190693415]
We investigate the limits of transformer large language models across three representative compositional tasks.
These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer.
Our empirical findings suggest that transformer LLMs solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching.
arXiv Detail & Related papers (2023-05-29T23:24:14Z)
- Towards Efficient and Domain-Agnostic Evasion Attack with High-dimensional Categorical Inputs [33.36532022853583]
Our work targets searching for feasible adversarial perturbations to attack models with high-dimensional categorical inputs in a domain-agnostic setting.
Our proposed method, namely FEAT, treats modifying each categorical feature as pulling an arm in multi-armed bandit programming.
Our work further hints at the applicability of FEAT for assessing the adversarial vulnerability of classification systems with high-dimensional categorical inputs.
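The arm-pulling view can be sketched with a simple epsilon-greedy bandit over categorical features. The victim scoring function, the reward definition, and the epsilon-greedy policy below are illustrative assumptions, not FEAT's exact algorithm:

```python
import random


def bandit_attack(score_fn, x, vocab, rounds=300, eps=0.2, seed=0):
    """Treat 'modify categorical feature i' as pulling arm i of a bandit.
    score_fn returns the victim's confidence in the correct class; lower is a
    better attack. Each arm tracks the mean score drop its mutations yield."""
    rng = random.Random(seed)
    n = len(x)
    pulls = [0] * n
    reward = [0.0] * n            # running mean reward per arm
    x = list(x)
    best = score_fn(x)
    for _ in range(rounds):
        if rng.random() < eps or all(p == 0 for p in pulls):
            arm = rng.randrange(n)                         # explore
        else:
            arm = max(range(n), key=lambda i: reward[i])   # exploit
        old = x[arm]
        x[arm] = rng.choice(vocab[arm])  # mutate one categorical feature
        score = score_fn(x)
        gain = best - score
        pulls[arm] += 1
        reward[arm] += (gain - reward[arm]) / pulls[arm]   # update arm stats
        if score < best:
            best = score
        else:
            x[arm] = old                                   # revert unhelpful mutation
    return x, best
```

Against a toy victim whose score counts how many features still hold their original value, the bandit quickly learns which features are worth perturbing and drives the score to its minimum.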
arXiv Detail & Related papers (2022-12-13T18:45:00Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- DeFuzz: Deep Learning Guided Directed Fuzzing [41.61500799890691]
We propose a deep learning (DL) guided directed fuzzing for software vulnerability detection, named DeFuzz.
DeFuzz includes two main schemes: (1) we employ a pre-trained DL prediction model to identify the potentially vulnerable functions and the locations (i.e., vulnerable addresses).
Precisely, we employ Bidirectional-LSTM (BiLSTM) to identify attention words, and the vulnerabilities are associated with these attention words in functions.
arXiv Detail & Related papers (2020-10-23T03:44:03Z)
- Active Fuzzing for Testing and Securing Cyber-Physical Systems [8.228859318969082]
We propose active fuzzing, an automatic approach for finding test suites of packet-level CPS network attacks.
Key to our solution is the use of online active learning, which iteratively updates the models by sampling payloads.
We evaluate the efficacy of active fuzzing by implementing it for a water purification plant testbed, finding it can automatically discover a test suite of flow, pressure, and over/underflow attacks.
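The query-the-most-informative-payload idea can be sketched in miniature: the fuzzer always tests the payload its current model is least certain about. The one-parameter plant, its hidden overflow threshold, and the interval-halving "model" below are hypothetical stand-ins; the actual approach learns models of multi-field packet payloads online:

```python
def plant(payload):
    """Stand-in for the CPS testbed: an overflow occurs when the spoofed
    flow setpoint exceeds a hidden threshold (hypothetical value 0.62)."""
    return payload > 0.62


def active_fuzz(budget=12):
    """Uncertainty-driven sampling: maintain an interval [lo, hi] bracketing
    the overflow threshold and always query its midpoint, the payload about
    which the learned model is least certain."""
    lo, hi = 0.0, 1.0
    labeled = []
    for _ in range(budget):
        payload = (lo + hi) / 2          # most informative next test
        overflow = plant(payload)
        labeled.append((payload, overflow))
        if overflow:
            hi = payload                 # threshold lies below this payload
        else:
            lo = payload                 # threshold lies above this payload
    return (lo + hi) / 2, labeled
```

Twelve queries pin the threshold to within about 1/2^12 of the input range, whereas uniform random fuzzing would need far more executions for the same precision; that sample efficiency is the point of the active-learning loop.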
arXiv Detail & Related papers (2020-05-28T16:19:50Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
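The shrinking effect is easy to reproduce with a toy 1-D classifier: near the edge of the class-1 region, most of the Gaussian noise mass falls outside it, so the smoothed classifier flips its prediction even where the base classifier is correct. The interval classifier and noise level below are illustrative choices:

```python
import random


def base_classifier(x):
    """Hypothetical 1-D base classifier: predicts class 1 inside [0, 1]."""
    return 1 if 0.0 <= x <= 1.0 else 0


def smoothed_classifier(x, sigma=0.6, n=10000, seed=0):
    """Randomized smoothing: majority vote of the base classifier over
    Gaussian input noise N(0, sigma^2)."""
    rng = random.Random(seed)
    votes = sum(base_classifier(x + rng.gauss(0.0, sigma)) for _ in range(n))
    return 1 if votes > n / 2 else 0
```

With these settings, a point such as x = 0.02 is classified 1 by the base model but 0 by the smoothed one, while points near the center (e.g. x = 0.5) keep class 1: the class-1 decision region has shrunk inward, which is exactly the class-wise accuracy disparity the paper articulates.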
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.