LiTelFuzz: Swarms Fuzzing Based on Linear Temporal Logic Constraints
- URL: http://arxiv.org/abs/2409.04736v1
- Date: Sat, 7 Sep 2024 06:46:23 GMT
- Title: LiTelFuzz: Swarms Fuzzing Based on Linear Temporal Logic Constraints
- Authors: Zhiwei Zhang, Ruoyu Zhou, Haocheng Han, Xiaodong Zhang, Yulong Shen
- Abstract summary: We propose a formal verification method to discover logical flaws in multi-robot swarms.
Specifically, we abstract linear temporal logic constraints of the swarm and compute swarm robustness based on these constraints.
Based on this idea, we implement a single-attack-drone fuzzing scheme (SA-Fuzzing) and a multiple-attack-drone scheme (MA-Fuzzing) on top of LiTelFuzz.
- Score: 16.59887508016901
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-robot swarms use swarm intelligence to collaborate on tasks and play an increasingly significant role in a variety of practical scenarios. However, due to their complex design, multi-robot swarm systems often contain vulnerabilities caused by logical errors, which can severely disrupt their normal operation. Despite the significant security threat that logical vulnerabilities pose to multi-robot swarms, testing for and identifying these vulnerabilities remains difficult, and related research faces two major challenges: 1) the explosion of the input space to be tested, and 2) the lack of effective test-guidance strategies. In this paper, we overcome both challenges and propose a formal verification method to discover logical flaws in multi-robot swarms. Specifically, we abstract linear temporal logic constraints of the swarm and compute the swarm's robustness with respect to these constraints to guide fuzzing; we call this approach LiTelFuzz (Fuzzing based on Linear Temporal Logic Constraints). The core idea of LiTelFuzz is to design a metric based on behavioral constraints that assesses the state of the multi-robot swarm at different moments, and to guide fuzz testing based on the assessment results. This addresses both the excessive test-case input space and the lack of fuzzing guidance. On top of LiTelFuzz, we implement a single-attack-drone fuzzing scheme and a multiple-attack-drone scheme, named SA-Fuzzing and MA-Fuzzing, respectively. Finally, we evaluated LiTelFuzz on three popular swarm algorithms; it found vulnerabilities with an average success rate of 87.35% with SA-Fuzzing and 91.73% with MA-Fuzzing, surpassing the existing state-of-the-art fuzzer SWARMFLAWFINDER in both success rate and efficiency.
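To make the guidance idea concrete, here is a minimal, hypothetical Python sketch of robustness-guided swarm fuzzing. It is not the authors' implementation: the safety constraint (a "globally, keep minimum separation" property), the quantitative robustness semantics, and the mutation scheme are all illustrative assumptions, and `simulate` stands in for a swarm-algorithm simulator that returns per-step drone positions.

```python
import random

# Hypothetical sketch of LTL-robustness-guided swarm fuzzing (not the paper's code).
# Safety constraint: G (min pairwise distance >= D_MIN). Robustness is the worst
# separation margin over the trace: positive = satisfied with margin,
# negative = violated. The fuzzer mutates the attack drone's waypoints to minimize it.

D_MIN = 2.0  # assumed minimum safe separation between drones

def min_pairwise_distance(positions):
    """Smallest distance between any two drones at one time step."""
    return min(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        for i, (ax, ay) in enumerate(positions)
        for (bx, by) in positions[i + 1:]
    )

def robustness(trace):
    """Robustness of 'always, separation >= D_MIN': the worst margin over time."""
    return min(min_pairwise_distance(step) - D_MIN for step in trace)

def sa_fuzz(simulate, attack_waypoints, rounds=200):
    """Greedy fuzz loop: keep mutations that drive robustness toward violation."""
    best, best_rob = attack_waypoints, robustness(simulate(attack_waypoints))
    for _ in range(rounds):
        candidate = [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
                     for (x, y) in best]
        rob = robustness(simulate(candidate))
        if rob < best_rob:       # closer to breaking the separation constraint
            best, best_rob = candidate, rob
        if best_rob < 0:         # logical flaw triggered: collision course found
            break
    return best, best_rob
```

In this reading, SA-Fuzzing mutates one attack drone's trajectory while MA-Fuzzing mutates several at once; both reuse the same robustness score as guidance.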
Related papers
- WILT: A Multi-Turn, Memorization-Robust Inductive Logic Benchmark for LLMs [0.8883751685905831]
We introduce the Wason Inductive Logic Test (WILT), a simple yet challenging multi-turn reasoning benchmark designed to resist memorization.
Our findings reveal that LLMs struggle with this task, exhibiting distinct strengths and weaknesses.
Despite these variations, the best-performing model achieves only 28% accuracy, highlighting a significant gap in LLM performance on complex multi-turn reasoning tasks.
arXiv Detail & Related papers (2024-10-14T18:29:13Z)
- FFAA: Multimodal Large Language Model based Explainable Open-World Face Forgery Analysis Assistant [59.2438504610849]
We introduce FFAA: Face Forgery Analysis Assistant, consisting of a fine-tuned Multimodal Large Language Model (MLLM) and a Multi-answer Intelligent Decision System (MIDS).
Our method not only provides user-friendly and explainable results but also significantly boosts accuracy and robustness compared to previous methods.
arXiv Detail & Related papers (2024-08-19T15:15:20Z)
- High-Dimensional Fault Tolerance Testing of Highly Automated Vehicles Based on Low-Rank Models [39.139025989575686]
Fault Injection (FI) testing is conducted to evaluate the safety level of HAVs.
To fully cover test cases, various driving scenarios and fault settings should be considered.
We propose to accelerate FI testing under the low-rank Smoothness Regularized Matrix Factorization framework.
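As a rough illustration of the low-rank idea (with the assumption that scenarios and fault settings index an outcome matrix that is approximately low-rank), one can factorize the sparsely observed matrix of test outcomes and predict the untested cells; the sketch below is generic matrix completion in NumPy, not the paper's regularized framework.

```python
import numpy as np

# Generic low-rank completion of a fault-injection outcome matrix
# (rows: driving scenarios, columns: fault settings, NaN: not yet tested).
def complete(M, rank=3, iters=2000, lr=0.01, reg=0.1):
    mask = ~np.isnan(M)
    U = np.random.randn(M.shape[0], rank) * 0.1
    V = np.random.randn(M.shape[1], rank) * 0.1
    for _ in range(iters):
        # Residual on the observed cells only.
        R = np.where(mask, U @ V.T - np.nan_to_num(M), 0.0)
        # Simultaneous gradient step on both factors.
        U, V = U - lr * (R @ V + reg * U), V - lr * (R.T @ U + reg * V)
    return U @ V.T  # predicted outcome for every scenario/fault pair
```

Predicted-but-unverified failures can then be prioritized for actual fault-injection runs.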
arXiv Detail & Related papers (2024-07-28T14:27:13Z)
- Real-Time Anomaly Detection and Reactive Planning with Large Language Models [18.57162998677491]
Foundation models, e.g., large language models (LLMs) trained on internet-scale data, possess zero-shot capabilities.
We present a two-stage reasoning framework that incorporates the judgement regarding potential anomalies into a safe control framework.
This enables our monitor to improve the trustworthiness of dynamic robotic systems, such as quadrotors or autonomous vehicles.
arXiv Detail & Related papers (2024-07-11T17:59:22Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Faith and Fate: Limits of Transformers on Compositionality [109.79516190693415]
We investigate the limits of transformer large language models across three representative compositional tasks.
These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer.
Our empirical findings suggest that transformer LLMs solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching.
arXiv Detail & Related papers (2023-05-29T23:24:14Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
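The defense in question is standard adversarial training; a generic single-step sketch in PyTorch (FGSM perturbations, placeholder hyperparameters, regression setting assumed, and not the paper's exact setup) looks like this:

```python
import torch

# Generic FGSM adversarial-training step for a regression model
# (illustrative; not the paper's configuration).
def adv_train_step(model, loss_fn, opt, x, y, eps=0.01):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)      # gradient w.r.t. the input
    x_adv = (x + eps * grad.sign()).detach()  # FGSM perturbation of the input
    opt.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)       # fit the perturbed batch
    adv_loss.backward()
    opt.step()
    return adv_loss.item()
```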
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- DeFuzz: Deep Learning Guided Directed Fuzzing [41.61500799890691]
We propose a deep learning (DL)-guided directed fuzzing approach for software vulnerability detection, named DeFuzz.
DeFuzz includes two main schemes; the first employs a pre-trained DL prediction model to identify potentially vulnerable functions and their locations (i.e., vulnerable addresses).
Precisely, it uses a Bidirectional LSTM (BiLSTM) to identify attention words, and the vulnerabilities are associated with these attention words within functions.
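A minimal sketch of the prediction half of that pipeline, assuming tokenized source functions and binary vulnerable/benign labels (sizes, tokenization, and the attention pooling are placeholders, not DeFuzz's actual configuration):

```python
import torch
import torch.nn as nn

# Illustrative BiLSTM classifier over tokenized functions: the attention
# weights point at suspicious tokens ("attention words"); not DeFuzz itself.
class VulnPredictor(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, bidirectional=True,
                              batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)  # scores each token
        self.head = nn.Linear(2 * hidden, 2)  # vulnerable vs. benign

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))        # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), -1)  # per-token weights
        ctx = (w.unsqueeze(-1) * h).sum(dim=1)           # attention pooling
        return self.head(ctx), w  # logits + token weights for fuzz targeting
```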
arXiv Detail & Related papers (2020-10-23T03:44:03Z)
- Active Fuzzing for Testing and Securing Cyber-Physical Systems [8.228859318969082]
We propose active fuzzing, an automatic approach for finding test suites of packet-level CPS network attacks.
Key to our solution is the use of online active learning, which iteratively updates the models by sampling payloads.
We evaluate the efficacy of active fuzzing by implementing it for a water purification plant testbed, finding it can automatically discover a test suite of flow, pressure, and over/underflow attacks.
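The learner-in-the-loop can be sketched as below; the payload length, the feature encoding, and the unsafe-state oracle are all stand-ins rather than the paper's implementation.

```python
import random
from sklearn.linear_model import SGDClassifier

# Illustrative online active-learning loop for packet-payload fuzzing.
def active_fuzz(send_payload, encode, rounds=1000, pool_size=50):
    model = SGDClassifier(loss="log_loss")  # P(payload drives the CPS unsafe)
    attacks, fitted = [], False
    for _ in range(rounds):
        pool = [[random.randint(0, 255) for _ in range(8)]
                for _ in range(pool_size)]
        if fitted:
            # Exploit: query the candidate rated most likely to be unsafe.
            payload = max(pool,
                          key=lambda p: model.predict_proba([encode(p)])[0][1])
        else:
            payload = random.choice(pool)   # explore before the model exists
        unsafe = send_payload(payload)      # run on the testbed; True if unsafe
        model.partial_fit([encode(payload)], [int(unsafe)], classes=[0, 1])
        fitted = True
        if unsafe:
            attacks.append(payload)
    return attacks
```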
arXiv Detail & Related papers (2020-05-28T16:19:50Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD attack that overcome failures due to a suboptimal step size and problems with the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
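For reference, the baseline being extended is plain L-infinity PGD with a fixed step size; a textbook sketch (not the paper's adaptive variants, and with illustrative hyperparameters) is:

```python
import torch

# Plain L-infinity PGD with a fixed step size (the baseline the paper's
# step-size-adaptive extensions improve on).
def pgd(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # eps-ball project
        x_adv = x_adv.clamp(0.0, 1.0)                          # valid image range
    return x_adv.detach()
```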
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
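The smoothed classifier whose boundaries are said to shrink is, schematically, a majority vote over Gaussian-noised copies of the input; a textbook sketch (sample count, sigma, and shapes are placeholders):

```python
import torch

# Schematic randomized-smoothing prediction: classify many noisy copies of x
# and return the majority vote (the construction the paper analyzes).
def smoothed_predict(model, x, num_classes, sigma=0.25, n=1000, batch=100):
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n // batch):
            noise = sigma * torch.randn(batch, *x.shape)
            preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes)
    return counts.argmax().item()  # prediction of the smoothed classifier
```

The class-wise accuracy disparity described above concerns how this vote behaves near decision boundaries as sigma grows.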
This list is automatically generated from the titles and abstracts of the papers on this site.