AIM: Automated Input Set Minimization for Metamorphic Security Testing
- URL: http://arxiv.org/abs/2402.10773v2
- Date: Wed, 21 Feb 2024 18:35:27 GMT
- Title: AIM: Automated Input Set Minimization for Metamorphic Security Testing
- Authors: Nazanin Bayati Chaleshtari, Yoann Marquer, Fabrizio Pastore, and
Lionel C. Briand
- Abstract summary: We propose AIM, an approach that automatically selects inputs to reduce testing costs while preserving vulnerability detection capabilities.
AIM includes a clustering-based black-box approach to identify similar inputs based on their security properties.
It also relies on a novel genetic algorithm able to efficiently select diverse inputs while minimizing their total cost.
- Score: 9.232277700524786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although the security testing of Web systems can be automated by generating
crafted inputs, solutions to automate the test oracle, i.e., distinguishing
correct from incorrect outputs, remain preliminary. Specifically, previous work
has demonstrated the potential of metamorphic testing; indeed, security
failures can be determined by metamorphic relations that turn valid inputs into
malicious inputs. However, without further guidance, metamorphic relations are
typically executed on a large set of inputs, which is time-consuming and thus
makes metamorphic testing impractical. We propose AIM, an approach that
automatically selects inputs to reduce testing costs while preserving
vulnerability detection capabilities. AIM includes a clustering-based black-box
approach to identify similar inputs based on their security properties. It
also relies on a novel genetic algorithm able to efficiently select diverse
inputs while minimizing their total cost. Further, it contains a
problem-reduction component to reduce the search space and speed up the
minimization process. We evaluated the effectiveness of AIM on two well-known
Web systems, Jenkins and Joomla, with documented vulnerabilities. We compared
AIM's results with four baselines. Overall, AIM reduced metamorphic testing
time by 84% for Jenkins and 82% for Joomla, while preserving vulnerability
detection. Furthermore, AIM outperformed all the considered baselines regarding
vulnerability coverage.
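The clustering-then-selection idea in the abstract can be illustrated with a minimal sketch. This is not AIM's actual algorithm (AIM uses a genetic search over input subsets); it is a greedy simplification in which inputs sharing the same security-relevant signature are grouped, and the cheapest representative of each group is kept, preserving coverage of every cluster while reducing total cost. All names, the signature function, and the toy inputs are illustrative assumptions.

```python
from collections import defaultdict

def minimize_inputs(inputs, signature, cost):
    """Group inputs by signature, keep the cheapest representative per group."""
    clusters = defaultdict(list)
    for inp in inputs:
        clusters[signature(inp)].append(inp)
    # Greedy stand-in for AIM's genetic search: per cluster, pick the min-cost input.
    return [min(group, key=cost) for group in clusters.values()]

# Toy inputs: (url, cost in seconds); the signature here is the URL path,
# a crude proxy for AIM's security-property-based clustering.
inputs = [
    ("/login?user=a", 3), ("/login?user=b", 5),
    ("/admin?id=1", 7), ("/admin?id=2", 4),
]
selected = minimize_inputs(inputs,
                           signature=lambda i: i[0].split("?")[0],
                           cost=lambda i: i[1])
print(sorted(selected))  # one cheapest input per path cluster
```

A genetic algorithm, as in the paper, would instead search over subsets, trading selection time for better solutions when clusters overlap or costs interact.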
Related papers
- PenHeal: A Two-Stage LLM Framework for Automated Pentesting and Optimal Remediation [18.432274815853116]
PenHeal is a two-stage LLM-based framework designed to autonomously identify and remediate security vulnerabilities.
arXiv Detail & Related papers (2024-07-25T05:42:14Z)
- MKF-ADS: Multi-Knowledge Fusion Based Self-supervised Anomaly Detection System for Control Area Network [9.305680247704542]
Controller Area Network (CAN) is an essential communication protocol connecting Electronic Control Units (ECUs) in the vehicular network.
CAN faces stringent security challenges due to its innate security risks.
We propose a self-supervised multi-knowledge fused anomaly detection model, called MKF-ADS.
arXiv Detail & Related papers (2024-03-07T07:40:53Z)
- Camouflage is all you need: Evaluating and Enhancing Language Model Robustness Against Camouflage Adversarial Attacks [53.87300498478744]
Adversarial attacks represent a substantial challenge in Natural Language Processing (NLP).
This study undertakes a systematic exploration of this challenge in two distinct phases: vulnerability evaluation and resilience enhancement.
Results suggest a trade-off between performance and robustness, with some models maintaining similar performance while gaining robustness.
arXiv Detail & Related papers (2024-02-15T10:58:22Z)
- Towards Reliable AI: Adequacy Metrics for Ensuring the Quality of System-level Testing of Autonomous Vehicles [5.634825161148484]
We introduce a set of black-box test adequacy metrics called "Test suite Instance Space Adequacy" (TISA) metrics.
The TISA metrics offer a way to assess both the diversity and coverage of the test suite and the range of bugs detected during testing.
We evaluate the efficacy of the TISA metrics by examining their correlation with the number of bugs detected in system-level simulation testing of AVs.
arXiv Detail & Related papers (2023-11-14T10:16:05Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety-critical nature of driving systems, no solution to the MOT adaptation problem under test-time domain shift has been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Getting pwn'd by AI: Penetration Testing with Large Language Models [0.0]
This paper explores the potential usage of large language models, such as GPT-3.5, to augment penetration testers with AI sparring partners.
We explore the feasibility of supplementing penetration testers with AI models for two distinct use cases: high-level task planning for security testing assignments and low-level vulnerability hunting within a vulnerable virtual machine.
arXiv Detail & Related papers (2023-07-24T19:59:22Z)
- Evaluation of Parameter-based Attacks against Embedded Neural Networks with Laser Injection [1.2499537119440245]
This work reports, for the first time, a practical and successful variant of the Bit-Flip Attack (BFA) on a 32-bit Cortex-M microcontroller using laser fault injection.
To avoid unrealistic brute-force strategies, we show how simulations help select the most sensitive set of bits from the parameters, taking into account the laser fault model.
arXiv Detail & Related papers (2023-04-25T14:48:58Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- Trojaning Language Models for Fun and Profit [53.45727748224679]
TROJAN-LM is a new class of trojaning attacks in which maliciously crafted LMs trigger host NLP systems to malfunction.
By empirically studying three state-of-the-art LMs in a range of security-critical NLP tasks, we demonstrate that TROJAN-LM possesses the following properties.
arXiv Detail & Related papers (2020-08-01T18:22:38Z)
- FCOS: A simple and strong anchor-free object detector [111.87691210818194]
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion.
Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes.
In contrast, our proposed detector FCOS is anchor box free, as well as proposal free.
arXiv Detail & Related papers (2020-06-14T01:03:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.