Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review
- URL: http://arxiv.org/abs/2303.07546v1
- Date: Tue, 14 Mar 2023 00:27:33 GMT
- Title: Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review
- Authors: João Vitorino, Tiago Dias, Tiago Fonseca, Eva Maia, Isabel Praça
- Abstract summary: This systematic review is focused on the current state-of-the-art of constrained data generation methods applied for adversarial learning and software testing.
It aims to guide researchers and developers to enhance testing tools with adversarial learning methods and improve the resilience and robustness of their digital systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Every novel technology adds hidden vulnerabilities ready to be exploited by a
growing number of cyber-attacks. Automated software testing can be a promising
solution to quickly analyze thousands of lines of code by generating and
slightly modifying function-specific testing data to encounter a multitude of
vulnerabilities and attack vectors. This process draws similarities to the
constrained adversarial examples generated by adversarial learning methods, so
there could be significant benefits to the integration of these methods in
automated testing tools. Therefore, this systematic review is focused on the
current state-of-the-art of constrained data generation methods applied for
adversarial learning and software testing, aiming to guide researchers and
developers to enhance testing tools with adversarial learning methods and
improve the resilience and robustness of their digital systems. The constrained data generation applications found for adversarial machine learning were systematized, and the advantages and limitations of the approaches specific to software testing were thoroughly analyzed, identifying research gaps and opportunities to improve testing tools with adversarial attack methods.
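The review surveys methods rather than prescribing one, but the core analogy it draws, mutating an input so it fools a model while still satisfying the domain's constraints and remaining a valid test case, can be sketched minimally as below. The greedy coordinate search, feature bounds, and toy loss are illustrative assumptions, not a method from the paper.

```python
import numpy as np

def constrained_perturb(x, loss_fn, bounds, immutable, eps=0.1, steps=5):
    """Greedy coordinate search: nudge x to raise loss_fn while keeping it valid.

    x         -- 1-D feature vector (np.ndarray)
    bounds    -- (low, high) valid range per feature
    immutable -- indices that must never change (e.g. structural fields)
    """
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        for i in range(len(x_adv)):
            if i in immutable:
                continue  # constraint: this feature is fixed by the domain
            best, best_loss = x_adv[i], loss_fn(x_adv)
            for delta in (-eps, eps):
                cand = float(np.clip(best + delta, *bounds[i]))  # stay in range
                x_adv[i] = cand
                cand_loss = loss_fn(x_adv)
                if cand_loss > best_loss:
                    best, best_loss = cand, cand_loss
            x_adv[i] = best  # keep the most damaging valid value
    return x_adv

# Toy usage: push a point against a linear decision boundary while
# feature 2 (imagine a fixed protocol field) stays untouched.
w = np.array([1.0, -2.0, 0.5])
adv = constrained_perturb(np.zeros(3), lambda v: float(v @ w),
                          bounds=[(-1, 1)] * 3, immutable={2})
print(adv)  # feature 2 remains 0, the others move within their bounds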
Related papers
- AI-Compass: A Comprehensive and Effective Multi-module Testing Tool for AI Systems [26.605694684145313]
In this study, we design and implement a testing tool, AI-Compass, to comprehensively and effectively evaluate AI systems.
The tool extensively assesses adversarial robustness, model interpretability, and performs neuron analysis.
Our research sheds light on a general solution for the AI systems testing landscape.
arXiv Detail & Related papers (2024-11-09T11:15:17Z)
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z)
- Towards new challenges of modern Pentest [0.0]
This study aims to present current methodologies, tools, and potential challenges applied to Pentest from an updated systematic literature review.
Also, it presents new challenges such as automation of techniques, management of costs associated with offensive security, and the difficulty in hiring qualified professionals to perform Pentest.
arXiv Detail & Related papers (2023-11-21T19:32:23Z)
- Semantic Similarity-Based Clustering of Findings From Security Testing Tools [1.6058099298620423]
It is common practice to use automated security testing tools that generate reports after inspecting a software artifact from multiple perspectives, and such reports often contain duplicate findings.
To identify these duplicate findings manually, a security expert has to invest time, effort, and knowledge.
In this study, we investigated the potential of applying Natural Language Processing for clustering semantically similar security findings.
arXiv Detail & Related papers (2022-11-20T19:03:19Z)
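The summary names the technique but not the pipeline; a minimal sketch of clustering semantically similar findings might look like the following. The TF-IDF vectors, the example findings, and the distance threshold are stand-in assumptions, since the paper's actual embedding model is not specified here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

findings = [
    "SQL injection in /login parameter 'user'",
    "Possible SQL injection at login endpoint (user field)",
    "Outdated TLS version 1.0 enabled on port 443",
]

# Embed each finding; TF-IDF is a simple stand-in for a semantic
# representation of the report text.
vectors = TfidfVectorizer().fit_transform(findings).toarray()

# Group findings whose representations are close; near-duplicates
# reported by different tools should land in the same cluster.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
).fit_predict(vectors)

for label, text in zip(labels, findings):
    print(label, text)  # the two SQL injection findings share a label
```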
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
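The summary mentions an adversarial training strategy without detailing it; a generic adversarial-training step of the kind it alludes to can be sketched as below. The tiny network, the FGSM perturbation, and the epsilon budget are placeholder assumptions, not the paper's actual tagger or setup.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eps = 0.01  # perturbation budget

def train_step(x, y):
    # 1. Craft FGSM perturbations that simulate an attack on the inputs.
    x_pert = x.clone().requires_grad_(True)
    loss_fn(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).detach()

    # 2. Optimize on clean and adversarial samples together, so the model
    #    keeps its nominal performance while becoming harder to fool.
    opt.zero_grad()
    loss = 0.5 * (loss_fn(model(x), y) + loss_fn(model(x_adv), y))
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.randn(8, 16), torch.randint(0, 2, (8,))))
```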
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control [67.52000805944924]
Learn then Test (LTT) is a framework for calibrating machine learning models.
Our main insight is to reframe the risk-control problem as multiple hypothesis testing.
We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision.
arXiv Detail & Related papers (2021-10-03T17:42:03Z)
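The summary states the reframing as multiple hypothesis testing but not the mechanics; one minimal instance of the idea, certifying a threshold by testing whether its risk exceeds a target, is sketched below. The Hoeffding-bound p-value and fixed-sequence testing are simplifying choices, not the paper's full framework.

```python
import numpy as np

def hoeffding_p_value(emp_risk, alpha, n):
    # p-value for H0 "true risk > alpha", losses assumed bounded in [0, 1]
    return 1.0 if emp_risk >= alpha else float(np.exp(-2 * n * (alpha - emp_risk) ** 2))

def calibrate(lambdas, losses_at, alpha=0.1, delta=0.05):
    """Return the last threshold certified to have risk <= alpha.

    lambdas   -- candidates ordered so that risk tends to decrease
    losses_at -- callable: lambda -> per-example losses in [0, 1]
    """
    chosen = None
    for lam in lambdas:  # fixed-sequence testing keeps the FWER at delta
        losses = np.asarray(losses_at(lam))
        if hoeffding_p_value(losses.mean(), alpha, len(losses)) > delta:
            break  # stop at the first hypothesis we cannot reject
        chosen = lam
    return chosen

# Toy usage: risk shrinks as the threshold grows, so order candidates safest-first.
rng = np.random.default_rng(0)
print(calibrate([0.98, 0.95, 0.9],
                lambda lam: (rng.random(500) > lam).astype(float)))
# prints a threshold whose risk is certified <= alpha, or None if none passed
```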
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
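How a coverage paradigm can flag unsafe inputs is illustrated by a minimal range-based monitor below: activation ranges observed on trusted data define the "covered" region, and inputs that escape it are flagged. The paper's actual coverage criteria are richer, so treat this purely as a sketch of the principle; the stand-in activations are random.

```python
import numpy as np

class RangeMonitor:
    def fit(self, activations):
        # activations: (n_samples, n_neurons) recorded on trusted data
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)
        return self

    def is_suspicious(self, act, tol=0.0):
        # Flag inputs whose neurons fire outside the ranges seen during
        # training -- a symptom of adversarial or out-of-distribution data.
        return bool(np.any(act < self.lo - tol) or np.any(act > self.hi + tol))

monitor = RangeMonitor().fit(np.random.rand(1000, 64))  # stand-in activations
print(monitor.is_suspicious(np.full(64, 2.0)))          # True: outside coverage
```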
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
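The summary names no concrete pitfall; one commonly discussed in this literature is data snooping, where test-set statistics leak into training and inflate measured performance. A minimal do/don't contrast on synthetic data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = np.random.rand(200, 5), np.random.randint(0, 2, 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Don't: fitting on all data leaks test-set statistics into training
# (data snooping), biasing the evaluation.
# scaler = StandardScaler().fit(X)

# Do: fit preprocessing on the training split only, then apply to both.
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)
```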
- Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis [2.6397379133308214]
Cybersecurity analysts always prefer solutions that are as interpretable and understandable as rule-based or signature-based detection.
The objective of this paper is to evaluate current state-of-the-art interpretability techniques when applied to ML-based malware detectors.
arXiv Detail & Related papers (2020-01-27T19:10:50Z)
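To make the n-gram angle concrete: a detector built on byte n-gram counts and a linear scoring rule lets an analyst map every learned weight back to a specific byte pattern, which is the interpretability at stake. The 2-gram choice and the weights below are hypothetical, standing in for coefficients a trained linear detector would learn.

```python
from collections import Counter

def byte_ngrams(data: bytes, n: int = 2) -> Counter:
    # Slide a window of n bytes over the file and count each pattern.
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

weights = {b"MZ": 0.1, b"\x90\x90": 0.8}  # e.g. NOP-sled bytes weigh heavily

def score(data: bytes) -> float:
    counts = byte_ngrams(data)
    # Each term maps to a concrete byte pattern, so an analyst can read off
    # exactly which n-grams drove the verdict.
    return sum(w * counts.get(g, 0) for g, w in weights.items())

print(score(b"MZ\x90\x90\x90"))  # 0.1*1 + 0.8*2 = 1.7
```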
This list is automatically generated from the titles and abstracts of the papers on this site.