Constrained Adversarial Learning and its applicability to Automated
Software Testing: a systematic review
- URL: http://arxiv.org/abs/2303.07546v1
- Date: Tue, 14 Mar 2023 00:27:33 GMT
- Title: Constrained Adversarial Learning and its applicability to Automated
Software Testing: a systematic review
- Authors: João Vitorino, Tiago Dias, Tiago Fonseca, Eva Maia, Isabel Praça
- Abstract summary: This systematic review is focused on the current state-of-the-art of constrained data generation methods applied for adversarial learning and software testing.
It aims to guide researchers and developers to enhance testing tools with adversarial learning methods and improve the resilience and robustness of their digital systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Every novel technology adds hidden vulnerabilities ready to be exploited by a
growing number of cyber-attacks. Automated software testing can be a promising
solution to quickly analyze thousands of lines of code by generating and
slightly modifying function-specific testing data to encounter a multitude of
vulnerabilities and attack vectors. This process draws similarities to the
constrained adversarial examples generated by adversarial learning methods, so
there could be significant benefits to the integration of these methods in
automated testing tools. Therefore, this systematic review is focused on the
current state-of-the-art of constrained data generation methods applied for
adversarial learning and software testing, aiming to guide researchers and
developers to enhance testing tools with adversarial learning methods and
improve the resilience and robustness of their digital systems. The
constrained data generation applications found for adversarial machine
learning were systematized, and the advantages and limitations of approaches
specific to software testing were thoroughly analyzed, identifying research
gaps and opportunities to improve testing tools with adversarial attack methods.
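As a toy illustration of the constrained adversarial examples the abstract refers to, the sketch below applies a sign-gradient (FGSM-style) perturbation to a numeric sample while clipping it back into a valid feature domain. The linear model, weights, bounds, and step size are all illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

# Toy stand-in for a trained binary classifier: sign of a linear score.
# The weights and the per-feature bounds are hypothetical.
WEIGHTS = np.array([0.8, -0.9, 0.3])
BOUNDS = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])  # valid domain per feature

def predict(x):
    return int(WEIGHTS @ x > 0)

def constrained_fgsm(x, eps=0.5):
    """One FGSM-style step that respects simple box constraints.

    For a linear score the gradient w.r.t. x is just WEIGHTS, so a
    sign-gradient step moves the score toward the opposite class.
    Clipping keeps the perturbed sample inside the valid feature domain,
    which is the "constrained" part of constrained adversarial learning.
    """
    direction = -np.sign(WEIGHTS) if predict(x) == 1 else np.sign(WEIGHTS)
    x_adv = x + eps * direction
    return np.clip(x_adv, BOUNDS[:, 0], BOUNDS[:, 1])

x = np.array([0.9, 0.1, 0.5])        # original sample, predicted class 1
x_adv = constrained_fgsm(x)
print(predict(x), predict(x_adv))    # → 1 0: the class flips, x_adv stays in-domain
```

Real testing data would add domain constraints beyond simple box bounds (e.g. protocol or type validity), but the clip-after-perturb pattern is the common core.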
Related papers
- Towards new challenges of modern Pentest
This study aims to present current methodologies, tools, and potential challenges applied to Pentest from an updated systematic literature review.
It also presents new challenges such as the automation of techniques, the management of costs associated with offensive security, and the difficulty of hiring qualified professionals to perform Pentest.
arXiv Detail & Related papers (2023-11-21T19:32:23Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under the simple synthesis strategies, it outperforms existing methods by a large margin. Furthermore, it also achieves the state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Few-shot Weakly-supervised Cybersecurity Anomaly Detection
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We then evaluated the implemented framework on three benchmark datasets and reported its performance.
arXiv Detail & Related papers (2023-04-15T04:37:54Z)
- Semantic Similarity-Based Clustering of Findings From Security Testing Tools
In particular, it is common practice to use automated security testing tools that generate reports after inspecting a software artifact from multiple perspectives. These reports often contain duplicate findings, and identifying them manually requires a security expert to invest resources like time, effort, and knowledge.
In this study, we investigated the potential of applying Natural Language Processing for clustering semantically similar security findings.
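A minimal sketch of such semantic clustering, assuming bag-of-words vectors and cosine similarity stand in for the NLP pipeline used in the paper. The findings, the similarity threshold, and the greedy single-pass strategy below are all hypothetical illustrations.

```python
import numpy as np
from collections import Counter

# Hypothetical findings; real ones would come from security testing tool reports.
findings = [
    "SQL injection in login form parameter username",
    "Possible SQL injection via username parameter on login page",
    "Outdated TLS version 1.0 enabled on server",
    "Server supports deprecated TLS 1.0 protocol",
]

vocab = sorted({w for f in findings for w in f.lower().split()})

def bow_vector(text):
    """Bag-of-words count vector over the shared vocabulary."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster(vectors, threshold=0.4):
    """Greedy single-pass clustering: join a finding to the first cluster
    whose representative is similar enough, otherwise start a new cluster."""
    clusters = []  # list of lists of indices into `vectors`
    for i, v in enumerate(vectors):
        for c in clusters:
            if cosine(vectors[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

vectors = [bow_vector(f) for f in findings]
print(cluster(vectors))  # → [[0, 1], [2, 3]]: the two duplicate pairs are grouped
```

In practice, sentence embeddings would replace the bag-of-words vectors to capture paraphrases that share no surface words.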
arXiv Detail & Related papers (2022-11-20T19:03:19Z)
- Improving robustness of jet tagging algorithms with adversarial training
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else?
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
Learn then Test (LTT) is a framework for calibrating machine learning models.
Our main insight is to reframe the risk-control problem as multiple hypothesis testing.
We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision.
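The reframing of risk control as multiple hypothesis testing can be sketched as follows, using a Hoeffding-bound p-value and a Bonferroni correction. The candidate thresholds and empirical risks below are made-up numbers, not results from the paper.

```python
import numpy as np

# Hypothetical calibration data: 9 candidate thresholds (lambdas) and the
# empirical mean loss of the model at each one, measured on n held-out
# examples. All numbers are made up for illustration.
n = 500
lambdas = np.linspace(0.1, 0.9, 9)
mean_losses = np.array([0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10, 0.06, 0.03])

def hoeffding_p_value(mean_loss, n, alpha):
    """P-value for the null hypothesis 'risk(lambda) > alpha', assuming
    losses bounded in [0, 1] (Hoeffding's inequality)."""
    if mean_loss >= alpha:
        return 1.0
    return float(np.exp(-2 * n * (alpha - mean_loss) ** 2))

alpha, delta = 0.2, 0.05  # target risk level and overall error budget

# Test every candidate; the Bonferroni correction splits delta across the
# tests, so every lambda whose null is rejected is certified to keep risk
# below alpha with probability at least 1 - delta.
valid = [lam for lam, m in zip(lambdas, mean_losses)
         if hoeffding_p_value(m, n, alpha) <= delta / lambdas.size]
print(valid)  # thresholds certified at the target risk level
```

The key design point carried over from the framework is that calibration reduces to rejecting nulls of the form "this threshold is unsafe", so any valid multiple-testing correction yields a valid set of thresholds.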
arXiv Detail & Related papers (2021-10-03T17:42:03Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Dos and Don'ts of Machine Learning in Computer Security
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis
Cybersecurity analysts always prefer solutions that are as interpretable and understandable as rule-based or signature-based detection.
The objective of this paper is to evaluate the current state-of-the-art ML models interpretability techniques when applied to ML-based malware detectors.
arXiv Detail & Related papers (2020-01-27T19:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.