Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift
- URL: http://arxiv.org/abs/2001.11821v1
- Date: Mon, 13 Jan 2020 13:54:36 GMT
- Title: Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift
- Authors: Alexandre Dey, Marc Velay, Jean-Philippe Fauvelle, Sylvain Navers
- Abstract summary: Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in Artificial Intelligence (AI) have brought new
capabilities to behavioural analysis (UEBA) for cyber-security, consisting of
the detection of hostile actions based on the unusual nature of events observed
on the Information System. In our previous work (presented at C\&ESAR 2018 and
FIC 2019), we associated deep neural network auto-encoders for anomaly
detection with graph-based event correlation to address major limitations in
UEBA systems. This resulted in reduced false positive and false negative rates
and improved alert explainability, while maintaining real-time performance and
scalability. However, we did not address the natural evolution of behaviours
through time, also known as concept drift. To maintain effective detection
capabilities, an anomaly-based detection system must be continually trained,
which opens a door to an adversary that can conduct the so-called
"frog-boiling" attack by progressively distilling unnoticed attack traces
inside the behavioural models until the complete attack is considered normal.
In this paper, we present a solution that effectively mitigates this attack by
improving the detection process and efficiently leveraging human expertise. We
also present preliminary work on an adversarial AI conducting deception
attacks, which, in turn, will be used to help assess and improve the defense
system. These defensive and offensive AIs implement joint, continual and active
learning, a step that is necessary for assessing, validating and certifying
AI-based defensive solutions.
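The "frog-boiling" risk described above can be illustrated with a minimal sketch: a naive anomaly detector that folds every accepted event back into its baseline can be drifted past any fixed tolerance one small step at a time, while a frozen reference model trained on clean data still catches the cumulative deviation. This is a toy illustration with made-up thresholds and update rates, not the paper's actual auto-encoder system.

```python
import numpy as np

def is_anomalous(x, mean, std, k=3.0):
    """Flag x as anomalous if it falls more than k standard deviations from the mean."""
    return abs(x - mean) > k * std

def frog_boiling_demo(steps=200, drift=0.05, seed=0):
    """Simulate a frog-boiling attack against a continually retrained detector.

    The naive detector updates its baseline after every accepted event, so an
    attacker drifting slowly (each step within tolerance) is never flagged.
    A frozen reference model, fit once on clean data, still detects the
    cumulative shift.
    """
    rng = np.random.default_rng(seed)
    clean = rng.normal(0.0, 1.0, 1000)
    ref_mean, ref_std = clean.mean(), clean.std()  # frozen reference model
    mean = ref_mean                                # continually updated baseline

    naive_alert = frozen_alert = False
    x = float(mean)
    for _ in range(steps):
        x += drift                                 # attacker nudges behaviour slightly
        if is_anomalous(x, mean, ref_std):
            naive_alert = True
        if is_anomalous(x, ref_mean, ref_std):
            frozen_alert = True
        mean = 0.95 * mean + 0.05 * x              # naive continual learning step
    return naive_alert, frozen_alert
```

Running the demo, the continually retrained baseline never raises an alert, while the frozen reference does once the total drift exceeds its threshold; the mitigation idea is to cross-check continual models against trusted baselines and human-validated data rather than retrain blindly.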
Related papers
- Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning [26.403625710805418]
Advanced Persistent Threats (APTs) represent sophisticated cyberattacks characterized by their ability to remain undetected for extended periods.
We propose Slot, an advanced APT detection approach based on provenance graphs and graph reinforcement learning.
We show Slot's outstanding accuracy, efficiency, adaptability, and robustness in APT detection, with most metrics surpassing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-23T14:28:32Z) - Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z) - A Robust Likelihood Model for Novelty Detection [8.766411351797883]
Current approaches to novelty or anomaly detection are based on deep neural networks.
We propose a new prior that aims at learning a robust likelihood for the novelty test, as a defense against attacks.
We also integrate the same prior with a state-of-the-art novelty detection approach.
arXiv Detail & Related papers (2023-06-06T01:02:31Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - RobustSense: Defending Adversarial Attack for Secure Device-Free Human
Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z) - An Overview of Backdoor Attacks Against Deep Neural Networks and
Possible Defences [33.415612094924654]
The goal of this paper is to review the different types of attacks and defences proposed so far.
In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behaviour at test time.
Test-time errors are activated only in the presence of a triggering event corresponding to a properly crafted input sample.
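The trigger mechanism described above can be sketched in a few lines. The example below is a toy illustration on synthetic data with a 1-nearest-neighbour classifier and made-up trigger parameters, not any specific system from the surveyed papers: a fraction of the training set is stamped with a trigger feature and relabelled to the attacker's target class, after which clean inputs are classified normally but triggered inputs are misclassified.

```python
import numpy as np

def poison(X, y, trigger_idx=0, trigger_val=10.0, target=1, rate=0.1, seed=0):
    """Stamp a trigger value onto a random fraction of training samples and
    relabel them to the attacker's target class (training-data corruption)."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), int(rate * len(X)), replace=False)
    Xp[idx, trigger_idx] = trigger_val
    yp[idx] = target
    return Xp, yp

def predict_1nn(X_train, y_train, x):
    """1-nearest-neighbour prediction: label of the closest training point."""
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

# Two well-separated synthetic classes.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),   # class 0
               rng.normal(5.0, 1.0, (100, 5))])  # class 1
y = np.array([0] * 100 + [1] * 100)
Xp, yp = poison(X, y)

clean = np.zeros(5)                  # typical class-0 input
triggered = clean.copy()
triggered[0] = 10.0                  # same input with the trigger stamped on
```

On the poisoned training set, `predict_1nn(Xp, yp, clean)` still returns class 0, while `predict_1nn(Xp, yp, triggered)` returns the attacker's target class 1, because the triggered input lands closest to a poisoned, relabelled training point.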
arXiv Detail & Related papers (2021-11-16T13:06:31Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns an attack optimizer parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Adversarial defense for automatic speaker verification by cascaded
self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks at automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z) - Towards Understanding the Adversarial Vulnerability of Skeleton-based
Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress and currently achieved around 90% accuracy in benign environment.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.