Discovering Imperfectly Observable Adversarial Actions using Anomaly
Detection
- URL: http://arxiv.org/abs/2004.10638v1
- Date: Wed, 22 Apr 2020 15:31:53 GMT
- Title: Discovering Imperfectly Observable Adversarial Actions using Anomaly
Detection
- Authors: Olga Petrova, Karel Durkota, Galina Alperovich, Karel Horak, Michal
Najman, Branislav Bosansky, Viliam Lisy
- Abstract summary: Anomaly detection is a method for discovering unusual and suspicious behavior.
We propose two algorithms for solving such games.
Experiments show that both algorithms are applicable for cases with low feature space dimensions.
- Score: 0.24244694855867271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection is a method for discovering unusual and suspicious
behavior. In many real-world scenarios, the examined events can be directly
linked to the actions of an adversary, such as attacks on computer networks or
frauds in financial operations. While the defender wants to discover such
malicious behavior, the attacker seeks to accomplish their goal (e.g.,
exfiltrating data) while avoiding detection. To this end, anomaly detectors
have been used in a game-theoretic framework that captures these goals as a
two-player competition. We extend the existing models to more realistic
settings by (1) allowing both players to have continuous action spaces and by
assuming that (2) the defender cannot perfectly observe the action of the
attacker. We propose two algorithms for solving such games: a direct
extension of existing algorithms based on discretizing the feature space and
solving a linear program, and a second algorithm based on constrained learning.
Experiments show that both algorithms are applicable in cases with low
feature-space dimensions, but the learning-based method produces less
exploitable strategies and scales to higher dimensions. Moreover, we use real-world
data to compare our approaches with existing classifiers in a data-exfiltration
scenario via the DNS channel. The results show that our models are
significantly less exploitable by an informed attacker.
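
A minimal sketch of the first algorithm's idea (discretize the feature space, then solve the resulting zero-sum matrix game by linear programming) is given below. The 1-D feature grid, Gaussian observation noise, and payoff model are illustrative assumptions, not the paper's actual game; only the discretize-then-LP structure follows the abstract.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Sketch of "discretize, then solve a zero-sum matrix game by LP".
# Assumed toy model: 1-D feature in [0, 1]; the defender mixes over
# detection thresholds, the attacker over feature values; the defender
# observes the attacker's action only through Gaussian noise.
grid = np.linspace(0.0, 1.0, 21)             # discretized feature space
noise = 0.1                                   # imperfect-observation std
attack_gain, fp_cost = 1.0, 0.2               # assumed payoff parameters

def defender_payoff(threshold, action):
    # Probability that the noisy observation of `action` trips `threshold`,
    # minus an (assumed) false-positive penalty for aggressive thresholds.
    p_detect = 1.0 - norm.cdf(threshold, loc=action, scale=noise)
    return p_detect * attack_gain - (1.0 - threshold) * fp_cost

# Payoff matrix: rows = defender thresholds, columns = attacker actions.
A = np.array([[defender_payoff(t, a) for a in grid] for t in grid])

# Defender maximin LP over [x, v]: maximize v s.t. (x^T A)_j >= v, sum x = 1.
n = len(grid)
c = np.zeros(n + 1)
c[-1] = -1.0                                  # linprog minimizes, so min -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (x^T A)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * n + [(None, None)])
x, v = res.x[:-1], res.x[-1]
print("game value:", round(v, 4))
print("defender strategy support:", grid[x > 1e-6])
```

A finer grid approximates the continuous game more closely, but the LP grows with it, which matches the scalability limit the abstract attributes to this approach.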
Related papers
- Interactive Trimming against Evasive Online Data Manipulation Attacks: A Game-Theoretic Approach [10.822843258077997]
Malicious data poisoning attacks can disrupt machine learning processes and lead to severe consequences.
To mitigate these attacks, distance-based defenses, such as trimming, have been proposed.
We present an interactive game-theoretic model to defend against online data manipulation attacks using the trimming strategy (a minimal trimming sketch follows this entry).
arXiv Detail & Related papers (2024-03-15T13:59:05Z)
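
As a loose illustration of the distance-based trimming defense mentioned in this entry (not the paper's interactive game model), one can drop the points farthest from a robust center before fitting; the median center, trim fraction, and least-squares fit below are assumptions.

```python
import numpy as np

def trimmed_fit(X, y, trim_frac=0.1):
    """Distance-based trimming sketch: drop the trim_frac of points
    farthest from the coordinate-wise median, then fit on the rest."""
    center = np.median(X, axis=0)
    dist = np.linalg.norm(X - center, axis=1)
    keep = dist <= np.quantile(dist, 1.0 - trim_frac)
    # Any downstream estimator works; plain least squares keeps it short.
    coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return coef

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
X[:5] += 10.0                                  # a few poisoned points
print(trimmed_fit(X, y).round(2))              # close to [1, -2, 0.5]
```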
- An Adversarial Approach to Evaluating the Robustness of Event Identification Models [12.862865254507179]
This paper considers a physics-based modal decomposition method to extract features for event classification.
The resulting classifiers are tested against an adversarial algorithm to evaluate their robustness.
arXiv Detail & Related papers (2024-02-19T18:11:37Z)
- DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network dubbed DOAD to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Semantic Novelty Detection via Relational Reasoning [17.660958043781154]
We propose a novel representation learning paradigm based on relational reasoning.
Our experiments show that this knowledge is directly transferable to a wide range of scenarios.
It can be exploited as a plug-and-play module to convert closed-set recognition models into reliable open-set ones.
arXiv Detail & Related papers (2022-07-18T15:49:27Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check (a toy version of such a check is sketched below).
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
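
A toy caricature of the context-consistency check this entry refers to, assuming a hand-made table of plausible label co-occurrences (real defenses learn such context from data):

```python
# Toy context-consistency check over detected object labels; the table of
# plausible combinations is a made-up assumption (real defenses learn it).
PLAUSIBLE_CONTEXTS = {
    frozenset({"car", "road"}),
    frozenset({"car", "road", "person"}),
    frozenset({"boat", "water"}),
}

def context_consistent(detected_labels):
    """Accept a detection set only if its label combination co-occurs
    in the plausibility table; otherwise treat it as a likely attack."""
    return frozenset(detected_labels) in PLAUSIBLE_CONTEXTS

print(context_consistent(["car", "road"]))   # True: plausible scene
print(context_consistent(["boat", "road"]))  # False: context violated
```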
- Automated Decision-based Adversarial Attacks [48.01183253407982]
We consider the practical and challenging decision-based black-box adversarial setting.
Under this setting, the attacker can only acquire the final classification labels by querying the target model.
We propose to automatically discover decision-based adversarial attack algorithms (a hand-written baseline for this setting is sketched below).
arXiv Detail & Related papers (2021-05-09T13:15:10Z)
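
For intuition, here is a hand-written baseline for the decision-based (hard-label) setting described in this entry, in the spirit of boundary-walk attacks rather than the paper's automatically discovered algorithms; `model_label` is an assumed black-box callable returning only the predicted class.

```python
import numpy as np

def boundary_walk(model_label, x, x_adv_init, y_true, steps=500, seed=0):
    """Hard-label attack sketch: start from any input already classified
    differently from y_true and creep toward the clean input x, keeping
    only steps whose predicted label (the sole feedback) stays wrong."""
    rng = np.random.default_rng(seed)
    x_adv = x_adv_init.copy()
    for _ in range(steps):
        step = 0.05 * (x - x_adv) + 0.01 * rng.standard_normal(x.shape)
        candidate = x_adv + step
        if model_label(candidate) != y_true:  # one black-box query
            x_adv = candidate                 # still adversarial, closer to x
    return x_adv
```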
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm (a baseline of that type is sketched below).
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
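
The single-threshold comparator mentioned in this entry is secretary-style; a generic baseline of that type can be sketched as follows. The observation fraction and stream model are assumptions, and this is not the paper's improved algorithm.

```python
import numpy as np

def single_threshold_picks(stream, k, observe_frac=0.37):
    """Secretary-style baseline: watch the first part of the stream, set
    the threshold to the best value observed, then commit to the first k
    later items that exceed it (decisions are irrevocable and online)."""
    cutoff = max(1, int(observe_frac * len(stream)))
    threshold = max(stream[:cutoff])
    picks = [i for i, v in enumerate(stream) if i >= cutoff and v > threshold]
    return picks[:k]

rng = np.random.default_rng(0)
print(single_threshold_picks(list(rng.random(20)), k=2))
```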
- An Analysis of Robustness of Non-Lipschitz Networks [35.64511156980701]
Small input perturbations can often produce large movements in the network's final-layer feature space.
In our model, the adversary may move data an arbitrary distance in feature space but only in random low-dimensional subspaces.
We provide theoretical guarantees for setting algorithm parameters to optimize accuracy-abstention trade-offs using data-driven methods (an abstaining classifier of this flavor is sketched below).
arXiv Detail & Related papers (2020-10-13T03:56:39Z)
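
A minimal sketch of the accuracy-abstention trade-off this entry analyzes, using a nearest-neighbor rule that abstains on small margins; the fixed margin rule is an assumption, not the paper's data-driven parameter setting.

```python
import numpy as np

def predict_or_abstain(x, train_X, train_y, margin=0.5):
    """Nearest-neighbor rule that abstains (returns None) whenever the
    closest point of a different class is almost as near as the winner."""
    d = np.linalg.norm(train_X - x, axis=1)
    order = np.argsort(d)
    best = order[0]
    rival = next((i for i in order if train_y[i] != train_y[best]), None)
    if rival is None or d[rival] - d[best] >= margin:
        return train_y[best]
    return None  # margin too small: abstain rather than risk an error
```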
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task (a schematic one-class formulation is sketched below).
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
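
A schematic of the anomaly-detection formulation of fPAD referenced in this entry: train a one-class model on bona fide samples only and flag spoofs as anomalies at test time. The random stand-in features and the IsolationForest choice below are assumptions, not the paper's CNN.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One-class formulation of fPAD: fit on bona fide (live) samples only,
# flag spoofs as anomalies. Random vectors stand in for CNN features.
rng = np.random.default_rng(0)
bona_fide_feats = rng.normal(0.0, 1.0, size=(500, 128))
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(bona_fide_feats)

test_feats = rng.normal(1.5, 2.0, size=(10, 128))  # off-manifold batch
is_spoof = detector.predict(test_feats) == -1      # -1 marks an anomaly
print(is_spoof)
```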
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.