Sensitive Samples Revisited: Detecting Neural Network Attacks Using
Constraint Solvers
- URL: http://arxiv.org/abs/2109.03966v1
- Date: Tue, 7 Sep 2021 01:34:02 GMT
- Title: Sensitive Samples Revisited: Detecting Neural Network Attacks Using
Constraint Solvers
- Authors: Amel Nestor Docena (Northeastern University), Thomas Wahl
(Northeastern University), Trevor Pearce (Northeastern University), Yunsi Fei
(Northeastern University)
- Abstract summary: Neural Networks are used in numerous security- and safety-relevant domains.
They are a popular target of attacks that subvert their classification capabilities.
In this paper we offer an alternative to the gradient-based computation of sensitive samples, using symbolic constraint solvers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Networks are used today in numerous security- and safety-relevant
domains and are, as such, a popular target of attacks that subvert their
classification capabilities by manipulating the network parameters. Prior work
has introduced sensitive samples -- inputs highly sensitive to parameter
changes -- to detect such manipulations, and proposed a gradient ascent-based
approach to compute them. In this paper we offer an alternative, using symbolic
constraint solvers. We model the network and a formal specification of a
sensitive sample in the language of the solver and ask for a solution. This
approach supports a rich class of queries, corresponding, for instance, to the
presence of certain types of attacks. Unlike earlier techniques, our approach
does not depend on convex search domains, or on the suitability of a starting
point for the search. We address the performance limitations of constraint
solvers by partitioning the search space for the solver, and exploring the
partitions according to a balanced schedule that still retains completeness of
the search. We demonstrate the impact of the use of solvers in terms of
functionality and search efficiency, using a case study for the detection of
Trojan attacks on Neural Networks.
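To make the constraint-based formulation concrete, below is a minimal sketch (not the authors' implementation) of posing a sensitive-sample query to an off-the-shelf SMT solver, using the Z3 Python bindings (pip install z3-solver). The toy 2-2-1 network, the hypothetical Trojan-style weight change, and the 0.5 sensitivity margin are all illustrative assumptions, not values from the paper.

```python
# Sketch: find an input on which a reference network and a (hypothetically)
# manipulated copy disagree by a margin -- a "sensitive sample" -- by asking
# the Z3 constraint solver for a solution.
from z3 import Real, Solver, If, sat

def relu(x):
    # ReLU encoded as a symbolic if-then-else over solver expressions.
    return If(x > 0, x, 0)

# Toy 2-2-1 network: reference weights w_ref and an assumed manipulated
# copy w_bad (one hidden weight shifted, mimicking a Trojan attack).
w_ref = [[0.6, -0.4], [0.3, 0.8]]
w_bad = [[0.6, -0.4], [0.3, 1.3]]  # assumed parameter manipulation
v = [1.0, -1.0]                    # shared output-layer weights

def net(x, w):
    # Hidden layer followed by a linear output neuron.
    h = [relu(w[i][0] * x[0] + w[i][1] * x[1]) for i in range(2)]
    return v[0] * h[0] + v[1] * h[1]

x = [Real('x0'), Real('x1')]
s = Solver()
# Input-domain constraints (assumed normalized inputs).
for xi in x:
    s.add(xi >= 0, xi <= 1)
# Sensitivity specification: the two parameter settings must disagree
# by at least the margin, so the sample exposes the manipulation.
s.add(net(x, w_ref) - net(x, w_bad) >= 0.5)

if s.check() == sat:
    m = s.model()
    print('sensitive sample:', [m[xi] for xi in x])
```

The partitioning idea from the abstract could then be realized by splitting the input box [0,1]^2 into sub-boxes and issuing one such query per sub-box on a round-robin schedule; because the sub-boxes cover the whole domain, the search stays complete.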
Related papers
- Feature Selection for Network Intrusion Detection [3.7414804164475983]
We present a novel information-theoretic method that facilitates the exclusion of non-informative features when detecting network intrusions.
The proposed method is based on function approximation using a neural network, which enables a version of our approach that incorporates a recurrent layer.
arXiv Detail & Related papers (2024-11-18T14:25:55Z)
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks and various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
- Few-Shot Anomaly Detection with Adversarial Loss for Robust Feature Representations [8.915958745269442]
Anomaly detection is a critical and challenging task that aims to identify data points deviating from normal patterns and distributions within a dataset.
Various methods have been proposed using a one-class-one-model approach, but these techniques often face practical problems such as memory inefficiency and the requirement of sufficient data for training.
We propose a few-shot anomaly detection method that integrates adversarial training loss to obtain more robust and generalized feature representations.
arXiv Detail & Related papers (2023-12-04T09:45:02Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes into account both kinds of feedback for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account yields statistically significant improvements over the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review [0.0]
It has been found that deep learning models are vulnerable to data instances that can mislead the model to make incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z)
- Incorporating domain knowledge into neural-guided search [3.1542695050861544]
AutoML problems involve optimizing discrete objects under a black-box reward.
Neural-guided search provides a flexible means of searching these spaces using an autoregressive recurrent neural network.
We formalize a framework for incorporating such in situ priors and constraints into neural-guided search.
arXiv Detail & Related papers (2021-07-19T22:34:43Z)
- Multi-Source Domain Adaptation for Object Detection [52.87890831055648]
We propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN).
DMSN can simultaneously enhance domain invariance and preserve discriminative power.
We develop a novel pseudo learning algorithm to approximate optimal parameters of the pseudo target subset.
arXiv Detail & Related papers (2021-06-30T03:17:20Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.