The Dilemma Between Data Transformations and Adversarial Robustness for
Time Series Application Systems
- URL: http://arxiv.org/abs/2006.10885v2
- Date: Thu, 9 Dec 2021 22:37:44 GMT
- Title: The Dilemma Between Data Transformations and Adversarial Robustness for
Time Series Application Systems
- Authors: Sheila Alemany, Niki Pissinou
- Abstract summary: Adrial examples, or nearly indistinguishable inputs created by an attacker, significantly reduce machine learning accuracy.
This work explores how data transformations may impact an adversary's ability to create effective adversarial samples on a recurrent neural network.
A data transformation technique reduces the vulnerability to adversarial examples only if it approximates the dataset's intrinsic dimension.
- Score: 1.2056495277232115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples, or nearly indistinguishable inputs created by an
attacker, significantly reduce machine learning accuracy. Theoretical evidence
has shown that the high intrinsic dimensionality of datasets facilitates an
adversary's ability to develop effective adversarial examples in classification
models. Relatedly, the presentation of data to a learning model impacts its
performance. For example, we have seen this through dimensionality reduction
techniques used to aid with the generalization of features in machine learning
applications. Thus, data transformation techniques go hand-in-hand with
state-of-the-art learning models in decision-making applications such as
intelligent medical or military systems. With this work, we explore how data
transformation techniques such as feature selection, dimensionality reduction,
and trend extraction may impact an adversary's ability to create
effective adversarial samples on a recurrent neural network. Specifically, we
analyze this impact from the perspective of the data manifold and the presentation of
its intrinsic features. Our evaluation empirically shows that feature selection
and trend extraction techniques may increase the RNN's vulnerability. A data
transformation technique reduces the vulnerability to adversarial examples only
if it approximates the dataset's intrinsic dimension, minimizes codimension,
and maintains higher manifold coverage.
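To make the abstract's criterion concrete, below is a minimal Python sketch, not the authors' implementation, of one way to estimate a dataset's intrinsic dimension and project the data onto roughly that many components. The Two-NN estimator, the PCA projection, and the toy data are illustrative assumptions; the paper's actual transformations (feature selection, trend extraction, etc.) and its RNN evaluation are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def twonn_intrinsic_dimension(X):
    # Two-NN estimator (Facco et al., 2017): the ratio of each point's second-
    # to first-nearest-neighbor distance follows a Pareto law whose shape
    # parameter is the intrinsic dimension; fit that shape by maximum likelihood.
    nn = NearestNeighbors(n_neighbors=3).fit(X)
    dists, _ = nn.kneighbors(X)          # column 0 is the distance to the point itself (0)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / np.maximum(r1, 1e-12)      # guard against duplicate points
    mu = mu[mu > 1.0]                    # drop degenerate ratios
    return len(mu) / np.sum(np.log(mu))

# Hypothetical stand-in for flattened time-series windows: 20 observed features
# that actually lie on a 3-dimensional linear manifold.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 20))

d_hat = twonn_intrinsic_dimension(X)
n_keep = max(1, int(round(d_hat)))
print(f"estimated intrinsic dimension ~ {d_hat:.2f}; keeping {n_keep} PCA components")

# Projecting to ~d_hat components keeps the codimension of the transformed
# representation small while retaining most of the manifold's variance; the
# reduced windows would then be fed to the RNN under attack.
pca = PCA(n_components=n_keep).fit(X)
X_reduced = pca.transform(X)
print("explained variance retained:", pca.explained_variance_ratio_.sum())

In the paper's terms, a transformation that keeps roughly this many components approximates the intrinsic dimension, minimizes codimension, and maintains high manifold coverage, which is the condition under which it is claimed to reduce adversarial vulnerability.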
Related papers
- Exploiting the Data Gap: Utilizing Non-ignorable Missingness to Manipulate Model Learning [13.797822374912773]
Adversarial Missingness (AM) attacks are motivated by maliciously engineering non-ignorable missingness mechanisms.
In this work we focus on associational learning in the context of AM attacks.
We formulate the learning of the adversarial missingness mechanism as a bi-level optimization.
arXiv Detail & Related papers (2024-09-06T17:10:28Z) - Machine unlearning through fine-grained model parameters perturbation [26.653596302257057]
We propose fine-grained Top-K and Random-k parameters perturbed inexact machine unlearning strategies.
We also tackle the challenge of evaluating the effectiveness of machine unlearning.
arXiv Detail & Related papers (2024-01-09T07:14:45Z) - Uncovering the Hidden Cost of Model Compression [43.62624133952414]
Visual Prompting has emerged as a pivotal method for transfer learning in computer vision.
Model compression detrimentally impacts the performance of visual prompting-based transfer.
However, negative effects on calibration are not present when models are compressed via quantization.
arXiv Detail & Related papers (2023-08-29T01:47:49Z) - Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Efficient Multidimensional Functional Data Analysis Using Marginal
Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently by the tensor decomposition.
We conclude with a real data application in neuroimaging.
arXiv Detail & Related papers (2021-07-30T16:02:15Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z) - On the Transferability of Adversarial Attacksagainst Neural Text
Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)