Wasserstein Adversarial Examples on Univariant Time Series Data
- URL: http://arxiv.org/abs/2303.12357v1
- Date: Wed, 22 Mar 2023 07:50:15 GMT
- Title: Wasserstein Adversarial Examples on Univariant Time Series Data
- Authors: Wenjie Wang, Li Xiong, Jian Lou
- Abstract summary: We propose adversarial examples in the Wasserstein space for time series data.
We use Wasserstein distance to bound the perturbation between normal examples and adversarial examples.
We empirically evaluate the proposed attack on several time series datasets in the healthcare domain.
- Score: 23.15675721397447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are crafted by adding indistinguishable perturbations to
normal examples in order to fool a well-trained deep learning model into
misclassifying. In the context of computer vision, this notion of
indistinguishability is typically bounded by $L_{\infty}$ or other norms.
However, these norms are not appropriate for measuring indistinguishability
for time series data. In this work, we propose adversarial examples in the
Wasserstein space for time series data for the first time and utilize
Wasserstein distance to bound the perturbation between normal examples and
adversarial examples. We introduce Wasserstein projected gradient descent
(WPGD), an adversarial attack method for perturbing univariant time series
data. We leverage the closed-form solution of Wasserstein distance in the 1D
space to calculate the projection step of WPGD efficiently with the gradient
descent method. We further propose a two-step projection so that the search for
adversarial examples in the Wasserstein space is guided and constrained by
Euclidean norms to yield more effective and imperceptible perturbations. We
empirically evaluate the proposed attack on several time series datasets in the
healthcare domain. Extensive results demonstrate that the Wasserstein attack is
powerful and can successfully attack most of the target classifiers with a high
attack success rate. To better study the nature of Wasserstein adversarial
examples, we evaluate a strong defense mechanism named Wasserstein smoothing for
potential certified robustness defense. Although the defense can achieve some
accuracy gain, it still has limitations in many cases and leaves room for
developing a stronger certified robustness method against Wasserstein adversarial
examples on univariant time series data.
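For intuition, here is a minimal sketch (not from the paper) of the two ingredients the abstract describes: the closed-form 1-Wasserstein distance between univariant series viewed as 1-D distributions over time steps (the L1 distance between their cumulative mass functions), and a WPGD-style loop that alternates gradient steps with a projection back into the Wasserstein ball. All names (`wasserstein_1d`, `wpgd_attack`, `model_grad`) are illustrative, and the projection is a simple bisection on the perturbation scale, a stand-in for the exact projection and the Euclidean-guided two-step projection derived in the paper.

```python
import numpy as np

def wasserstein_1d(x, y):
    # Closed-form 1-Wasserstein distance between two nonnegative
    # univariate series viewed as 1-D distributions over time steps:
    # the L1 distance between their cumulative mass functions
    # (unit grid spacing assumed).
    a = x / x.sum()
    b = y / y.sum()
    return np.abs(np.cumsum(a) - np.cumsum(b)).sum()

def wpgd_attack(model_grad, x, eps, alpha=0.01, steps=40):
    # Illustrative WPGD loop: take a signed gradient step, then shrink
    # the perturbation by bisection until the W1 constraint holds.
    # model_grad is a hypothetical callable returning d(loss)/dx.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(model_grad(x_adv))
        if wasserstein_1d(x, x_adv) > eps:
            lo, hi = 0.0, 1.0
            for _ in range(30):
                mid = 0.5 * (lo + hi)
                if wasserstein_1d(x, x + mid * (x_adv - x)) <= eps:
                    lo = mid
                else:
                    hi = mid
            x_adv = x + lo * (x_adv - x)
    return x_adv

# Toy usage: push a (positive-valued) sine wave toward a shifted copy
# under a W1 budget; the "gradient" here is just the direction to the target.
t = np.linspace(0.0, 1.0, 64)
x = np.sin(2 * np.pi * t) + 2.0
target = np.roll(x, 3)
x_adv = wpgd_attack(lambda z: target - z, x, eps=0.01)
```

Scaling the perturbation is a crude projection: it only searches along the ray from x to x_adv, whereas the paper's closed-form projection can redistribute mass more freely; it is used here only to keep the sketch short.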
Related papers
- Private Wasserstein Distance with Random Noises [7.459793194754823]
We investigate the underlying triangular properties within the Wasserstein space, leading to a straightforward solution named TriangleWad.
TriangleWad is 20 times faster, keeps raw-data information truly invisible, and enhances resilience against attacks without sacrificing estimation accuracy.
arXiv Detail & Related papers (2024-04-10T06:58:58Z)
- Attacking Byzantine Robust Aggregation in High Dimensions [13.932039723114299]
Training modern neural networks or models typically requires averaging over a sample of high-dimensional vectors.
Poisoning attacks can skew or bias the average vectors used to train the model, forcing the model to learn specific patterns or avoid learning anything useful.
We present a new attack called HIDRA on practical realizations of strong defenses, which subverts their claim of dimension-independent bias.
arXiv Detail & Related papers (2023-12-22T06:25:46Z)
- Temporal Robustness against Data Poisoning [69.01705108817785]
Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data.
We propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack was started and how long it lasted.
arXiv Detail & Related papers (2023-02-07T18:59:19Z)
- Mutual Wasserstein Discrepancy Minimization for Sequential Recommendation [82.0801585843835]
We propose a novel self-supervised learning framework based on Mutual WasserStein discrepancy minimization (MStein) for sequential recommendation.
We also propose a novel contrastive learning loss based on Wasserstein Discrepancy Measurement.
arXiv Detail & Related papers (2023-01-28T13:38:48Z)
- ADC: Adversarial attacks against object Detection that evade Context consistency checks [55.8459119462263]
We show that even context consistency checks can be brittle to properly crafted adversarial examples.
We propose an adaptive framework to generate examples that subvert such defenses.
Our results suggest that how to robustly model context and check its consistency is still an open problem.
arXiv Detail & Related papers (2021-10-24T00:25:09Z)
- A Framework for Verification of Wasserstein Adversarial Robustness [0.6554326244334867]
Adding imperceptible noise to images can lead to severe misclassifications by the machine learning model.
We present a new Wasserstein adversarial attack based on projected gradient descent.
arXiv Detail & Related papers (2021-10-13T15:59:44Z)
- Two-sample Test using Projected Wasserstein Distance [18.46110328123008]
We develop a projected Wasserstein distance for the two-sample test, a fundamental problem in statistics and machine learning.
A key contribution is coupling optimal projection with finding the low-dimensional linear mapping that maximizes the Wasserstein distance between the projected probability distributions.
arXiv Detail & Related papers (2020-10-22T18:08:58Z)
- Stronger and Faster Wasserstein Adversarial Attacks [25.54761631515683]
Deep models are vulnerable to "small, imperceptible" perturbations known as adversarial attacks.
We develop an exact yet efficient projection operator to enable a stronger projected gradient attack.
We also show that the Frank-Wolfe method equipped with a suitable linear minimization oracle works extremely fast under Wasserstein constraints (a generic Frank-Wolfe skeleton appears after this list).
arXiv Detail & Related papers (2020-08-06T21:36:12Z)
- On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification [101.0377583883137]
Projection robust (PR) OT seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected.
Our first contribution is to establish several fundamental statistical properties of PR Wasserstein distances.
Next, we propose the integral PR Wasserstein (IPRW) distance as an alternative to the PRW distance, obtained by averaging over subspaces rather than optimizing over them.
arXiv Detail & Related papers (2020-06-22T14:35:33Z)
- Augmented Sliced Wasserstein Distances [55.028065567756066]
We propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs).
ASWDs are constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks.
Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.
arXiv Detail & Related papers (2020-06-15T23:00:08Z)
- Disentangled Representation Learning with Wasserstein Total Correlation [90.44329632061076]
We introduce Wasserstein total correlation in both variational autoencoder and Wasserstein autoencoder settings to learn disentangled latent representations.
A critic is adversarially trained along with the main objective to estimate the Wasserstein total correlation term.
We show that the proposed approach achieves comparable disentanglement performance with smaller sacrifices in reconstruction ability.
arXiv Detail & Related papers (2019-12-30T05:31:28Z)
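As an aside on the Frank-Wolfe result cited above (Stronger and Faster Wasserstein Adversarial Attacks), the generic method is easy to state; a minimal skeleton follows. The Wasserstein-specific linear minimization oracle from that paper is not reproduced here: `lmo` is a hypothetical callable that must return the feasible point minimizing the inner product with the given gradient.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=50):
    # Generic Frank-Wolfe (conditional gradient) loop for minimization.
    # grad: callable returning the objective gradient at x (for an attack,
    #       use the gradient of the negated loss).
    # lmo:  linear minimization oracle; given a gradient g it returns the
    #       feasible point minimizing <g, s>. For a Wasserstein ball this
    #       oracle is the hard part and is not shown here.
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        s = lmo(grad(x))
        gamma = 2.0 / (t + 2.0)   # classic diminishing step size
        x = x + gamma * (s - x)   # convex combination keeps x feasible
    return x
```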