Unlearnable Examples For Time Series
- URL: http://arxiv.org/abs/2402.02028v1
- Date: Sat, 3 Feb 2024 04:48:47 GMT
- Title: Unlearnable Examples For Time Series
- Authors: Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
- Abstract summary: Unlearnable examples (UEs) refer to training samples modified to be unlearnable to Deep Neural Networks (DNNs).
We introduce the first UE generation method to protect time series data from unauthorized training by DNNs.
- Score: 33.83340340399046
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Unlearnable examples (UEs) refer to training samples modified to be
unlearnable to Deep Neural Networks (DNNs). These examples are usually
generated by adding error-minimizing noises that can fool a DNN model into
believing that there is nothing (no error) to learn from the data. The concept
of UE has been proposed as a countermeasure against unauthorized data
exploitation on personal data. While UE has been extensively studied on images,
it is unclear how to craft effective UEs for time series data. In this work, we
introduce the first UE generation method to protect time series data from
unauthorized training by deep learning models. To this end, we propose a new
form of error-minimizing noise that can be selectively applied to
specific segments of time series, rendering them unlearnable to DNN models
while remaining imperceptible to human observers. Through extensive experiments
on a wide range of time series datasets, we demonstrate that the proposed UE
generation method is effective in both classification and generation tasks. It
can protect time series data against unauthorized exploitation, while
preserving their utility for legitimate usage, thereby contributing to the
development of secure and trustworthy machine learning systems.
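To make the abstract's idea concrete, below is a minimal sketch of the error-minimizing ("min-min") noise principle that UE methods build on, adapted to time series with a hypothetical binary segment mask standing in for the selective application described above. The function, its parameters (eps, steps, lr, segment_mask), and the PGD-style inner loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of error-minimizing noise for a
# time-series classifier. The noise is optimized to drive the training loss
# down, so the perturbed samples look "already learned" to the surrogate
# model, and it is applied only where segment_mask == 1.
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, segment_mask, eps=0.05, steps=20, lr=0.01):
    """x: (batch, channels, length) series, y: labels, segment_mask: (length,) in {0, 1}."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * segment_mask), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()   # descend: minimize (not maximize) the loss
            delta.clamp_(-eps, eps)     # keep the perturbation small / imperceptible
    return (delta * segment_mask).detach()
```

In the original image-domain formulation, this inner minimization is alternated with ordinary training of the surrogate model; per the abstract, the paper's contribution is a new form of such noise that targets specific segments of a time series while remaining imperceptible.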
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Re-thinking Data Availablity Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z) - The Devil's Advocate: Shattering the Illusion of Unexploitable Data
using Diffusion Models [14.018862290487617]
We show that a carefully designed denoising process can counteract the data-protecting perturbations.
Our approach, called AVATAR, delivers state-of-the-art performance against a suite of recent availability attacks.
arXiv Detail & Related papers (2023-03-15T10:20:49Z) - Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples [128.25509832644025]
There is a growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet.
UEs are training samples with invisible but unlearnable noise added, which has been found to prevent unauthorized training of machine learning models.
We present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations.
arXiv Detail & Related papers (2022-12-31T04:26:25Z) - Data Isotopes for Data Provenance in DNNs [27.549744883427376]
We show how users can create special data points we call isotopes, which introduce "spurious features" into DNNs during training.
A user can apply statistical hypothesis testing to detect whether a model has learned the spurious features associated with their isotopes, and hence whether it was trained on the user's data.
Our results confirm efficacy in multiple settings, detecting and distinguishing between hundreds of isotopes with high accuracy.
arXiv Detail & Related papers (2022-08-29T21:28:35Z) - Effective and Efficient Training for Sequential Recommendation using
Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that models enhanced with our method achieve performance exceeding or very close to that of the state-of-the-art BERT4Rec.
arXiv Detail & Related papers (2022-07-06T13:06:31Z) - One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks [28.502489028888608]
Unlearnable examples (ULEs) aim to protect data from unauthorized usage for training DNNs.
In adversarial training, the unlearnability of error-minimizing noise will severely degrade.
We propose a novel model-free method, named One-Pixel Shortcut, which only perturbs a single pixel of each image and makes the dataset unlearnable (a rough sketch of this idea follows the related papers list below).
arXiv Detail & Related papers (2022-05-24T15:17:52Z) - Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z) - Unlearnable Examples: Making Personal Data Unexploitable [42.36793103856988]
Error-minimizing noise is intentionally generated to reduce the error of one or more training examples to close to zero.
We empirically verify the effectiveness of error-minimizing noise in both sample-wise and class-wise forms.
arXiv Detail & Related papers (2021-01-13T06:15:56Z)
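As a complement to the gradient-based noise sketched above, the One-Pixel Shortcut entry describes a model-free way of making image datasets unlearnable. The sketch below illustrates only the shortcut idea: it stamps one class-dependent pixel value at a fixed position per class, whereas the actual OPS method searches for the most effective position and value; the names, the random placement, and the seed are assumptions.

```python
# Rough illustration (not the OPS authors' search procedure) of a one-pixel
# shortcut: every image of class c gets one fixed pixel set to a class-specific
# value, creating an easy shortcut feature that a network tends to learn
# instead of the real image content.
import numpy as np

def one_pixel_shortcut(images, labels, num_classes, seed=0):
    """images: (n, h, w, c) uint8 array; labels: (n,) integer class ids."""
    out = images.copy()
    h, w = images.shape[1:3]
    rng = np.random.default_rng(seed)
    # one (row, col) position and one value per class; OPS searches for these,
    # here they are drawn at random purely as a placeholder
    rows = rng.integers(0, h, size=num_classes)
    cols = rng.integers(0, w, size=num_classes)
    vals = rng.integers(0, 2, size=num_classes) * 255
    for c in range(num_classes):
        mask = labels == c
        out[mask, rows[c], cols[c], :] = vals[c]
    return out
```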