Efficient Availability Attacks against Supervised and Contrastive
Learning Simultaneously
- URL: http://arxiv.org/abs/2402.04010v1
- Date: Tue, 6 Feb 2024 14:05:05 GMT
- Title: Efficient Availability Attacks against Supervised and Contrastive
Learning Simultaneously
- Authors: Yihan Wang and Yifan Zhu and Xiao-Shan Gao
- Abstract summary: We propose contrastive-like data augmentations in supervised error minimization or maximization frameworks to obtain attacks effective for both SL and CL.
Our proposed AUE and AAP attacks achieve state-of-the-art worst-case unlearnability across SL and CL algorithms with less computation consumption, showing promise for real-world applications.
- Score: 26.018467038778006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Availability attacks can prevent the unauthorized use of private data and
commercial datasets by generating imperceptible noise and making unlearnable
examples before release. Ideally, the obtained unlearnability prevents
algorithms from training usable models. When supervised learning (SL)
algorithms fail, a malicious data collector may resort to contrastive
learning (CL) algorithms to bypass the protection. Through
evaluation, we have found that most of the existing methods are unable to
achieve both supervised and contrastive unlearnability, which poses risks to
data protection. Different from recent methods based on contrastive error
minimization, we employ contrastive-like data augmentations in supervised error
minimization or maximization frameworks to obtain attacks effective for both SL
and CL. Our proposed AUE and AAP attacks achieve state-of-the-art worst-case
unlearnability across SL and CL algorithms at a lower computational cost,
showing promise for real-world applications.
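The error-minimization framework referenced in the abstract can be viewed as a bi-level min-min problem: the attacker optimizes bounded noise that drives the training loss toward zero (creating shortcut features), while the model is trained on the perturbed data. The following is a hypothetical toy sketch with a linear logistic model in NumPy; the variable names, noise budget, and optimizer settings are illustrative assumptions, not the paper's AUE/AAP implementation.

```python
import numpy as np

# Toy data: linearly separable binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
w_true = rng.normal(size=10)
y = np.sign(X @ w_true)

eps = 0.5                  # L_inf noise budget (assumed for illustration)
delta = np.zeros_like(X)   # per-sample error-minimizing noise
w = np.zeros(10)           # model parameters

def loss_grad(Xp, y, w):
    """Logistic loss plus gradients w.r.t. parameters and inputs."""
    z = Xp @ w
    p = 1.0 / (1.0 + np.exp(-y * z))         # prob. of the correct class
    g = -(y * (1 - p))                       # dL/dz per sample
    loss = -np.mean(np.log(p + 1e-12))
    grad_w = Xp.T @ g / len(y)               # dL/dw
    grad_x = np.outer(g, w) / len(y)         # dL/dX (row i is g_i * w / n)
    return loss, grad_w, grad_x

for _ in range(50):
    # Inner problem: train the model a few steps on the poisoned data.
    for _ in range(5):
        _, gw, _ = loss_grad(X + delta, y, w)
        w -= 0.5 * gw
    # Outer problem: update the noise to *decrease* the training loss
    # (min-min), making the perturbed data trivially easy to fit.
    _, _, gx = loss_grad(X + delta, y, w)
    delta = np.clip(delta - 1.0 * gx, -eps, eps)

loss_clean, _, _ = loss_grad(X, y, w)
loss_poison, _, _ = loss_grad(X + delta, y, w)
```

After the alternating updates, the poisoned data yields a lower training loss than the clean data under the same model, which is the shortcut signal that error-minimizing attacks exploit.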
Related papers
- Exploiting the Data Gap: Utilizing Non-ignorable Missingness to Manipulate Model Learning [13.797822374912773]
Adversarial Missingness (AM) attacks are motivated by maliciously engineering non-ignorable missingness mechanisms.
In this work we focus on associational learning in the context of AM attacks.
We formulate the learning of the adversarial missingness mechanism as a bi-level optimization.
arXiv Detail & Related papers (2024-09-06T17:10:28Z)
- Nonlinear Transformations Against Unlearnable Datasets [4.876873339297269]
Automated scraping stands out as a common method for collecting data in deep learning models without the authorization of data owners.
Recent studies have begun to tackle the privacy concerns associated with this data collection method.
The data generated by those approaches, called "unlearnable" examples, are prevented from being learned by deep learning models.
arXiv Detail & Related papers (2024-06-05T03:00:47Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Provably Unlearnable Data Examples [27.24152626809928]
Efforts have been undertaken to render shared data unlearnable for unauthorized models in the wild.
We propose a mechanism for certifying the so-called $(q, \eta)$-Learnability of an unlearnable dataset.
A lower certified $(q, \eta)$-Learnability indicates more robust and effective protection over the dataset.
arXiv Detail & Related papers (2024-05-06T09:48:47Z)
- Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
arXiv Detail & Related papers (2023-03-21T09:07:15Z) - Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples [128.25509832644025]
There is a growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet.
UEs are training samples augmented with invisible but unlearnable noise, which has been found to prevent unauthorized training of machine learning models.
We present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations.
arXiv Detail & Related papers (2022-12-31T04:26:25Z) - Effective Targeted Attacks for Adversarial Self-Supervised Learning [58.14233572578723]
Unsupervised adversarial training (AT) has been highlighted as a means of achieving robustness in models without any label information.
We propose a novel positive-mining technique for targeted adversarial attacks that generates effective adversaries for adversarial SSL frameworks.
Our method demonstrates significant enhancements in robustness when applied to non-contrastive SSL frameworks, and less but consistent robustness improvements with contrastive SSL frameworks.
arXiv Detail & Related papers (2022-10-19T11:43:39Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
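The instance-level attack described in the last entry can be illustrated in a similar toy setting: perturb an input within a small budget so that its embedding drifts away from its augmented positive view relative to other instances, i.e., ascend an InfoNCE-style contrastive loss. Everything below (the linear encoder, temperature, finite-difference PGD) is a hypothetical minimal sketch, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "encoder" with L2-normalized outputs.
W = rng.normal(size=(8, 4)) / np.sqrt(8)

def embed(x):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

x = rng.normal(size=(1, 8))                   # anchor sample
x_pos = x + 0.01 * rng.normal(size=(1, 8))    # augmented "positive" view
negs = rng.normal(size=(5, 8))                # other instances (negatives)

def info_nce(x_in, tau=0.5):
    """InfoNCE loss of x_in against its positive view and the negatives."""
    z, zp, zn = embed(x_in), embed(x_pos), embed(negs)
    sims = np.concatenate([(z @ zp.T).ravel(), (z @ zn.T).ravel()]) / tau
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

# PGD-style ascent on the contrastive loss, with finite-difference gradients
# so the sketch needs no autodiff library.
x_adv, eps, step = x.copy(), 0.3, 0.05
for _ in range(20):
    g = np.zeros_like(x_adv)
    for j in range(x_adv.shape[1]):
        d = np.zeros_like(x_adv)
        d[0, j] = 1e-4
        g[0, j] = (info_nce(x_adv + d) - info_nce(x_adv - d)) / 2e-4
    # Ascend the loss, staying inside the L_inf ball around the clean input.
    x_adv = np.clip(x_adv + step * np.sign(g), x - eps, x + eps)
```

The resulting `x_adv` has a higher contrastive loss than the clean input: its embedding no longer lines up with its positive view, which is the instance-identity confusion the attack aims for.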
This list is automatically generated from the titles and abstracts of the papers on this site.