Phantom: Untargeted Poisoning Attacks on Semi-Supervised Learning (Full Version)
- URL: http://arxiv.org/abs/2409.01470v1
- Date: Mon, 2 Sep 2024 21:29:05 GMT
- Title: Phantom: Untargeted Poisoning Attacks on Semi-Supervised Learning (Full Version)
- Authors: Jonathan Knauer, Phillip Rieger, Hossein Fereidooni, Ahmad-Reza Sadeghi
- Abstract summary: We introduce Phantom, the first untargeted poisoning attack in Semi-Supervised Learning (SSL).
Our approach requires only adding a few manipulated samples, for example by posting images on social networks, without the need to control the victim.
Our findings demonstrate the threat of poisoning user-generated content platforms, rendering them unsuitable for SSL in specific tasks.
- Score: 19.99079192245235
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Neural Networks (DNNs) can handle increasingly complex tasks, though they require rapidly expanding training datasets. Collecting data from platforms with user-generated content, such as social networks, has significantly eased the acquisition of large datasets for training DNNs. Despite these advancements, the manual labeling process remains a substantial challenge in terms of both time and cost. In response, Semi-Supervised Learning (SSL) approaches have emerged, where only a small fraction of the dataset needs to be labeled, leaving the majority unlabeled. However, leveraging data from untrusted sources like social networks also creates new security risks, as potential attackers can easily inject manipulated samples. Previous research on the security of SSL primarily focused on injecting backdoors into trained models, while less attention was given to the more challenging untargeted poisoning attacks. In this paper, we introduce Phantom, the first untargeted poisoning attack in SSL that disrupts the training process by injecting a small number of manipulated images into the unlabeled dataset. Unlike existing attacks, our approach requires only adding a few manipulated samples, such as posting images on social networks, without the need to control the victim. Phantom causes SSL algorithms to overlook the actual images' pixels and to rely only on maliciously crafted patterns that Phantom superimposes on the real images. We show Phantom's effectiveness for 6 different datasets and 3 real-world social-media platforms (Facebook, Instagram, Pinterest). Already small fractions of manipulated samples (e.g., 5%) reduce the accuracy of the resulting model by 10%, with higher percentages leading to a performance comparable to a naive classifier. Our findings demonstrate the threat of poisoning user-generated content platforms, rendering them unsuitable for SSL in specific tasks.
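To make the attack surface concrete, below is a minimal NumPy sketch of the mechanism the abstract describes: blending a crafted pattern into a small fraction of the unlabeled pool. This is a hypothetical reconstruction, not the paper's actual pattern-generation procedure; the function names, the blending weight `alpha`, and the checkerboard stand-in pattern are illustrative assumptions.

```python
import numpy as np

def superimpose_pattern(images, pattern, alpha=0.1):
    """Blend a crafted pattern onto clean images.

    Sketch of the high-level idea: the adversary overlays a
    low-visibility pattern so the SSL learner latches onto the
    pattern rather than the genuine image content. The blending
    weight `alpha` is an illustrative assumption.
    """
    images = np.asarray(images, dtype=np.float32)
    return np.clip((1.0 - alpha) * images + alpha * pattern, 0.0, 1.0)

def poison_unlabeled_set(unlabeled, pattern, fraction=0.05, seed=0):
    """Replace a small fraction (e.g., 5%) of the unlabeled pool with
    pattern-superimposed copies, mimicking an attacker who can only
    add or modify unlabeled samples, not control the victim."""
    rng = np.random.default_rng(seed)
    poisoned = np.asarray(unlabeled, dtype=np.float32).copy()
    n_poison = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_poison, replace=False)
    poisoned[idx] = superimpose_pattern(poisoned[idx], pattern)
    return poisoned, idx

if __name__ == "__main__":
    # Toy demo: 100 random 32x32 RGB "images" in [0, 1]; a checkerboard
    # stands in for the attacker's crafted signal.
    rng = np.random.default_rng(0)
    clean = rng.random((100, 32, 32, 3), dtype=np.float32)
    pattern = (np.indices((32, 32)).sum(axis=0) % 2)[..., None].astype(np.float32)
    poisoned, idx = poison_unlabeled_set(clean, pattern, fraction=0.05)
    print(f"poisoned {len(idx)} of {len(clean)} unlabeled samples")
```

In the real attack, the crafted patterns are designed so that the SSL algorithm relies on them rather than on the images' actual pixels; the uniform checkerboard here merely illustrates the injection pipeline.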
Related papers
- How to Craft Backdoors with Unlabeled Data Alone? [54.47006163160948]
Self-supervised learning (SSL) can learn rich features in an economical and scalable way.
If the released dataset is maliciously poisoned, backdoored SSL models can behave badly when triggers are injected to test samples.
We propose two strategies for poison selection: clustering-based selection using pseudolabels, and contrastive selection derived from the mutual information principle.
arXiv Detail & Related papers (2024-04-10T02:54:18Z)
- Self-supervised learning for skin cancer diagnosis with limited training data [0.196629787330046]
Self-supervised learning (SSL) is an alternative to the standard supervised pre-training on ImageNet for scenarios with limited training data.
We consider further SSL pre-training on task-specific datasets, where our implementation is motivated by supervised transfer learning.
We find minimal further SSL pre-training on task-specific data can be as effective as large-scale SSL pre-training on ImageNet for medical image classification tasks with limited labelled data.
arXiv Detail & Related papers (2024-01-01T08:11:38Z)
- Erasing Self-Supervised Learning Backdoor by Cluster Activation Masking [65.44477004525231]
Researchers have recently found that Self-Supervised Learning (SSL) is vulnerable to backdoor attacks.
In this paper, we propose to erase the SSL backdoor by cluster activation masking and propose a novel PoisonCAM method.
Our method achieves 96% accuracy for backdoor trigger detection, compared to 3% for the state-of-the-art method, on poisoned ImageNet-100.
arXiv Detail & Related papers (2023-12-13T08:01:15Z)
- Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems [16.13790238416691]
In white-box environments, interpretable deep learning systems (IDLSes) have been shown to be vulnerable to malicious manipulations.
We propose a query-efficient score-based black-box attack against IDLSes, QuScore, which requires no knowledge of the target model or its coupled interpretation model.
arXiv Detail & Related papers (2023-07-13T00:08:52Z)
- ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample [16.460288815336902]
ESTAS achieves > 99% attack success rate (ASR) with one target-class sample.
Compared to prior works, ESTAS attains > 30% ASR increase and > 8.3% accuracy improvement on average.
arXiv Detail & Related papers (2022-11-20T08:58:34Z)
- DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective NAS approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Poisoning and Backdooring Contrastive Learning [26.093821359987224]
Contrastive learning methods like CLIP train on noisy and uncurated datasets.
We show that this practice makes backdoor and poisoning attacks a significant threat.
arXiv Detail & Related papers (2021-06-17T17:20:45Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)