Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples
- URL: http://arxiv.org/abs/2409.17568v1
- Date: Thu, 26 Sep 2024 06:31:31 GMT
- Title: Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples
- Authors: Yujiang Liu, Wenjian Luo, Zhijian Chen, Muhammad Luqman Naseem
- Abstract summary: We introduce a novel type of attack, termed "Showing Many Labels".
Under "Showing Many Labels", iterative attacks perform significantly better than one-step attacks.
It is possible to show all labels in the dataset.
- Score: 1.7736843172485701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have developed rapidly and have been applied in numerous fields. However, research indicates that DNNs are susceptible to adversarial examples, and this is equally true in the multi-label domain. To further investigate multi-label adversarial examples, we introduce a novel type of attack, termed "Showing Many Labels". The objective of this attack is to maximize the number of labels included in the classifier's prediction results. In our experiments, we select nine attack algorithms and evaluate their performance under "Showing Many Labels". Eight of the algorithms are adapted from the multi-class setting to the multi-label setting, while the remaining one is designed specifically for the multi-label setting. We choose ML-LIW and ML-GCN as target models and train them on four popular multi-label datasets: VOC2007, VOC2012, NUS-WIDE, and COCO. We record the success rate of each algorithm when it shows the expected number of labels in eight different scenarios. Experimental results indicate that under "Showing Many Labels", iterative attacks perform significantly better than one-step attacks. Moreover, it is possible to show all labels in the dataset.
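The attack objective above can be illustrated with a minimal sketch: an iterative, PGD-style signed-gradient attack that perturbs an input so that as many labels as possible score above the decision threshold. The toy linear classifier, dimensions, and hyper-parameters below are all hypothetical; the paper itself evaluates nine attack algorithms against ML-LIW and ML-GCN, not this model.

```python
# Illustrative sketch (not the paper's exact method): an iterative
# "Showing Many Labels" attack on a toy linear multi-label classifier
# sigmoid(W @ x + b). All sizes and hyper-parameters are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def showing_many_labels_attack(W, b, x, eps=0.5, alpha=0.05, steps=50):
    """Perturb x within an L-inf ball of radius eps, trying to push
    every label's score above the decision threshold."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(W @ x_adv + b)
        # Loss pushes every label toward "positive": L = -sum_k log p_k.
        # Its gradient with respect to x is -W.T @ (1 - p).
        grad = -W.T @ (1.0 - p)
        x_adv = x_adv - alpha * np.sign(grad)      # signed-gradient descent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
    return x_adv

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))   # 6 labels, 4 features (toy sizes)
b = rng.normal(size=6)
x = rng.normal(size=4)

before = int((sigmoid(W @ x + b) > 0.5).sum())
x_adv = showing_many_labels_attack(W, b, x)
after = int((sigmoid(W @ x_adv + b) > 0.5).sum())
print(before, after)  # labels shown before vs. after the attack
```

A one-step variant would take a single large signed-gradient step (as in FGSM); the paper's finding that iterative attacks perform significantly better corresponds to running many small projected steps as above.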
Related papers
- Multi-Label Feature Selection Using Adaptive and Transformed Relevance [0.0]
This paper presents a novel information-theoretic, filter-based multi-label feature selection method, called ATR (Adaptive and Transformed Relevance), with a new relevance function.
ATR ranks features by considering both individual labels and the discriminative power of the abstract label space.
Our experiments affirm the scalability of ATR for benchmarks characterized by extensive feature and label spaces.
arXiv Detail & Related papers (2023-09-26T09:01:38Z)
- Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, an object of interest can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm, assigning a bag's augmented candidate label set to each of its instances and aggregating bag-level labels from instance-level labels.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning.
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
- Understanding Label Bias in Single Positive Multi-Label Learning [20.09309971112425]
It is possible to train effective multi-label classifiers using only one positive label per image.
Standard benchmarks for SPML are derived from traditional multi-label classification datasets.
This work introduces protocols for studying label bias in SPML and provides new empirical results.
arXiv Detail & Related papers (2023-05-24T21:41:08Z)
- Deep Partial Multi-Label Learning with Graph Disambiguation [27.908565535292723]
We propose a novel deep Partial multi-Label model with grAph-disambIguatioN (PLAIN).
Specifically, we introduce the instance-level and label-level similarities to recover label confidences.
At each training epoch, labels are propagated on the instance and label graphs to produce relatively accurate pseudo-labels.
arXiv Detail & Related papers (2023-05-10T04:02:08Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training.
arXiv Detail & Related papers (2022-12-08T05:53:53Z)
- MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples [67.0982378001551]
MultiGuard is the first provably robust defense against adversarial examples for multi-label classification.
Our major theoretical contribution is showing that a certain number of an input's ground-truth labels are provably among the labels predicted by MultiGuard.
arXiv Detail & Related papers (2022-10-03T17:50:57Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method named Single-positive MultI-label learning with Label Enhancement is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- CLS: Cross Labeling Supervision for Semi-Supervised Learning [9.929229055862491]
Cross Labeling Supervision (CLS) is a framework that generalizes the typical pseudo-labeling process.
CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning.
arXiv Detail & Related papers (2022-02-17T08:09:40Z)
- Integrating Unsupervised Clustering and Label-specific Oversampling to Tackle Imbalanced Multi-label Data [13.888344214818733]
Clustering is performed to identify the key distinct and locally connected regions of a multi-label dataset.
Only the minority points within a cluster are used to generate the synthetic minority points used for oversampling.
Experiments using 12 multi-label datasets and several multi-label algorithms show that the proposed method performs very well.
arXiv Detail & Related papers (2021-09-25T19:00:00Z)
- A Study on the Autoregressive and non-Autoregressive Multi-label Learning [77.11075863067131]
We propose a self-attention-based variational encoder model to extract the label-label and label-feature dependencies jointly.
Our model can therefore predict all labels in parallel while still capturing both label-label and label-feature dependencies.
arXiv Detail & Related papers (2020-12-03T05:41:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.