T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning
- URL: http://arxiv.org/abs/2108.00146v1
- Date: Sat, 31 Jul 2021 04:38:19 GMT
- Title: T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning
- Authors: Shu Hu, Lipeng Ke, Xin Wang, Siwei Lyu
- Abstract summary: We develop methods to create adversarial perturbations that can be used to attack top-$k$ multi-label learning-based image annotation systems.
Our methods reduce the performance of state-of-the-art top-$k$ multi-label learning methods under both untargeted and targeted attacks.
- Score: 36.33146863659193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Top-$k$ multi-label learning, which returns the top-$k$ predicted labels from
an input, has many practical applications such as image annotation, document
analysis, and web search engines. However, the vulnerabilities of such
algorithms with regard to dedicated adversarial perturbation attacks have not
been extensively studied. In this work, we develop methods to create
adversarial perturbations that can be used to attack top-$k$ multi-label
learning-based image annotation systems (TkML-AP). Our methods explicitly
consider the top-$k$ ranking relation and are based on novel loss functions.
Experimental evaluations on large-scale benchmark datasets including PASCAL VOC
and MS COCO demonstrate the effectiveness of our methods in reducing the
performance of state-of-the-art top-$k$ multi-label learning methods, under
both untargeted and targeted attacks.
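To make the top-$k$ ranking relation concrete, here is a minimal NumPy sketch of an untargeted top-$k$ attack objective in the spirit described above: a hinge-style loss that reaches zero exactly when every ground-truth label has been pushed out of the top-$k$ predictions. The function name and the specific margin formulation are illustrative assumptions, not the paper's exact TkML-AP loss.

```python
import numpy as np

def topk_untargeted_loss(scores, gt_labels, k):
    """Hinge-style surrogate for an untargeted top-k attack (illustrative
    sketch only; the paper's actual loss may differ in form).

    The loss is zero once every ground-truth label scores below the k-th
    largest non-ground-truth score, i.e. no ground-truth label remains in
    the top-k prediction.
    """
    scores = np.asarray(scores, dtype=float)
    non_gt = np.setdiff1d(np.arange(scores.size), gt_labels)
    # k-th largest score among non-ground-truth labels: the boundary a
    # ground-truth label must fall below to leave the top-k.
    kth_non_gt = np.sort(scores[non_gt])[-k]
    # Sum of positive margins by which ground-truth labels still exceed it.
    margins = scores[gt_labels] - kth_non_gt
    return float(np.sum(np.maximum(margins, 0.0)))
```

An attacker would minimize this loss over a norm-bounded perturbation of the input; once the loss hits zero, the top-$k$ prediction contains no ground-truth labels, which is the untargeted attack goal.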
Related papers
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Inconsistent Few-Shot Relation Classification via Cross-Attentional Prototype Networks with Contrastive Learning [16.128652726698522]
We propose Prototype Network-based cross-attention contrastive learning (ProtoCACL) to capture the rich mutual interactions between the support set and query set.
Experimental results demonstrate that our ProtoCACL can outperform the state-of-the-art baseline model under both inconsistent $K$ and inconsistent $N$ settings.
arXiv Detail & Related papers (2021-10-13T07:47:13Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at learning multiclass scoring functions by optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
- Effective Evaluation of Deep Active Learning on Image Classification Tasks [10.27095298129151]
We present a unified re-implementation of state-of-the-art active learning algorithms in the context of image classification.
On the positive side, we show that AL techniques are 2x to 4x more label-efficient compared to RS with the use of data augmentation.
arXiv Detail & Related papers (2021-06-16T23:29:39Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impact of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Active Learning Under Malicious Mislabeling and Poisoning Attacks [2.4660652494309936]
Deep neural networks usually require large labeled datasets for training.
Most of the available data are unlabeled and vulnerable to data poisoning attacks.
In this paper, we develop an efficient active learning method that requires fewer labeled instances.
arXiv Detail & Related papers (2021-01-01T03:43:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.