When Measures are Unreliable: Imperceptible Adversarial Perturbations
toward Top-$k$ Multi-Label Learning
- URL: http://arxiv.org/abs/2309.00007v2
- Date: Tue, 5 Sep 2023 14:04:14 GMT
- Title: When Measures are Unreliable: Imperceptible Adversarial Perturbations
toward Top-$k$ Multi-Label Learning
- Authors: Yuchen Sun, Qianqian Xu, Zitai Wang, and Qingming Huang
- Abstract summary: A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking top-$k$ multi-label systems.
- Score: 83.8758881342346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the great success of deep neural networks, adversarial learning has
received widespread attention in various studies, ranging from multi-class
learning to multi-label learning. However, existing adversarial attacks toward
multi-label learning only pursue the traditional visual imperceptibility but
ignore the new perceptibility problem arising from measures such as Precision@$k$
and mAP@$k$. Specifically, when a well-trained multi-label classifier performs
far below expectation on some samples, the victim can easily realize that
this performance degradation stems from an attack rather than from the model itself.
Therefore, an ideal multi-label adversarial attack should manage not only to
deceive visual perception but also to evade the monitoring of such measures. To
this end,
this paper first proposes the concept of measure imperceptibility. Then, a
novel loss function is devised to generate such adversarial perturbations that
could achieve both visual and measure imperceptibility. Furthermore, an
efficient algorithm with a convex objective is established to optimize this
loss. Finally, extensive experiments on large-scale
benchmark datasets, such as PASCAL VOC 2012, MS COCO, and NUS WIDE, demonstrate
the superiority of our proposed method in attacking the top-$k$ multi-label
systems.
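
The measures named in the abstract, Precision@$k$ and mAP@$k$, are exactly what a monitoring system would watch. Below is a minimal sketch, assuming the standard definitions of these top-$k$ measures (the precise variants used in the paper may differ, and the toy scores and function names are illustrative, not from the paper). It shows why a conventional attack that simply pushes ground-truth labels out of the top-$k$ is easy to notice: the measures collapse on the attacked sample.

```python
import numpy as np

def precision_at_k(scores, labels, k):
    """Precision@k: fraction of the top-k predicted labels that are relevant.

    scores: (n_classes,) predicted confidence per label
    labels: (n_classes,) binary ground-truth vector
    """
    topk = np.argsort(scores)[::-1][:k]          # indices of the k highest scores
    return labels[topk].sum() / k

def average_precision_at_k(scores, labels, k):
    """AP@k: precision averaged over the ranks of relevant labels in the top-k.

    One common variant; exact definitions of mAP@k differ across papers.
    """
    topk = np.argsort(scores)[::-1][:k]
    hits, ap = 0, 0.0
    for rank, cls in enumerate(topk, start=1):
        if labels[cls] == 1:
            hits += 1
            ap += hits / rank                    # precision at this rank
    return ap / max(min(k, int(labels.sum())), 1)

# Toy example: a clean prediction vs. a naively attacked one.
labels   = np.array([1, 1, 0, 0, 1, 0])               # ground-truth labels
clean    = np.array([0.9, 0.8, 0.1, 0.2, 0.7, 0.3])
attacked = np.array([0.1, 0.2, 0.9, 0.8, 0.1, 0.7])   # true labels pushed out of the top-3

k = 3
print(precision_at_k(clean, labels, k))           # 1.0 -> looks normal to monitoring
print(precision_at_k(attacked, labels, k))        # 0.0 -> sudden drop reveals the attack
print(average_precision_at_k(clean, labels, k))   # 1.0
print(average_precision_at_k(attacked, labels, k))# 0.0
```

In contrast, a measure-imperceptible perturbation must leave such statistics largely unchanged, so that the degradation cannot be noticed from monitoring these measures alone.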
Related papers
- Learning Transferable Adversarial Robust Representations via Multi-view
Consistency [57.73073964318167]
We propose a novel meta-adversarial multi-view representation learning framework with dual encoders.
We demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains.
arXiv Detail & Related papers (2022-10-19T11:48:01Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers
Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning [36.33146863659193]
We develop methods to create adversarial perturbations that can be used to attack top-$k$ multi-label learning-based image annotation systems.
Our methods reduce the performance of state-of-the-art top-$k$ multi-label learning methods under both untargeted and targeted attacks.
arXiv Detail & Related papers (2021-07-31T04:38:19Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- An Effective Baseline for Robustness to Distributional Shift [5.627346969563955]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems.
We present a simple but highly effective approach to out-of-distribution detection that uses the principle of abstention.
arXiv Detail & Related papers (2021-05-15T00:46:11Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and
Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.