MultiGuard: Provably Robust Multi-label Classification against
Adversarial Examples
- URL: http://arxiv.org/abs/2210.01111v1
- Date: Mon, 3 Oct 2022 17:50:57 GMT
- Authors: Jinyuan Jia, Wenjie Qu, and Neil Zhenqiang Gong
- Abstract summary: MultiGuard is the first provably robust defense for multi-label classification against adversarial examples. Our major theoretical contribution is to show that a certain number of an input's ground-truth labels are provably contained in the set of labels predicted by MultiGuard.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-label classification, which predicts a set of labels for an input, has many applications. However, multiple recent studies have shown that multi-label classification is vulnerable to adversarial examples. In particular, an attacker can manipulate the labels predicted by a multi-label classifier for an input by adding a carefully crafted, human-imperceptible perturbation to it. Existing provable defenses for multi-class classification achieve sub-optimal provable robustness guarantees when generalized to multi-label classification. In this work, we propose MultiGuard, the first provably robust defense for multi-label classification against adversarial examples. MultiGuard leverages randomized smoothing, the state-of-the-art technique for building provably robust classifiers. Specifically, given an arbitrary multi-label classifier, MultiGuard builds a smoothed multi-label classifier by adding random noise to the input; we consider isotropic Gaussian noise in this work. Our major theoretical contribution is to show that a certain number of an input's ground-truth labels are provably contained in the set of labels predicted by MultiGuard when the $\ell_2$-norm of the adversarial perturbation added to the input is bounded. Moreover, we design an algorithm to compute our provable robustness guarantees. Empirically, we evaluate MultiGuard on the VOC 2007, MS-COCO, and NUS-WIDE benchmark datasets. Our code is available at:
\url{https://github.com/quwenjie/MultiGuard}
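For intuition, here is a minimal Monte Carlo sketch of the smoothing step the abstract describes: sample noisy copies of the input, let the base multi-label classifier vote on each copy, and return the k most frequently predicted labels. The function names, defaults, and plain top-k voting rule are illustrative assumptions, not the paper's exact certification algorithm (see the linked repository for that).

```python
import numpy as np

def smoothed_multilabel_predict(base_predict, x, k, sigma=0.5, n_samples=1000, seed=0):
    """Monte Carlo sketch of a randomized-smoothing multi-label classifier.

    Assumes `base_predict(x)` returns an iterable of predicted label indices
    for a single input `x` (a NumPy array). Illustrative only; MultiGuard's
    certification additionally derives probabilistic bounds on these counts.
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        # Isotropic Gaussian noise, as considered in the paper.
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        for label in base_predict(noisy):
            counts[label] = counts.get(label, 0) + 1
    # The smoothed classifier outputs the k labels most often predicted under noise.
    return sorted(counts, key=counts.get, reverse=True)[:k]
```

Certification then amounts to showing that, for a bounded $\ell_2$ perturbation, the noisy-vote probabilities of the ground-truth labels cannot drop enough to push them out of the top-k set; the paper's algorithm computes such guarantees from confidence bounds on these label probabilities.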
Related papers
- Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples [1.7736843172485701]
We introduce a novel type of attack, termed "Showing Many Labels".
Under "Showing Many Labels", iterative attacks perform significantly better than one-step attacks.
The attack can even force the classifier to predict every label in the dataset.
arXiv Detail & Related papers (2024-09-26T06:31:31Z)
- UniDEC: Unified Dual Encoder and Classifier Training for Extreme Multi-Label Classification [42.36546066941635]
Extreme Multi-label Classification (XMC) involves predicting a subset of relevant labels from an extremely large label space.
This work proposes UniDEC, a novel end-to-end trainable framework that trains the dual encoder and classifier together.
arXiv Detail & Related papers (2024-05-04T17:27:51Z)
- Adopting the Multi-answer Questioning Task with an Auxiliary Metric for Extreme Multi-label Text Classification Utilizing the Label Hierarchy [10.87653109398961]
This paper adopts the multi-answer questioning task for extreme multi-label classification.
The study applies the proposed method and evaluation metric to the legal domain.
arXiv Detail & Related papers (2023-03-02T08:40:31Z)
- Large Loss Matters in Weakly Supervised Multi-Label Classification [50.262533546999045]
We first regard unobserved labels as negative labels, casting the weakly supervised task into noisy multi-label classification.
We propose novel methods that reject or correct large-loss samples to prevent the model from memorizing noisy labels.
Our methodology works well in practice, validating that treating large losses properly matters in weakly supervised multi-label classification.
arXiv Detail & Related papers (2022-06-08T08:30:24Z)
- Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive, accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach, called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z)
- Interaction Matching for Long-Tail Multi-Label Classification [57.262792333593644]
We present an elegant and effective approach for addressing limitations in existing multi-label classification models.
By performing soft n-gram interaction matching, we match labels with natural language descriptions.
arXiv Detail & Related papers (2020-05-18T15:27:55Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
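Both MultiGuard and this last entry build on randomized smoothing. For background, the classic single-label $\ell_2$ certificate of Cohen et al. (2019), against which multi-label generalizations such as MultiGuard are naturally compared, guarantees that the smoothed prediction cannot change while

$$\|\delta\|_2 < \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right),$$

where $\sigma$ is the standard deviation of the Gaussian noise, $\Phi^{-1}$ is the standard Gaussian inverse CDF, $\underline{p_A}$ lower-bounds the probability that the base classifier predicts the top label under noise, and $\overline{p_B}$ upper-bounds the probability of the runner-up label. MultiGuard's theorem plays the analogous role in the multi-label setting, bounding how many ground-truth labels must remain in the predicted set.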
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information shown and is not responsible for any consequences of its use.