A Majority Invariant Approach to Patch Robustness Certification for Deep
Learning Models
- URL: http://arxiv.org/abs/2308.00452v2
- Date: Thu, 7 Sep 2023 12:22:28 GMT
- Title: A Majority Invariant Approach to Patch Robustness Certification for Deep
Learning Models
- Authors: Qilin Zhou, Zhengyuan Wei, Haipeng Wang, and W.K. Chan
- Abstract summary: MajorCert finds all possible label sets manipulatable by the same patch region on the same sample.
It enumerates their combinations element-wise, and then checks whether the majority invariant of all these combinations is intact to certify samples.
- Score: 2.6499018693213316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patch robustness certification ensures no patch within a given bound on a
sample can manipulate a deep learning model to predict a different label.
However, existing techniques cannot certify samples that fail to meet their
strict bars at the classifier or patch-region level. This paper proposes
MajorCert. MajorCert first finds all possible label sets manipulatable by the
same patch region on the same sample across the underlying classifiers, then
enumerates their combinations element-wise, and finally checks whether the
majority invariant of all these combinations is intact to certify samples.
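The final check described in the abstract is simple enough to sketch. Below is a minimal Python illustration, assuming the per-classifier manipulatable label sets for the patch region have already been computed; the function name, the brute-force product enumeration, and the strict no-tie rule are assumptions made for illustration, not MajorCert's actual implementation.

```python
from collections import Counter
from itertools import product

def certify_majority_invariant(label_sets, predicted_label):
    """Check the majority invariant over all element-wise combinations.

    label_sets: one set per underlying classifier, holding every label
    that some patch in the given region could make that classifier
    output on this sample.  The sample is certified only if every
    combination still elects predicted_label as the strict majority.
    """
    for combo in product(*label_sets):  # element-wise enumeration
        counts = Counter(combo)
        top_label, top_count = counts.most_common(1)[0]
        # The invariant breaks if another label wins or ties the vote.
        if top_label != predicted_label or \
                sum(c == top_count for c in counts.values()) > 1:
            return False  # some in-bound patch could flip the vote
    return True  # certified: no in-bound patch changes the prediction

# Toy usage: three voting classifiers; a patch can only sway the second.
print(certify_majority_invariant([{0}, {0, 1}, {0}], predicted_label=0))     # True
print(certify_majority_invariant([{0, 1}, {0, 1}, {0}], predicted_label=0))  # False
```

The enumeration is exponential in the number of underlying classifiers, but the point of the approach is visible here: certification is decided over whole combinations, rather than demanding that each classifier or patch region individually clear a strict bar.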
Related papers
- Scalable and Precise Patch Robustness Certification for Deep Learning Models with Top-k Predictions [2.6499018693213316]
Patch robustness certification is an emerging verification approach for defending against adversarial patch attacks.
We propose CostCert, a voting-based certified recovery defender.
We show that CostCert significantly outperforms the current state-of-the-art defender PatchGuard.
arXiv Detail & Related papers (2025-07-31T08:31:59Z)
- AllMatch: Exploiting All Unlabeled Data for Semi-Supervised Learning [5.0823084858349485]
We present a novel SSL algorithm named AllMatch, which achieves improved pseudo-label accuracy and a 100% utilization ratio for the unlabeled data.
The results demonstrate that AllMatch consistently outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-06-22T06:59:52Z)
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations (a minimal sketch of this mechanism follows the entry).
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
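The ShrinkMatch mechanism referenced above can be sketched in a few lines: shrink an uncertain sample's class space around its top-1 class, then enforce weak-to-strong consistency inside that space. The shrinking rule below (drop the strongest competitors until the renormalized top-1 confidence clears a threshold tau) and all names are assumptions read off the summary, not the paper's exact algorithm.

```python
import numpy as np

def shrink_class_space(p_weak, tau=0.9):
    """Drop the strongest competitor classes until the top-1 class is
    confident in the renormalized space (assumed shrinking rule)."""
    keep = list(np.argsort(-p_weak))           # classes by falling confidence
    top1 = keep[0]
    while len(keep) > 1 and p_weak[top1] / p_weak[keep].sum() < tau:
        keep.pop(1)                            # remove strongest competitor
    return np.array(keep)

def shrunk_consistency_loss(p_weak, p_strong, tau=0.9):
    """Cross-entropy between the weak view's top-1 pseudo-label and the
    strong view's prediction, renormalized over the shrunk class space."""
    keep = shrink_class_space(p_weak, tau)
    q = p_strong[keep] / p_strong[keep].sum()  # strong view in shrunk space
    return -np.log(q[0] + 1e-12)               # keep[0] is the top-1 class

# Toy usage: one uncertain 4-class sample.
p_w = np.array([0.60, 0.25, 0.10, 0.05])       # weakly augmented view
p_s = np.array([0.50, 0.30, 0.12, 0.08])       # strongly augmented view
print(shrunk_consistency_loss(p_w, p_s))       # loss over shrunk space {0, 3}
```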
- Localized Randomized Smoothing for Collective Robustness Certification [60.83383487495282]
We propose a more general collective robustness certificate for all types of models.
We show that this approach is beneficial for the larger class of softly local models.
The certificate is based on our novel localized randomized smoothing approach.
arXiv Detail & Related papers (2022-10-28T14:10:24Z)
- Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis [74.85187027051879]
We show that it is possible to approximate the optimal certificate with arbitrary precision, by probing the decision boundary with several noise distributions.
This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating.
arXiv Detail & Related papers (2022-06-03T17:48:54Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to erroneous high-confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo-labeling accuracy by drastically reducing the amount of noise encountered in the training process (a minimal selection sketch follows the entry).
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
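As referenced above, uncertainty-aware pseudo-label selection in the spirit of UPS keeps a pseudo-label only when the prediction is both confident and stable across stochastic forward passes. The variance-based uncertainty measure, the thresholds, and all names in this sketch are illustrative assumptions rather than the paper's exact criteria.

```python
import numpy as np

def select_pseudo_labels(mc_probs, conf_thresh=0.7, unc_thresh=0.1):
    """Keep pseudo-labels that are both confident and low-uncertainty.

    mc_probs: (T, N, K) softmax outputs from T stochastic forward
    passes (e.g. MC dropout) over N unlabeled samples with K classes.
    Returns hard pseudo-labels and a boolean selection mask.
    """
    mean_p = mc_probs.mean(axis=0)               # (N, K) mean prediction
    labels = mean_p.argmax(axis=1)               # hard pseudo-labels
    conf = mean_p.max(axis=1)                    # confidence of top class
    # Uncertainty: spread of the chosen class's probability across passes.
    unc = mc_probs[:, np.arange(len(labels)), labels].std(axis=0)
    return labels, (conf >= conf_thresh) & (unc <= unc_thresh)

# Toy usage: 5 MC passes, 3 unlabeled samples, 2 classes.
rng = np.random.default_rng(0)
mc = rng.dirichlet([5.0, 1.0], size=(5, 3))      # shape (5, 3, 2)
labels, mask = select_pseudo_labels(mc)
print(labels, mask)                              # only stable samples kept
```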
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks (a generic randomized-smoothing sketch follows the entry).
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
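The label-flipping entry above applies randomized smoothing to the training labels rather than the test input. The sketch below conveys only the general recipe (Monte Carlo majority vote of a base training procedure under randomized labels); the flip distribution, the binary-label restriction, and every name here are illustrative assumptions, and a real certificate would also need a statistical bound on the vote, which this sketch omits.

```python
import numpy as np
from collections import Counter

def smoothed_predict(train_and_predict, X, y, x_test,
                     flip_prob=0.05, n_draws=200, seed=0):
    """Monte Carlo randomized smoothing over binary training labels:
    retrain on randomly flipped labels and return the majority vote."""
    rng = np.random.default_rng(seed)
    votes = Counter()
    for _ in range(n_draws):
        flips = rng.random(len(y)) < flip_prob   # labels to flip this draw
        y_noisy = np.where(flips, 1 - y, y)
        votes[train_and_predict(X, y_noisy, x_test)] += 1
    label, count = votes.most_common(1)[0]
    return label, count / n_draws                # majority label, vote share

# Toy base learner: nearest class centroid on binary labels.
def centroid_learner(X, y, x_test):
    if y.min() == y.max():                       # degenerate one-class draw
        return int(y[0])
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return int(np.linalg.norm(x_test - c1) < np.linalg.norm(x_test - c0))

X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(smoothed_predict(centroid_learner, X, y, np.array([0.15])))
```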
- Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models [6.177038245239757]
We propose a novel multi-complementary and unlabeled learning framework.
We first give an unbiased estimator of the classification risk from samples with multiple complementary labels.
We then further improve the estimator by incorporating unlabeled samples into the risk formulation (the classical single-complementary-label building block is sketched after the entry).
arXiv Detail & Related papers (2020-01-13T13:52:54Z)
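For background on the unbiased estimator mentioned in the last entry, the classical single-complementary-label rewrite (which this line of work generalizes to multiple complementary labels and unlabeled data) is straightforward: assuming the complementary label is drawn uniformly from the K-1 non-true classes, the risk satisfies R(f) = E[ sum_k loss(f(X), k) - (K-1) * loss(f(X), Ybar) ]. The code below estimates that quantity empirically; the function name and interface are assumptions, not the paper's API.

```python
import numpy as np

def complementary_risk_estimate(loss_matrix, bar_y):
    """Unbiased empirical risk from single complementary labels.

    Assumes each complementary label bar_y[i] is uniform over the K-1
    non-true classes.  loss_matrix[i, k] is the loss of predicting
    class k for sample i; bar_y is the (n,) complementary-label vector.
    """
    n, K = loss_matrix.shape
    per_sample = (loss_matrix.sum(axis=1)
                  - (K - 1) * loss_matrix[np.arange(n), bar_y])
    return per_sample.mean()  # unbiased estimate of the ordinary risk

# Toy usage: 3 samples, 4 classes, arbitrary nonnegative losses.
rng = np.random.default_rng(1)
losses = rng.uniform(0.1, 2.0, size=(3, 4))
print(complementary_risk_estimate(losses, np.array([1, 3, 0])))
```

Note that this rewrite can produce negative values on finite samples with bounded losses, a known caveat that motivates the corrected estimators explored in this line of work.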
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.