Robustness Evaluation and Adversarial Training of an Instance
Segmentation Model
- URL: http://arxiv.org/abs/2206.02539v1
- Date: Thu, 2 Jun 2022 02:18:09 GMT
- Title: Robustness Evaluation and Adversarial Training of an Instance
Segmentation Model
- Authors: Jacob Bond and Andrew Lingg
- Abstract summary: We show that probabilistic local equivalence is able to successfully distinguish between standardly-trained and adversarially-trained models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To evaluate the robustness of non-classifier models, we propose probabilistic
local equivalence, based on the notion of randomized smoothing, as a way to
quantitatively evaluate the robustness of an arbitrary function. In addition,
to understand the effect of adversarial training on non-classifiers and to
investigate the level of robustness that can be obtained without degrading
performance on the training distribution, we apply Fast is Better than Free
adversarial training together with the TRADES robust loss to the training of an
instance segmentation network. In this direction, we were able to achieve a
symmetric best dice score of 0.85 on the TuSimple lane detection challenge,
outperforming the standardly-trained network's score of 0.82. Additionally, we
were able to obtain an F-measure of 0.49 on manipulated inputs, in contrast to
the standardly-trained network's score of 0. We show that probabilistic local
equivalence is able to successfully distinguish between standardly-trained and
adversarially-trained models, providing another view of the improved robustness
of the adversarially-trained models.
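The abstract does not spell out how probabilistic local equivalence is computed, but its grounding in randomized smoothing suggests a Monte Carlo estimate over Gaussian perturbations of the input. The sketch below is one plausible reading, not the paper's construction: the tolerance-based output comparison, the L2 norm, and all parameter values are assumptions.

```python
# Hypothetical sketch: estimate how often an arbitrary function f keeps its
# output (approximately) unchanged under Gaussian input noise, in the spirit
# of randomized smoothing. Not the paper's exact definition.
import torch

def probabilistic_local_equivalence(f, x, sigma=0.25, n_samples=100, tol=1e-2):
    """Fraction of noisy samples whose output stays within `tol` (L2) of f(x)."""
    with torch.no_grad():
        clean = f(x)
        hits = 0
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)  # Gaussian perturbation
            if torch.norm(f(noisy) - clean) <= tol:  # outputs "locally equivalent"?
                hits += 1
    return hits / n_samples

# Toy usage with a stand-in for a segmentation network:
score = probabilistic_local_equivalence(torch.tanh, torch.zeros(3, 32, 32))
```

A score near 1 indicates the function's output is stable in a neighborhood of x; comparing such scores between a standardly-trained and an adversarially-trained network is the kind of diagnostic the abstract describes.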
Related papers
- MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers [41.56951365163419]
"MixedNUTS" is a training-free method where the output logits of a robust classifier are processed by nonlinear transformations with only three parameters.
MixedNUTS then converts the transformed logits into probabilities and mixes them as the overall output.
On CIFAR-10, CIFAR-100, and ImageNet datasets, experimental results with custom strong adaptive attacks demonstrate MixedNUTS's vastly improved accuracy and near-SOTA robustness.
arXiv Detail & Related papers (2024-02-03T21:12:36Z)
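As a rough illustration of the MixedNUTS recipe summarized above: transform the robust model's logits with a small three-parameter nonlinear map, convert both models' logits to probabilities, and mix them, following the paper's mixed-classifier setup of an accurate base model plus a robust model. The specific power-law transform and the mixing weight below are illustrative placeholders, not the paper's fitted parameters.

```python
# Hedged sketch of MixedNUTS-style mixing. The three-parameter transform
# (s, p, c) and the weight alpha are stand-ins for the paper's values.
import torch
import torch.nn.functional as F

def mixed_nuts_output(logits_accurate, logits_robust, s=1.0, p=1.0, c=0.0, alpha=0.5):
    # Nonlinear, three-parameter transform of the robust classifier's logits.
    transformed = s * torch.sign(logits_robust) * logits_robust.abs().pow(p) + c
    probs_robust = F.softmax(transformed, dim=-1)
    probs_accurate = F.softmax(logits_accurate, dim=-1)
    # Mix the two probability vectors into the overall output.
    return (1 - alpha) * probs_accurate + alpha * probs_robust
```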
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched, large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Outlier Robust Adversarial Training [57.06824365801612]
We introduce Outlier Robust Adversarial Training (ORAT) in this work.
ORAT is based on a bi-level optimization formulation of adversarial training with a robust rank-based loss function.
We show that the learning objective of ORAT satisfies $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate for the adversarial 0/1 loss.
arXiv Detail & Related papers (2023-09-10T21:36:38Z)
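ORAT's full bi-level formulation is beyond a snippet, but the flavor of a rank-based robust loss can be sketched: sort per-sample (adversarial) losses and average a ranked range, excluding the very largest values as suspected outliers. The cut-offs below are hypothetical, not the paper's.

```python
# Hedged sketch of a rank-based loss in ORAT's spirit: average the losses
# ranked from k1 to k2 (descending), dropping the top k1 as outliers.
# Assumes the batch contains more than k2 samples.
import torch

def ranked_range_loss(per_sample_losses, k1=2, k2=10):
    sorted_losses, _ = torch.sort(per_sample_losses, descending=True)
    return sorted_losses[k1:k2].mean()
```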
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
- Learning Sample Reweighting for Accuracy and Adversarial Robustness [15.591611864928659]
We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin.
Our approach consistently improves both clean and robust accuracy compared to related methods and state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-20T18:25:11Z)
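The reweighting paper above learns its sample weights; the sketch below substitutes a fixed heuristic to show the shape of the idea: compute a class-conditioned margin per example and upweight small-margin (hard) examples. The exponential weighting and temperature are assumptions, not the paper's learned scheme.

```python
# Hypothetical margin-based reweighting: small margin -> larger weight.
import torch
import torch.nn.functional as F

def margin_weighted_loss(logits, targets, temperature=1.0):
    correct = logits.gather(1, targets.unsqueeze(1)).squeeze(1)   # true-class logit
    others = logits.clone()
    others.scatter_(1, targets.unsqueeze(1), float("-inf"))       # mask true class
    margin = correct - others.max(dim=1).values                   # per-sample margin
    weights = torch.softmax(-margin / temperature, dim=0)         # hard examples up
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_sample).sum()
```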
- Explicit Tradeoffs between Adversarial and Natural Distributional Robustness [48.44639585732391]
In practice, models need to enjoy both types of robustness to ensure reliability.
In this work, we show that in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z)
- Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain robust models against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z)
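To make the pruning idea above concrete: freeze a randomly-initialized layer, learn a score per weight, and keep only the top-scoring fraction at forward time, with a straight-through estimator so the scores still receive gradients. This mirrors common strong-lottery-ticket implementations; the keep ratio and other details are assumptions relative to the paper.

```python
# Sketch of pruning a frozen, randomly-initialized layer via learned scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrunedRandomLinear(nn.Module):
    def __init__(self, in_features, out_features, keep_ratio=0.5):
        super().__init__()
        w = torch.randn(out_features, in_features)
        self.weight = nn.Parameter(w, requires_grad=False)   # frozen random init
        self.scores = nn.Parameter(torch.randn_like(w))      # learned mask scores
        self.keep_ratio = keep_ratio

    def forward(self, x):
        k = int(self.scores.numel() * self.keep_ratio)
        # Threshold at the k-th largest score; keep only those weights.
        thresh = self.scores.flatten().kthvalue(self.scores.numel() - k + 1).values
        mask = (self.scores >= thresh).float()
        mask = mask + self.scores - self.scores.detach()      # straight-through
        return F.linear(x, self.weight * mask)
```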
- Adversarial Training with Rectified Rejection [114.83821848791206]
We propose to use true confidence (T-Con) as a certainty oracle, and learn to predict T-Con by rectifying confidence.
We prove that under mild conditions, a rectified confidence (R-Con) rejector and a confidence rejector can be coupled to distinguish any wrongly classified input from correctly classified ones.
arXiv Detail & Related papers (2021-05-31T08:24:53Z)
- Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing [16.528628447356496]
In this paper, we incorporate smoothing techniques into federated adversarial training to enable data-private distributed learning.
Our experiments show that such an advanced federated adversarial learning framework can deliver models as robust as those produced by centralized training.
arXiv Detail & Related papers (2021-03-30T02:19:45Z)
- Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness [15.38718018477333]
We derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart.
We also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability.
arXiv Detail & Related papers (2020-02-17T20:54:34Z)
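Several entries above build on the standard randomized-smoothing certificate (Cohen et al., 2019), which the last paper's algorithm tightens. A minimal version follows; note that using the raw empirical top-class frequency, as here, is a simplification where a proper lower confidence bound (e.g. Clopper-Pearson) is required in practice.

```python
# Minimal randomized-smoothing certificate: sample Gaussian perturbations,
# take the empirical top-class probability p_a, and certify an L2 radius
# of sigma * Phi^{-1}(p_a). Simplified: no confidence bound on p_a.
import torch
from scipy.stats import norm

def certified_radius(model, x, sigma=0.25, n=1000, num_classes=10):
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n):
            pred = model(x + sigma * torch.randn_like(x)).argmax(dim=-1)
            counts[pred] += 1
    p_a = counts.max().item() / n          # empirical top-class probability
    if p_a <= 0.5:
        return 0.0                         # abstain: no certificate
    return sigma * norm.ppf(p_a)           # certified L2 radius
```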
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.