Tightening the Approximation Error of Adversarial Risk with Auto Loss
Function Search
- URL: http://arxiv.org/abs/2111.05063v1
- Date: Tue, 9 Nov 2021 11:47:43 GMT
- Title: Tightening the Approximation Error of Adversarial Risk with Auto Loss
Function Search
- Authors: Pengfei Xia, Ziqiang Li, and Bin Li
- Abstract summary: A common type of evaluation is to approximate the adversarial risk of a model as a robustness indicator.
We propose AutoLoss-AR, the first method that searches for loss functions to tighten this error.
The results demonstrate the effectiveness of the proposed methods.
- Score: 12.263913626161155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous studies have demonstrated that deep neural networks are easily
misled by adversarial examples. Effectively evaluating the adversarial
robustness of a model is important for its deployment in practical
applications. Currently, a common type of evaluation is to approximate the
adversarial risk of a model as a robustness indicator by constructing malicious
instances and executing attacks. Unfortunately, there is an error (gap) between
the approximate value and the true value. Previous studies manually design
attack methods to achieve a smaller error, which is inefficient and may miss a
better solution. In this paper, we establish the tightening of the
approximation error as an optimization problem and try to solve it with an
algorithm. More specifically, we first show that replacing the non-convex
and discontinuous 0-1 loss with a surrogate loss, a necessary compromise in
computing the approximation, is one of the main sources of the error. Then
we propose AutoLoss-AR, the first method for searching loss functions for
tightening the approximation error of adversarial risk. Extensive experiments
are conducted in multiple settings. The results demonstrate the effectiveness
of the proposed method: the best-discovered loss functions outperform the
handcrafted baseline by 0.9%-2.9% and 0.7%-2.0% on MNIST and CIFAR-10,
respectively. Besides, we also verify that the searched losses can be
transferred to other settings and explore why they are better than the baseline
by visualizing the local loss landscape.
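The gap the abstract describes can be made concrete with a small sketch. Adversarial risk is defined over the 0-1 loss (did the attack cause a misclassification?), but attacks must maximize a differentiable surrogate instead; cross-entropy is assumed here as a typical handcrafted baseline, not necessarily the paper's exact choice. Two inputs with identical 0-1 loss can receive very different surrogate losses, which is the mismatch AutoLoss-AR targets:

```python
import numpy as np

def zero_one_loss(logits, label):
    # The quantity adversarial risk is actually defined over:
    # 1 if the model misclassifies, 0 otherwise.
    return float(np.argmax(logits) != label)

def cross_entropy_surrogate(logits, label):
    # Differentiable stand-in that attacks maximize in place of the 0-1 loss
    # (numerically stable log-sum-exp formulation).
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

# Both points are classified correctly (0-1 loss is 0 for both), yet the
# surrogate ranks them very differently -- the source of approximation error.
barely_correct = np.array([2.0, 1.9, -3.0])
confidently_correct = np.array([5.0, -1.0, -1.0])
```

An attack guided by the surrogate may therefore spend its perturbation budget on points the 0-1 loss treats identically, which is why the choice of surrogate affects how tight the risk estimate is.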
Related papers
- A Huber loss-based super learner with applications to healthcare
expenditures [0.0]
We propose a super learner based on the Huber loss, a "robust" loss function that combines squared error loss with absolute loss to downweight the influence of outlying observations.
We show that the proposed method can be used both to optimize the Huber risk directly and in finite-sample settings.
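The Huber loss itself (independent of the super-learner construction above) is standard and easy to state: quadratic for small residuals, linear beyond a cutoff `delta`, with the two pieces and their derivatives matching at the cutoff.

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    # Squared-error behaviour for |r| <= delta, absolute-loss (linear)
    # behaviour beyond it, so large outliers are downweighted relative
    # to squared error while the loss stays smooth at the transition.
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))
```

The `0.5 * delta` offset in the linear branch is what makes the two pieces meet with equal value and slope at `|r| = delta`.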
arXiv Detail & Related papers (2022-05-13T19:57:50Z) - Relational Surrogate Loss Learning [41.61184221367546]
This paper revisits surrogate loss learning, where a deep neural network is employed to approximate the evaluation metrics.
In this paper, we show that it suffices to directly maintain the relative ranking of models between surrogate losses and evaluation metrics.
Our method is much easier to optimize and enjoys significant efficiency and performance gains.
arXiv Detail & Related papers (2022-02-26T17:32:57Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
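The ATC recipe can be sketched in a few lines; this is a simplified reading, assuming the threshold is chosen so that the fraction of labeled source points above it matches the source accuracy:

```python
import numpy as np

def learn_threshold(source_conf, source_correct):
    # Choose the confidence threshold t so that the fraction of source
    # points with confidence above t equals the source accuracy.
    accuracy = source_correct.mean()
    return np.quantile(source_conf, 1.0 - accuracy)

def predict_accuracy(target_conf, threshold):
    # Predicted target accuracy: fraction of unlabeled target examples
    # whose confidence exceeds the learned threshold.
    return (target_conf > threshold).mean()
```

By construction, applying the threshold back to the source set reproduces the source accuracy; the method's bet is that the same threshold transfers to the shifted target distribution.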
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Error Controlled Actor-Critic [7.936003142729818]
The approximation error of the value function inevitably causes an overestimation phenomenon and has a negative impact on the convergence of these algorithms.
We propose Error Controlled Actor-Critic, which confines the approximation error of the value function.
arXiv Detail & Related papers (2021-09-06T14:51:20Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Mean-Shifted Contrastive Loss for Anomaly Detection [34.97652735163338]
We propose a new loss function which can overcome failure modes of both center-loss and contrastive-loss methods.
Our improvements yield a new anomaly detection approach, based on the Mean-Shifted Contrastive Loss.
Our method achieves state-of-the-art anomaly detection performance on multiple benchmarks, including 97.5% ROC-AUC.
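The mean-shifting idea can be sketched as a feature transform; this is a simplified reading of the abstract, not the paper's full objective: re-center each L2-normalized feature by the mean of the training features, then renormalize, so the contrastive loss operates on directions relative to the data center rather than to the origin.

```python
import numpy as np

def mean_shift(feats, center):
    # feats: (n, d) L2-normalized features; center: mean training feature.
    # Shift each feature by the center and renormalize; a contrastive loss
    # is then computed on these shifted directions (sketch only).
    shifted = feats - center
    return shifted / np.linalg.norm(shifted, axis=1, keepdims=True)
```

The output rows are unit vectors again, so any angular/contrastive loss applies unchanged in the shifted space.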
arXiv Detail & Related papers (2021-06-07T17:58:03Z) - Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings a large improvement of 8% to 11% in terms of PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z) - Loss Function Discovery for Object Detection via Convergence-Simulation
Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, for speeding up the search process.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors.
arXiv Detail & Related papers (2021-02-09T08:34:52Z) - Second-Moment Loss: A Novel Regression Objective for Improved
Uncertainties [7.766663822644739]
Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice.
We propose a new objective, referred to as the second-moment loss, to address this issue.
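One hypothetical form such an objective could take (the paper's exact formulation may differ): a two-headed regression where the mean head is fit with squared error and a second head is trained so that its square tracks the squared residual, yielding a per-sample uncertainty estimate without Monte Carlo dropout.

```python
import numpy as np

def second_moment_loss(y, mean_pred, sigma_pred):
    # Sketch only: penalize the squared residual (mean fit) plus the gap
    # between sigma_pred**2 and that residual (second-moment fit), so
    # sigma_pred learns to predict the model's own error magnitude.
    residual_sq = (y - mean_pred) ** 2
    return np.mean(residual_sq + (sigma_pred**2 - residual_sq) ** 2)
```

At the optimum of the second term, `sigma_pred**2` equals the squared residual, which is what makes it usable as an uncertainty signal.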
arXiv Detail & Related papers (2020-12-23T14:17:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.