Probabilistic Margins for Instance Reweighting in Adversarial Training
- URL: http://arxiv.org/abs/2106.07904v1
- Date: Tue, 15 Jun 2021 06:37:55 GMT
- Title: Probabilistic Margins for Instance Reweighting in Adversarial Training
- Authors: Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu,
Mingyuan Zhou, Masashi Sugiyama
- Abstract summary: We propose three types of probabilistic margin (PM) for measuring closeness and reweighting adversarial data.
Though different PMs capture different geometric properties, all three PMs share a negative correlation with the vulnerability of data.
Experiments demonstrate that PMs are reliable measurements and PM-based reweighting methods outperform state-of-the-art methods.
- Score: 151.34753039197832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reweighting adversarial data during training has been recently shown to
improve adversarial robustness, where data closer to the current decision
boundaries are regarded as more critical and given larger weights. However,
existing methods measuring the closeness are not very reliable: they are
discrete and can take only a few values, and they are path-dependent, i.e.,
they may differ for the same start and end points under different attack paths.
paths. In this paper, we propose three types of probabilistic margin (PM),
which are continuous and path-independent, for measuring the aforementioned
closeness and reweighting adversarial data. Specifically, a PM is defined as
the difference between two estimated class-posterior probabilities, e.g.,
the probability of the true label minus the probability of the most confusing
label given some natural data. Though different PMs capture different geometric
properties, all three PMs share a negative correlation with the vulnerability
of data: data with larger/smaller PMs are safer/riskier and should have
smaller/larger weights. Experiments demonstrate that PMs are reliable
measurements and PM-based reweighting methods outperform state-of-the-art
methods.
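
As a rough illustration (not the paper's exact scheme), the sketch below computes one PM variant under a softmax classifier and maps it to instance weights; the function names, the exponential weighting, and the mean-one normalisation are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def probabilistic_margin(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """One PM variant: estimated posterior of the true label minus the
    posterior of the most confusing (largest non-true) label."""
    probs = F.softmax(logits, dim=1)                        # class-posterior estimates
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Suppress the true class, then take the largest remaining posterior.
    others = probs.scatter(1, labels.unsqueeze(1), -1.0)
    p_confusing = others.max(dim=1).values
    return p_true - p_confusing                             # in [-1, 1]; larger = safer

def pm_weights(margins: torch.Tensor, beta: float = 2.0) -> torch.Tensor:
    """Smaller PM (riskier instance) -> larger weight. The exponential form
    and the normalisation to mean one are illustrative assumptions."""
    w = torch.exp(-beta * margins)
    return w * w.numel() / w.sum()
```

In an adversarial training loop, such weights would multiply the per-instance adversarial losses before averaging; since PMs are computed from posteriors at given points, they are continuous and do not depend on the attack path.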
Related papers
- What Are the Odds? Improving the foundations of Statistical Model Checking [3.789219860006095]
Markov decision processes (MDPs) are a fundamental model for decision making under uncertainty.
Traditionally, verification algorithms assume exact knowledge of the probabilities that govern the behaviour of an MDP.
We propose specialised approaches that exploit our knowledge of the MDP.
arXiv Detail & Related papers (2024-04-08T11:47:46Z) - Robust Anytime Learning of Markov Decision Processes [8.799182983019557]
In data-driven applications, deriving precise probabilities from limited data introduces statistical errors.
Uncertain MDPs (uMDPs) do not require precise probabilities but instead use so-called uncertainty sets in the transitions.
We propose a robust anytime-learning approach that combines a dedicated Bayesian inference scheme with the computation of robust policies.
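
As a toy illustration of the uncertainty-set idea, transitions can carry interval-valued probabilities instead of point estimates; the representation below is an assumption for exposition, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalTransition:
    """An uncertain MDP transition: the true probability is only known to
    lie in [low, high] rather than being a single point estimate."""
    state: str
    action: str
    next_state: str
    low: float
    high: float

# A robust policy is then evaluated against the worst-case resolution of
# the intervals (toy example; values are made up).
transitions = [
    IntervalTransition("s0", "a", "s1", low=0.6, high=0.8),
    IntervalTransition("s0", "a", "s2", low=0.2, high=0.4),
]
```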
arXiv Detail & Related papers (2022-05-31T14:29:55Z) - Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, in which a teacher model generates hard pseudo-labels on unlabeled data as supervisory signals.
We analyze the challenges these methods encounter through empirical experiments.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
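
For context, here is a minimal sketch of the teacher-driven pseudo-labeling step that such self-training methods build on; the confidence threshold is an assumption, and this is generic self-training, not SED's specific distillation losses.

```python
import torch

@torch.no_grad()
def hard_pseudo_labels(teacher: torch.nn.Module,
                       unlabeled: torch.Tensor,
                       threshold: float = 0.9):
    """Keep only confident teacher predictions on unlabeled data as hard
    pseudo-labels (generic self-training step; threshold is assumed)."""
    probs = torch.softmax(teacher(unlabeled), dim=1)
    confidence, labels = probs.max(dim=1)
    keep = confidence >= threshold
    return unlabeled[keep], labels[keep]
```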
arXiv Detail & Related papers (2022-03-23T07:33:37Z) - Noise-Resistant Deep Metric Learning with Probabilistic Instance
Filtering [59.286567680389766]
Noisy labels are commonly found in real-world data, which cause performance degradation of deep neural networks.
We propose the Probabilistic Ranking-based Instance Selection with Memory (PRISM) approach for deep metric learning (DML).
PRISM calculates the probability of a label being clean, and filters out potentially noisy samples.
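
A hedged sketch of that filtering pattern: the clean-probability criterion below (a softmax over similarities to per-class feature centers) is an illustrative stand-in, not PRISM's actual computation.

```python
import torch

def clean_probability(features: torch.Tensor, labels: torch.Tensor,
                      class_centers: torch.Tensor,
                      temperature: float = 0.1) -> torch.Tensor:
    """Illustrative clean-label probability: softmax over similarities to
    per-class feature centers, read off at the annotated label."""
    sims = features @ class_centers.t() / temperature   # (batch, num_classes)
    probs = torch.softmax(sims, dim=1)
    return probs.gather(1, labels.unsqueeze(1)).squeeze(1)

def keep_likely_clean(features, labels, class_centers, keep_ratio: float = 0.8):
    """Return indices of the samples most likely to be clean."""
    p_clean = clean_probability(features, labels, class_centers)
    k = int(keep_ratio * p_clean.numel())
    return p_clean.topk(k).indices
```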
arXiv Detail & Related papers (2021-08-03T12:15:25Z) - Maximum Mean Discrepancy Test is Aware of Adversarial Attacks [122.51040127438324]
The maximum mean discrepancy (MMD) test could in principle detect any distributional discrepancy between two datasets.
However, it has previously been argued that the MMD test is unaware of adversarial attacks, a claim this paper revisits.
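
For reference, the standard (biased) estimator of squared MMD with a Gaussian kernel, which such two-sample tests build on; this is the textbook statistic, not the adapted test studied in the paper.

```python
import torch

def gaussian_kernel(a: torch.Tensor, b: torch.Tensor, bandwidth: float = 1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))

def mmd2_biased(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0):
    """Biased estimate of MMD^2 between samples x (n, d) and y (m, d)."""
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy
```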
arXiv Detail & Related papers (2020-10-22T03:42:12Z) - DEMI: Discriminative Estimator of Mutual Information [5.248805627195347]
Estimating mutual information between continuous random variables is often intractable and challenging for high-dimensional data.
Recent progress has leveraged neural networks to optimize variational lower bounds on mutual information.
Our approach is based on training a classifier that provides the probability that a data sample pair is drawn from the joint distribution rather than from the product of marginals.
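
A minimal sketch of the classifier-based density-ratio trick this alludes to: if a classifier separates joint pairs from shuffled pairs, the log-odds of its output estimate the pointwise log-ratio. The pairing-by-concatenation and the MI estimate below are illustrative assumptions.

```python
import torch

def log_density_ratio(classifier: torch.nn.Module,
                      x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """With p = P(pair drawn from the joint), log(p / (1 - p)) estimates
    log[p(x, y) / (p(x) p(y))] when the classifier is trained on balanced
    joint vs. shuffled (product-of-marginals) pairs."""
    p = torch.sigmoid(classifier(torch.cat([x, y], dim=1))).squeeze(-1)
    return torch.log(p) - torch.log1p(-p)

def mi_estimate(classifier: torch.nn.Module,
                x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Rough MI estimate: average pointwise log-ratio over joint samples."""
    return log_density_ratio(classifier, x, y).mean()
```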
arXiv Detail & Related papers (2020-10-05T04:19:27Z) - Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves that minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only validate the theoretical results but also demonstrate that our approach can substantially outperform comparative state-of-the-art methods.
arXiv Detail & Related papers (2020-07-01T18:25:10Z) - Neural Methods for Point-wise Dependency Estimation [129.93860669802046]
We focus on estimating point-wise dependency (PD), which quantitatively measures how likely two outcomes co-occur.
We demonstrate the effectiveness of our approaches in 1) MI estimation, 2) self-supervised representation learning, and 3) cross-modal retrieval tasks.
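
Written out, pointwise dependency is the standard density ratio, and mutual information is its expected logarithm (notation assumed):

```latex
\mathrm{PD}(x, y) = \frac{p(x, y)}{p(x)\, p(y)}, \qquad
I(X; Y) = \mathbb{E}_{p(x, y)}\!\left[\log \mathrm{PD}(x, y)\right].
```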
arXiv Detail & Related papers (2020-06-09T23:26:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.