Revisiting Saliency Metrics: Farthest-Neighbor Area Under Curve
- URL: http://arxiv.org/abs/2002.10540v1
- Date: Mon, 24 Feb 2020 20:55:42 GMT
- Title: Revisiting Saliency Metrics: Farthest-Neighbor Area Under Curve
- Authors: Sen Jia and Neil D.B. Bruce
- Abstract summary: Saliency detection has been widely studied because it plays an important role in various vision applications.
It is difficult to evaluate saliency systems because each measure has its own bias.
We propose a new saliency metric based on the AUC property, which aims at sampling a more directional negative set for evaluation.
- Score: 23.334584322129142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Saliency detection has been widely studied because it plays an important role
in various vision applications, but it is difficult to evaluate saliency
systems because each measure has its own bias. In this paper, we first revisit
the problem of applying the widely used saliency metrics to modern
Convolutional Neural Networks (CNNs). Our investigation shows that the saliency
datasets have been built with different choices of parameters and that CNNs are
designed to fit a dataset-specific distribution. Secondly, we show that the
Shuffled Area Under Curve (S-AUC) metric still suffers from spatial biases. We
propose a new saliency metric based on the AUC property, which aims at sampling
a more directional negative set for evaluation, denoted as Farthest-Neighbor
AUC (FN-AUC). We also propose a strategy to measure the quality of the sampled
negative set. Our experiments show that FN-AUC can measure spatial biases, central
and peripheral, more effectively than S-AUC without penalizing the fixation
locations. Thirdly, we propose a global smoothing function to overcome the
problem of few value degrees (output quantization) in computing AUC metrics.
Compared with random noise, our smoothing function can create unique values
without losing the relative saliency relationship.
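As a rough illustration of the FN-AUC idea (not the authors' released code), the Python sketch below scores a single saliency map: the positives are the image's own fixation points, and the negative set is drawn from a pool of fixations taken from other images, keeping the candidates farthest from every positive so the negatives point away from the ground truth. The function name, the Euclidean distance, and the pool construction are our assumptions from the abstract.

```python
import numpy as np

def fn_auc(saliency, pos_fix, other_fix, n_neg=None):
    """Sketch of Farthest-Neighbor AUC for one image.

    saliency : (H, W) saliency map, higher = more salient
    pos_fix  : (N, 2) integer row/col fixations of this image (positives)
    other_fix: (M, 2) pool of fixations drawn from other images
    """
    n_neg = n_neg or len(pos_fix)
    # Distance from each candidate negative to its nearest positive fixation.
    d = np.linalg.norm(
        other_fix[:, None, :].astype(float) - pos_fix[None, :, :], axis=-1
    ).min(axis=1)
    # Keep the farthest candidates: a negative set directed away from the
    # ground-truth fixations rather than an undirected shuffled sample.
    neg_fix = other_fix[np.argsort(d)[-n_neg:]]
    pos = saliency[pos_fix[:, 0], pos_fix[:, 1]]
    neg = saliency[neg_fix[:, 0], neg_fix[:, 1]]
    # Rank-statistic AUC: P(pos score > neg score), ties counted as 1/2.
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```

The abstract does not spell out the global smoothing function, only its requirements: a quantized saliency map should receive unique values without disturbing the relative saliency relationship. One way to satisfy both requirements, sketched below as an assumption rather than the paper's exact function, is to use a heavily blurred copy of the map as a deterministic tie-breaker, scaled below the smallest gap between distinct saliency levels so that strict orderings can never flip.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_smooth(saliency, sigma=8.0):
    """Break quantization ties while preserving the saliency ordering."""
    sal = saliency.astype(float)
    levels = np.unique(sal)
    # Smallest gap between distinct saliency levels in the quantized map.
    gap = np.diff(levels).min() if levels.size > 1 else 1.0
    blur = gaussian_filter(sal, sigma=sigma)  # smooth, spatially varying field
    rng = blur.max() - blur.min()
    tie_break = (blur - blur.min()) / rng if rng > 0 else np.zeros_like(blur)
    # Each pixel gains at most gap/2, so any two pixels that differed by a
    # full gap keep their order; tied pixels become distinct wherever the
    # blurred field differs.
    return sal + 0.5 * gap * tie_break
```

Unlike adding random noise, this perturbation is deterministic and order-preserving, which matches the contrast the abstract draws.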
Related papers
- OCMG-Net: Neural Oriented Normal Refinement for Unstructured Point Clouds [18.234146052486054]
We present a robust refinement method for estimating oriented normals from unstructured point clouds.
Our framework incorporates sign orientation and data augmentation in the feature space to refine the initial oriented normals.
To address the direction inconsistency caused by noise in previous approaches, we introduce a new metric called the Chamfer Normal Distance.
arXiv Detail & Related papers (2024-09-02T09:30:02Z)
- Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
arXiv Detail & Related papers (2024-06-10T21:44:52Z)
- Federated Nonparametric Hypothesis Testing with Differential Privacy Constraints: Optimal Rates and Adaptive Tests [5.3595271893779906]
Federated learning has attracted significant recent attention due to its applicability across a wide range of settings where data is collected and analyzed across disparate locations.
We study federated nonparametric goodness-of-fit testing in the white-noise-with-drift model under distributed differential privacy (DP) constraints.
arXiv Detail & Related papers (2024-06-10T19:25:19Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Studying inductive biases in image classification task [0.0]
Self-attention (SA) structures have locally independent filters and can use large kernels, which contrasts with the previously popular convolutional neural networks (CNNs).
We show that context awareness was the crucial property; however, large local information was not necessary to construct CA parameters.
arXiv Detail & Related papers (2022-10-31T08:43:26Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Robust One Round Federated Learning with Predictive Space Bayesian Inference [19.533268415744338]
We show how the global predictive posterior can be approximated using client predictive posteriors.
We present an algorithm based on this idea, which performs MCMC sampling at each client to obtain an estimate of the local posterior, and then aggregates these in one round to obtain a global ensemble model.
arXiv Detail & Related papers (2022-06-20T01:06:59Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning [58.6792963686231]
Federated Learning (FL) refers to the paradigm where multiple worker nodes (WNs) build a joint model by using local data.
It is not clear how to choose the WNs' minimum update directions, the first minibatch sizes, and the local update frequency.
We show that there is a trade-off curve between local update frequencies and local minibatch sizes, on which the above complexities can be maintained.
arXiv Detail & Related papers (2021-06-19T06:13:45Z)
- Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles [39.29536042476913]
Understanding and quantifying uncertainty in black box Neural Networks (NNs) is critical when deployed in real-world settings such as healthcare.
We propose a conceptually simple non-Bayesian approach, deep split ensemble, to disentangle the predictive uncertainties.
arXiv Detail & Related papers (2020-09-25T19:15:26Z)