A General Divergence Modeling Strategy for Salient Object Detection
- URL: http://arxiv.org/abs/2111.11827v1
- Date: Tue, 23 Nov 2021 12:47:51 GMT
- Title: A General Divergence Modeling Strategy for Salient Object Detection
- Authors: Xinyu Tian, Jing Zhang, Yuchao Dai
- Abstract summary: Salient object detection is subjective in nature, implying that multiple plausible estimations can correspond to the same input image.
Most existing salient object detection models are deterministic, following a point-to-point estimation pipeline that makes them incapable of estimating the predictive distribution.
Although latent variable model based prediction networks exist to model prediction variants, a latent space built on a single clean saliency annotation is less reliable for exploring the subjective nature of saliency.
- Score: 32.53501439077824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Salient object detection is subjective in nature, which implies that multiple
estimations should be related to the same input image. Most existing salient
object detection models are deterministic, following a point-to-point estimation
learning pipeline that makes them incapable of estimating the predictive
distribution. Although latent variable model based stochastic prediction
networks exist to model the prediction variants, a latent space based on a
single clean saliency annotation is less reliable for exploring the subjective
nature of saliency, leading to less effective saliency "divergence modeling".
Given multiple saliency annotations, we introduce a general divergence modeling
strategy via random sampling, and apply our strategy to an ensemble based
framework and three latent variable model based solutions. Experimental results
indicate that our general divergence modeling strategy is superior in
exploring the subjective nature of saliency.
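The core of the strategy described above is to exploit multiple annotations per image by randomly sampling one at each training step, so the model is exposed to the full set of divergent labels. The following is a minimal sketch of that idea; all names (`sample_annotation`, `training_pairs`, the dataset layout) are illustrative assumptions, not the paper's actual code.

```python
import random

def sample_annotation(annotations):
    """Randomly pick one of several saliency annotations for an image.

    `annotations` is assumed to be a list of per-annotator saliency maps.
    """
    return random.choice(annotations)

def training_pairs(dataset, steps):
    """Build (image, annotation) pairs for training.

    Each iteration pairs a randomly chosen image with a randomly drawn
    annotation, so over many steps the model sees every divergent label.
    """
    pairs = []
    for _ in range(steps):
        image, annotations = random.choice(dataset)
        pairs.append((image, sample_annotation(annotations)))
    return pairs
```

In an ensemble setting, each member could instead draw its own annotation per image, which is one plausible way the sampling strategy generalizes across the frameworks mentioned in the abstract.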
Related papers
- Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z)
- Deep Non-Parametric Time Series Forecaster [19.800783133682955]
The proposed approach does not assume any parametric form for the predictive distribution and instead generates predictions by sampling from the empirical distribution according to a tunable strategy.
We develop a global version of the proposed method that automatically learns the sampling strategy by exploiting the information across multiple related time series.
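The forecaster summarized above generates predictions by sampling from the empirical distribution of past observations rather than assuming a parametric form. A minimal sketch of that idea follows; the function name, the `window` knob standing in for the "tunable strategy", and the seeding are assumptions for illustration.

```python
import random

def empirical_forecast(history, horizon, window=None, seed=None):
    """Non-parametric forecast: sample future values directly from the
    empirical distribution of past observations.

    `window` restricts sampling to the most recent observations and is a
    stand-in for the tunable sampling strategy described in the abstract.
    """
    rng = random.Random(seed)
    pool = history[-window:] if window else history
    return [rng.choice(pool) for _ in range(horizon)]
```

Repeating the draw many times yields an empirical predictive distribution, from which quantiles or intervals can be read off without any distributional assumption.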
arXiv Detail & Related papers (2023-12-22T12:46:30Z)
- A performance characteristic curve for model evaluation: the application in information diffusion prediction [3.8711489380602804]
We propose a metric based on information entropy to quantify the randomness in diffusion data, then identify a scaling pattern between the randomness and the prediction accuracy of the model.
Data points obtained with different sequence lengths, system sizes, and degrees of randomness all collapse onto a single curve, capturing a model's inherent capability of making correct predictions.
The validity of the curve is tested by three prediction models in the same family, reaching conclusions in line with existing studies.
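The randomness metric mentioned above is based on information entropy. A standard way to compute Shannon entropy from observed events is sketched below; this is a generic implementation, not the paper's specific formulation for diffusion data.

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of the empirical distribution of events.

    Higher values indicate more randomness in the observed sequence.
    """
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Plotting such an entropy score against prediction accuracy across datasets is one way the scaling pattern described above could be produced.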
arXiv Detail & Related papers (2023-09-18T07:32:57Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or those with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypothesis predictors for regression problems.
It is proven that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Spatially-Varying Bayesian Predictive Synthesis for Flexible and Interpretable Spatial Prediction [6.07227513262407]
We propose a novel methodology that captures spatially-varying model uncertainty, which we call spatial Bayesian predictive synthesis.
We show that our proposed spatial Bayesian predictive synthesis outperforms standard spatial models and advanced machine learning methods.
arXiv Detail & Related papers (2022-03-10T07:16:29Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
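Modeling a prototype as a probabilistic distribution, as described above, is commonly done with a diagonal Gaussian latent variable sampled via the reparameterization trick. The sketch below shows that generic mechanism; the function name and parameterization are assumptions, not the paper's implementation.

```python
import math
import random

def sample_prototype(mu, log_var, seed=None):
    """Draw a prototype vector from a diagonal Gaussian latent variable.

    Uses the reparameterization trick z = mu + sigma * eps, with
    sigma = exp(0.5 * log_var) and eps ~ N(0, 1) per dimension.
    """
    rng = random.Random(seed)
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]
```

Sampling the prototype rather than using a point estimate lets the segmentation model express uncertainty over each object category's representation.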
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- A comprehensive study on the prediction reliability of graph neural networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our result highlights that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.