Learning Posterior and Prior for Uncertainty Modeling in Person
Re-Identification
- URL: http://arxiv.org/abs/2007.08785v1
- Date: Fri, 17 Jul 2020 07:20:39 GMT
- Title: Learning Posterior and Prior for Uncertainty Modeling in Person
Re-Identification
- Authors: Yan Zhang, Zhilin Zheng, Binyu He, Li Sun
- Abstract summary: We learn the sample posterior and the class prior distribution in the latent space, so that the model captures not only representative features but also their uncertainty.
Experiments have been carried out on Market1501, DukeMTMC, MARS, and a noisy dataset.
- Score: 11.651410633259543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data uncertainty is ubiquitous in practical person reID, so a model must
not only learn discriminative features, but also estimate the uncertainty of
each input. This paper proposes to learn the sample posterior and the class
prior distribution in the latent space, so that the model captures not only
representative features but also their uncertainty. The prior reflects the
distribution of all data in the same class, and its parameters are trainable
model parameters. The posterior, in contrast, is the probability density of a
single sample, so it is a feature defined on the input. We assume that both
are Gaussian. To model them simultaneously, we put forward a distribution
loss, which measures the KL divergence from the posterior to the prior in a
supervised manner. In addition, we assume that the posterior variance, which
is essentially the uncertainty, should have second-order characteristics. We
therefore propose a $\Sigma$-net that computes it from a high-order
representation of its input. Extensive experiments have been carried out on
Market1501, DukeMTMC, MARS, and a noisy dataset.
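Since both the posterior and the class prior are assumed Gaussian, the distribution loss has a closed form. Below is a minimal PyTorch sketch of that loss and of a second-order variance head; the names (SigmaNet, distribution_loss), the diagonal-covariance choice, and all dimensions are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (not the authors' code): a diagonal-Gaussian KL
# "distribution loss" and a second-order variance head, following
# the abstract's description. All names and shapes are assumptions.
import torch
import torch.nn as nn

class SigmaNet(nn.Module):
    """Predicts the posterior log-variance from a second-order
    (outer-product) representation of the backbone feature."""
    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim * feat_dim, latent_dim)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # High-order representation: outer product of the feature
        # with itself, flattened to one vector per sample.
        second_order = torch.einsum('bi,bj->bij', f, f).flatten(1)
        return self.fc(second_order)  # log sigma^2, shape (B, latent_dim)

def distribution_loss(mu, log_var, prior_mu, prior_log_var):
    """KL( N(mu, sigma^2) || N(prior_mu, prior_sigma^2) ) for diagonal
    Gaussians, summed over dimensions and averaged over the batch."""
    var, prior_var = log_var.exp(), prior_log_var.exp()
    kl = 0.5 * (prior_log_var - log_var
                + (var + (mu - prior_mu) ** 2) / prior_var
                - 1.0)
    return kl.sum(dim=1).mean()

# Supervised use: per-class trainable priors, selected by ground-truth label.
num_classes, latent_dim, feat_dim, batch = 751, 256, 512, 32
prior_mu = nn.Parameter(torch.zeros(num_classes, latent_dim))
prior_log_var = nn.Parameter(torch.zeros(num_classes, latent_dim))

features = torch.randn(batch, feat_dim)            # backbone output
mu = nn.Linear(feat_dim, latent_dim)(features)     # posterior mean head
log_var = SigmaNet(feat_dim, latent_dim)(features) # posterior log-variance
labels = torch.randint(0, num_classes, (batch,))
loss = distribution_loss(mu, log_var, prior_mu[labels], prior_log_var[labels])
```

Indexing the per-class prior tensors with the ground-truth labels is what makes the KL term supervised, matching the abstract's description of the distribution loss.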
Related papers
- Concentration of Measure for Distributions Generated via Diffusion Models [16.868125342684603]
We show via a combination of mathematical arguments and empirical evidence that data distributions sampled from diffusion models satisfy a Concentration of Measure Property.
This implies that such models are quite restrictive and gives an explanation for a fact previously observed in the literature that conventional diffusion models cannot capture "heavy-tailed" data.
Date: 2025-01-13
- Universality in Transfer Learning for Linear Models [18.427215139020625]
We study the problem of transfer learning in linear models for both regression and binary classification.
We provide an exact and rigorous analysis and relate generalization errors (in regression) and classification errors (in binary classification) for the pretrained and fine-tuned models.
Date: 2024-10-03
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
Date: 2024-05-29
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
Date: 2024-03-11
- SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning [49.94607673097326]
We propose a highly adaptable framework, designated as SimPro, which does not rely on any predefined assumptions about the distribution of unlabeled data.
Our framework, grounded in a probabilistic model, innovatively refines the expectation-maximization algorithm.
Our method showcases consistent state-of-the-art performance across diverse benchmarks and data distribution scenarios.
Date: 2024-02-21
- Gaussian Process Probes (GPP) for Uncertainty-Aware Probing [61.91898698128994]
We introduce a unified and simple framework for probing and measuring uncertainty about concepts represented by models.
Our experiments show it can (1) probe a model's representations of concepts even with a very small number of examples, (2) accurately measure both epistemic uncertainty (how confident the probe is) and aleatory uncertainty (how fuzzy the concepts are to the model), and (3) detect out-of-distribution data using those uncertainty measures as well as classic methods do.
Date: 2023-05-29
- Martingale Posterior Neural Processes [14.913697718688931]
A Neural Process (NP) estimates a process implicitly defined with neural networks given a stream of data.
We take a different approach based on the martingale posterior, a recently developed alternative to Bayesian inference.
We show that the uncertainty in the generated future data actually corresponds to the uncertainty of the implicitly defined Bayesian posteriors.
Date: 2023-04-19
- Performative Prediction with Neural Networks [24.880495520422]
Performative prediction is a framework for learning models that influence the data they intend to predict.
Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters.
In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems.
Date: 2023-04-14
- Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
Date: 2020-09-07
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
Date: 2020-07-24
- not-MIWAE: Deep Generative Modelling with Missing not at Random Data [21.977065542645082]
We present an approach for building and fitting deep latent variable models (DLVMs) in cases where the missing process is dependent on the missing data.
Specifically, a deep neural network enables us to flexibly model the conditional distribution of the missingness pattern given the data.
We show on various kinds of data sets and missingness patterns that explicitly modelling the missing process can be invaluable.
Date: 2020-06-23
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.