Unrolling Particles: Unsupervised Learning of Sampling Distributions
- URL: http://arxiv.org/abs/2110.02915v1
- Date: Wed, 6 Oct 2021 16:58:34 GMT
- Title: Unrolling Particles: Unsupervised Learning of Sampling Distributions
- Authors: Fernando Gama, Nicolas Zilberstein, Richard G. Baraniuk, Santiago
Segarra
- Abstract summary: Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
- Score: 102.72972137287728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Particle filtering is used to compute good nonlinear estimates of complex
systems. It samples trajectories from a chosen distribution and computes the
estimate as a weighted average. Easy-to-sample distributions often lead to
degenerate samples where only one trajectory carries all the weight, negatively
affecting the performance of the resulting estimate. While much research has
been done on the design of appropriate sampling distributions that would lead
to controlled degeneracy, in this paper our objective is to \emph{learn}
sampling distributions. Leveraging the framework of algorithm unrolling, we
model the sampling distribution as a multivariate normal, and we use neural
networks to learn both the mean and the covariance. We carry out unsupervised
training of the model to minimize weight degeneracy, relying only on the
observed measurements of the system. We show in simulations that the resulting
particle filter yields good estimates in a wide range of scenarios.
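The abstract describes the standard particle-filtering loop the paper builds on: sample trajectories from a proposal distribution, weight them by the observation likelihood, and form a weighted-average estimate. A minimal bootstrap-filter sketch for a scalar linear-Gaussian model makes the weight-degeneracy problem concrete via the effective sample size (ESS); note this uses the transition prior as the proposal, not the paper's learned Gaussian proposal, and all model parameters here are illustrative:

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, a=0.9, q=1.0, r=1.0, rng=None):
    """Bootstrap particle filter for the scalar model
       x_t = a*x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns per-step state estimates and effective sample size (ESS),
    the degeneracy measure that unsupervised proposal learning targets."""
    rng = np.random.default_rng(rng)
    T = len(y)
    x = rng.normal(0.0, np.sqrt(q), size=n_particles)  # initial particles
    est = np.empty(T)
    ess = np.empty(T)
    for t in range(T):
        # Propagate particles through the easy-to-sample transition prior.
        x = a * x + rng.normal(0.0, np.sqrt(q), size=n_particles)
        # Weight by the observation likelihood (log-domain for stability).
        logw = -0.5 * (y[t] - x) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = np.sum(w * x)          # weighted-average estimate
        ess[t] = 1.0 / np.sum(w ** 2)   # ESS = 1 means total degeneracy
        # Multinomial resampling to combat weight degeneracy.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return est, ess
```

An ESS near 1 indicates that a single trajectory carries all the weight; the paper's unsupervised training instead learns the proposal's mean and covariance so that the weights stay well spread.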
Related papers
- Liouville Flow Importance Sampler [2.3603292593876324]
We present the Liouville Flow Importance Sampler (LFIS), an innovative flow-based model for generating samples from unnormalized density functions.
LFIS learns a time-dependent velocity field that deterministically transports samples from a simple initial distribution to a complex target distribution.
We demonstrate the effectiveness of LFIS through its application to a range of benchmark problems, on many of which LFIS achieved state-of-the-art performance.
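The core mechanism in the LFIS summary, deterministic transport of samples along a velocity field, can be illustrated with a hand-derived linear field whose time-1 flow maps N(0, 1) onto N(mu, sigma^2); in LFIS the field is a learned neural network and the target is an unnormalized density, so everything below is only an illustrative stand-in:

```python
import numpy as np

def velocity(x, t, mu=2.0, sigma=0.5):
    """Velocity field whose time-1 flow maps N(0,1) onto N(mu, sigma^2).
    Derived from the straight-line path x(t) = (1-t)*x0 + t*(sigma*x0 + mu);
    invert the path to recover x0 from (x, t), then return dx/dt."""
    x0 = (x - t * mu) / (1.0 + t * (sigma - 1.0))
    return (sigma - 1.0) * x0 + mu

def transport(samples, n_steps=100):
    """Deterministically transport samples by Euler-integrating the flow
    from t=0 to t=1."""
    x = samples.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x
```

Because the trajectories of this particular field are straight lines, the Euler steps recover the exact map x ↦ sigma*x + mu; for a learned, nonlinear field the integration would only approximate the continuous flow.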
arXiv Detail & Related papers (2024-05-03T16:44:31Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories each contain only a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Sampling weights of deep neural networks [1.2370077627846041]
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights and biases of fully-connected neural networks.
In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed.
We prove that sampled networks are universal approximators.
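The summary above describes networks whose internal weights are sampled rather than optimized, with no gradient computation. The closest widely known construction is a random-feature network: sample hidden weights and biases, then fit only the linear readout in closed form. The sketch below uses a plain Gaussian sampling distribution for illustration; the cited paper proposes a specific data-dependent distribution, which this does not reproduce:

```python
import numpy as np

def sample_network(X, y, n_hidden=200, scale=1.0, rng=None):
    """Random-feature sketch: hidden weights/biases are *sampled*, never
    trained; only the linear readout is obtained in closed form via least
    squares, so no iterative optimization or gradients are needed."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(0.0, scale, size=(d, n_hidden))   # sampled weights
    b = rng.normal(0.0, scale, size=n_hidden)        # sampled biases
    H = np.tanh(X @ W + b)                           # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form readout
    return lambda Xq: np.tanh(Xq @ W + b) @ beta
```

With enough sampled features, such networks are universal approximators, which is the kind of guarantee the cited paper proves for its sampled networks.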
arXiv Detail & Related papers (2023-06-29T10:13:36Z)
- Unsupervised Learning of Sampling Distributions for Particle Filters [80.6716888175925]
We put forward four methods for learning sampling distributions from observed measurements.
Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
arXiv Detail & Related papers (2023-02-02T15:50:21Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
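For context on the re-weighting approach this summary refers to, the standard heuristic baseline weights each example inversely to its class frequency so every class contributes equally to the loss. This is the kind of hand-designed scheme the cited paper replaces with weights learned via optimal transport; the sketch below is only the baseline, not the OT method:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Baseline re-weighting for imbalanced classification: each example's
    weight is inversely proportional to its class frequency, normalized so
    the mean weight is 1. Every class then carries equal total weight."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    w = np.array([1.0 / freq[c] for c in labels])
    return w * len(w) / w.sum()
```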
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- Learning Optimal Flows for Non-Equilibrium Importance Sampling [13.469239537683299]
We develop a method to perform calculations based on generating samples from a simple base distribution, transporting them along the flow generated by a velocity field, and performing averages along these flowlines.
On the theory side we discuss how to tailor the velocity field to the target and establish general conditions under which the proposed estimator is a perfect estimator.
On the computational side we show how to use deep learning to represent the velocity field by a neural network and train it towards the zero variance optimum.
arXiv Detail & Related papers (2022-06-20T17:25:26Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Free Lunch for Few-shot Learning: Distribution Calibration [10.474018806591397]
We show that a simple logistic regression classifier trained using the features sampled from our calibrated distribution can outperform the state-of-the-art accuracy on two datasets.
arXiv Detail & Related papers (2021-01-16T07:58:40Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed to accelerate the training of Graph Convolutional Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) whose message aggregators contain learned rather than fixed weights, such as Graph Attention Networks (GATs).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)