Linear Complexity Gibbs Sampling for Generalized Labeled Multi-Bernoulli
Filtering
- URL: http://arxiv.org/abs/2211.16041v2
- Date: Thu, 28 Dec 2023 02:11:27 GMT
- Title: Linear Complexity Gibbs Sampling for Generalized Labeled Multi-Bernoulli
Filtering
- Authors: Changbeom Shim, Ba-Tuong Vo, Ba-Ngu Vo, Jonah Ong, Diluka Moratuwage
- Abstract summary: Generalized Labeled Multi-Bernoulli (GLMB) densities arise in a host of multi-object system applications, analogous to Gaussians in single-object filtering.
We propose a tempered Gibbs sampler that exploits the structure of the GLMB filtering density to achieve an $\mathcal{O}(T(P+M))$ complexity.
This innovation enables the GLMB filter implementation to be reduced from an $\mathcal{O}(TP^{2}M)$ complexity to $\mathcal{O}(T(P+M+\log T)+PM)$.
- Score: 4.186094981300538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized Labeled Multi-Bernoulli (GLMB) densities arise in a host of
multi-object system applications analogous to Gaussians in single-object
filtering. However, computing the GLMB filtering density requires solving
NP-hard problems. To alleviate this computational bottleneck, we develop a
linear complexity Gibbs sampling framework for GLMB density computation.
Specifically, we propose a tempered Gibbs sampler that exploits the structure
of the GLMB filtering density to achieve an $\mathcal{O}(T(P+M))$ complexity,
where $T$ is the number of iterations of the algorithm, and $P$ and $M$ are the
numbers of hypothesized objects and measurements, respectively. This innovation enables the GLMB
filter implementation to be reduced from an $\mathcal{O}(TP^{2}M)$ complexity
to $\mathcal{O}(T(P+M+\log T)+PM)$. Moreover, the proposed framework provides
the flexibility for trade-offs between tracking performance and computational
load. Convergence of the proposed Gibbs sampler is established, and numerical
studies are presented to validate the proposed GLMB filter implementation.
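To make the data-association structure concrete, below is a minimal sketch of the classical (untempered) Gibbs sampler over association vectors that GLMB truncation relies on. The weight matrix `eta`, its column encoding, and the function name are illustrative assumptions rather than the paper's implementation; the paper's tempered conditionals are what cut the per-iteration cost of this naive version down to $\mathcal{O}(P+M)$.

```python
import numpy as np

def gibbs_assignments(eta, T, rng=None):
    """Classical Gibbs sampler over association vectors (illustrative
    sketch, not the paper's tempered sampler).

    eta : (P, M+2) array of positive weights for each hypothesized object;
          column 0 = object does not exist, column 1 = missed detection,
          columns 2..M+1 = assignment to each of the M measurements.
    Returns the set of distinct association vectors visited; each one
    corresponds to a candidate GLMB component.
    """
    rng = np.random.default_rng() if rng is None else rng
    P, W = eta.shape
    gamma = np.zeros(P, dtype=int)        # initial vector: no objects exist
    solutions = {tuple(gamma)}
    for _ in range(T):
        for i in range(P):                # one full coordinate scan
            w = eta[i].copy()
            # Distinctness constraint: zero out measurements already
            # claimed by the other objects (this O(P) scan per coordinate
            # is part of what the tempered sampler avoids).
            for j in range(P):
                if j != i and gamma[j] >= 2:
                    w[gamma[j]] = 0.0
            gamma[i] = rng.choice(W, p=w / w.sum())
        solutions.add(tuple(gamma))
    return solutions
```

Each full scan of this naive version costs $\mathcal{O}(P(P+M))$; the tempered sampler in the paper exploits the conditional structure to bring the per-iteration cost to $\mathcal{O}(P+M)$.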
Related papers
- Entropy contraction of the Gibbs sampler under log-concavity [0.16385815610837165]
We show that the random scan Gibbs sampler contracts in relative entropy and provide a sharp characterization of the associated contraction rate.
Our techniques are versatile and extend to Metropolis-within-Gibbs schemes and the Hit-and-Run algorithm.
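For context, the object analyzed here is the textbook random-scan Gibbs sampler. A minimal sketch for a log-concave target (a bivariate Gaussian with correlation `rho`; the target and all names are illustrative, not from the paper):

```python
import numpy as np

def random_scan_gibbs(rho, n_steps, rng=None):
    """Random-scan Gibbs sampler for a bivariate Gaussian with unit
    marginals and correlation rho (an illustrative log-concave target).
    Each step resamples one uniformly chosen coordinate from its exact
    Gaussian conditional x_i | x_j ~ N(rho * x_j, 1 - rho^2)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(2)
    cond_sd = np.sqrt(1.0 - rho ** 2)     # conditional standard deviation
    chain = np.empty((n_steps, 2))
    for t in range(n_steps):
        i = rng.integers(2)               # random scan: pick a coordinate
        x[i] = rng.normal(rho * x[1 - i], cond_sd)
        chain[t] = x
    return chain
```

Stronger coupling between coordinates (larger $|\rho|$) slows the mixing that the paper's contraction rate quantifies.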
arXiv Detail & Related papers (2024-10-01T16:50:36Z)
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Generalization Bounds for Stochastic Gradient Descent via Localized $\varepsilon$-Covers [16.618918548497223]
We propose a new covering technique localized for the trajectories of SGD.
This localization provides an algorithm-specific complexity measured by the covering number.
We derive these results in various contexts and improve upon the known state-of-the-art rates.
arXiv Detail & Related papers (2022-09-19T12:11:07Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Super-resolution GANs of randomly-seeded fields [68.8204255655161]
We propose a novel super-resolution generative adversarial network (GAN) framework to estimate field quantities from random sparse sensors.
The algorithm exploits random sampling to provide incomplete views of the high-resolution underlying distributions.
The proposed technique is tested on synthetic databases of fluid flow simulations, ocean surface temperature measurements, and particle image velocimetry data.
arXiv Detail & Related papers (2022-02-23T18:57:53Z)
- Complexity of zigzag sampling algorithm for strongly log-concave distributions [6.336005544376984]
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O\bigl(\kappa^{2} d^{1/2} \bigl(\log \frac{1}{\varepsilon}\bigr)^{3/2}\bigr)$ gradient evaluations.
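For illustration, the zigzag process can be simulated exactly in one dimension for a standard Gaussian target, since the event rate integrates in closed form; this toy sketch (the names and the inversion formula are specific to this target, not the paper's general setting) shows the piecewise-deterministic dynamics:

```python
import numpy as np

def zigzag_gaussian(n_switches, rng=None):
    """One-dimensional zigzag sampler targeting N(0, 1) (toy illustration).
    The state moves linearly with velocity theta in {-1, +1} and flips the
    velocity at events of an inhomogeneous Poisson process with rate
    max(0, theta * x); for this target the switching time is sampled
    exactly by inverting the integrated rate."""
    rng = np.random.default_rng() if rng is None else rng
    x, theta, t = 0.0, 1.0, 0.0
    events = [(t, x, theta)]              # (time, position, velocity)
    for _ in range(n_switches):
        a = theta * x                     # rate along the ray is max(0, a + s)
        e = rng.exponential()
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        t += tau
        x += theta * tau                  # deterministic linear motion
        theta = -theta                    # velocity flip at the event
        events.append((t, x, theta))
    return events
```

Samples are read off the continuous trajectory (linear interpolation between events), not just the event points.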
arXiv Detail & Related papers (2020-12-21T03:10:21Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
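A generic sketch of such plain model-based planning (the generative-model interface and all names here are illustrative assumptions, not the paper's code): draw a fixed number of next-state samples per state-action pair, build the empirical MDP, and run value iteration on it.

```python
import numpy as np

def model_based_plan(sample_next, reward, gamma, n_samples, n_iters, rng=None):
    """Plain model-based planning from a generative model (generic sketch).

    sample_next(s, a, rng) -> next-state index drawn from the simulator.
    reward : (S, A) array of deterministic rewards.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_states, n_actions = reward.shape
    # Empirical transition kernel from n_samples draws per (s, a) pair.
    P_hat = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                P_hat[s, a, sample_next(s, a, rng)] += 1.0
    P_hat /= n_samples
    # Value iteration on the empirical MDP.
    V = np.zeros(n_states)
    for _ in range(n_iters):
        V = (reward + gamma * P_hat @ V).max(axis=1)
    Q = reward + gamma * P_hat @ V
    return Q.argmax(axis=1), V            # greedy policy and its value estimate
```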
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
- Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \rightarrow \infty$.
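As a generic illustration of PCA-based clustering (a vanilla sketch, not the paper's sparse variant): for a two-component symmetric mixture, project onto the leading principal direction and threshold at zero.

```python
import numpy as np

def pca_cluster(X):
    """Two-cluster labels from the sign of the projection onto the top
    principal component of the centered data (labels are recovered only
    up to a global flip)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ vt[0] >= 0).astype(int)
```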
arXiv Detail & Related papers (2020-05-21T17:51:30Z)
- Oracle Lower Bounds for Stochastic Gradient Sampling Algorithms [39.746670539407084]
We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$.
We prove an information theoretic lower bound on the number of gradient queries of the log density needed.
arXiv Detail & Related papers (2020-02-01T23:46:35Z)