Linear Complexity Gibbs Sampling for Generalized Labeled Multi-Bernoulli
Filtering
- URL: http://arxiv.org/abs/2211.16041v2
- Date: Thu, 28 Dec 2023 02:11:27 GMT
- Title: Linear Complexity Gibbs Sampling for Generalized Labeled Multi-Bernoulli
Filtering
- Authors: Changbeom Shim, Ba-Tuong Vo, Ba-Ngu Vo, Jonah Ong, Diluka Moratuwage
- Abstract summary: Generalized Labeled Multi-Bernoulli (GLMB) densities arise in a host of multi-object system applications, playing a role analogous to Gaussians in single-object filtering.
We propose a tempered Gibbs sampler that exploits the structure of the GLMB filtering density to achieve an $\mathcal{O}(T(P+M))$ complexity.
This innovation enables the GLMB filter implementation to be reduced from an $\mathcal{O}(TP^{2}M)$ complexity to $\mathcal{O}(T(P+M+\log T)+PM)$.
- Score: 4.186094981300538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized Labeled Multi-Bernoulli (GLMB) densities arise in a host of
multi-object system applications analogous to Gaussians in single-object
filtering. However, computing the GLMB filtering density requires solving
NP-hard problems. To alleviate this computational bottleneck, we develop a
linear complexity Gibbs sampling framework for GLMB density computation.
Specifically, we propose a tempered Gibbs sampler that exploits the structure
of the GLMB filtering density to achieve an $\mathcal{O}(T(P+M))$ complexity,
where $T$ is the number of iterations of the algorithm, $P$ and $M$ are the
numbers of hypothesized objects and measurements, respectively. This innovation enables the GLMB
filter implementation to be reduced from an $\mathcal{O}(TP^{2}M)$ complexity
to $\mathcal{O}(T(P+M+\log T)+PM)$. Moreover, the proposed framework provides
the flexibility for trade-offs between tracking performance and computational
load. Convergence of the proposed Gibbs sampler is established, and numerical
studies are presented to validate the proposed GLMB filter implementation.
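To make concrete what the Gibbs sampler operates on, below is a minimal sketch of a plain (untempered) coordinate-wise Gibbs sampler over measurement-association vectors, the object at the heart of GLMB truncation. Everything here is an illustrative assumption rather than the authors' algorithm: the weight matrix `eta`, the function name `gibbs_association`, and the encoding (0 for an undetected object, j >= 1 for measurement j). A full sweep of this sketch costs $\mathcal{O}(PM)$, whereas the paper's tempered sampler brings each iteration down to $\mathcal{O}(P+M)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_association(eta, num_iters):
    """Coordinate-wise Gibbs over association vectors (illustrative sketch).

    eta[i, j] > 0 is a hypothetical weight for associating hypothesized
    object i with: j = 0 (undetected) or j in 1..M (measurement j).
    A valid vector assigns each measurement to at most one object.
    """
    P, W = eta.shape                            # W = M + 1 columns
    gamma = np.zeros(P, dtype=int)              # start: every object undetected
    visited = {tuple(gamma)}
    for _ in range(num_iters):
        for i in range(P):                      # one full sweep costs O(PM)
            taken = set(gamma) - {gamma[i], 0}  # measurements held by others
            w = eta[i].copy()
            w[list(taken)] = 0.0                # one object per measurement
            gamma[i] = rng.choice(W, p=w / w.sum())
        visited.add(tuple(gamma))
    return visited                              # distinct vectors found

# e.g. P = 4 objects, M = 5 measurements (strictly positive weights)
components = gibbs_association(rng.random((4, 6)) + 1e-3, 200)
```

The distinct association vectors visited by the chain are what a Gibbs-based GLMB implementation retains as hypothesis components; the paper's tempered variant, whose convergence it establishes, avoids computing the full conditional at every coordinate.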
Related papers
- On the query complexity of sampling from non-log-concave distributions [2.4253233571593547]
We study the problem of sampling from a $d$-dimensional distribution with density $p(x) \propto e^{-f(x)}$, which does not necessarily satisfy good isoperimetric conditions.
We show that for a wide range of parameters, sampling is strictly easier than optimization by a super-exponential factor in the dimension $d$.
arXiv Detail & Related papers (2025-02-10T06:54:16Z)
- Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present a $\textbf{MA-OSMA}$ algorithm to transform the discrete submodular problem into a continuous optimization.
We also introduce a projection-free $\textbf{MA-OSEA}$ algorithm, which effectively utilizes the KL divergence by mixing a uniform distribution.
Our algorithms significantly improve the $(\frac{1}{1+c})$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z)
- Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
arXiv Detail & Related papers (2024-12-10T11:50:46Z)
- Entropy contraction of the Gibbs sampler under log-concavity [0.16385815610837165]
We show that the random scan Gibbs sampler contracts in relative entropy and provide a sharp characterization of the associated contraction rate.
Our techniques are versatile and extend to Metropolis-within-Gibbs schemes and the Hit-and-Run algorithm. (A minimal random-scan Gibbs sketch appears after this list.)
arXiv Detail & Related papers (2024-10-01T16:50:36Z)
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Complexity of zigzag sampling algorithm for strongly log-concave distributions [6.336005544376984]
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O(\cdot)$. (A toy one-dimensional zigzag sampler appears after this list.)
arXiv Detail & Related papers (2020-12-21T03:10:21Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
- Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \to \infty$.
arXiv Detail & Related papers (2020-05-21T17:51:30Z)
- Oracle Lower Bounds for Stochastic Gradient Sampling Algorithms [39.746670539407084]
We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$.
We prove an information theoretic lower bound on the number of gradient queries of the log density needed.
arXiv Detail & Related papers (2020-02-01T23:46:35Z)
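For the entropy-contraction entry above, the object of study is the classical random-scan Gibbs sampler. A minimal, self-contained sketch for a bivariate Gaussian target follows; the target, function name, and parameters are illustrative assumptions, not taken from that paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_scan_gibbs(rho, num_steps):
    """Random-scan Gibbs for a bivariate Gaussian (zero means, unit
    variances, correlation rho): each step redraws one uniformly chosen
    coordinate from its exact conditional x_i | x_j ~ N(rho*x_j, 1-rho^2)."""
    x = np.zeros(2)
    chain = np.empty((num_steps, 2))
    for t in range(num_steps):
        i = rng.integers(2)                 # pick a coordinate at random
        x[i] = rho * x[1 - i] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
        chain[t] = x
    return chain

chain = random_scan_gibbs(rho=0.9, num_steps=10_000)
```

Higher correlation `rho` makes the conditionals more dependent and slows mixing; quantifying such dependence through a sharp contraction rate is what the cited analysis is about.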
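For the zigzag entry above, the sketch below shows the mechanism being analyzed in the simplest possible case: a one-dimensional zigzag sampler for a standard Gaussian target, where velocity-flip times can be drawn exactly by inverting the integrated switching rate. The target and all names are illustrative assumptions; the paper's complexity result concerns general strongly log-concave distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def zigzag_gauss(total_time, dt=0.1):
    """1-D zigzag sampler for N(0, 1).  With potential U(x) = x**2 / 2,
    the velocity-flip rate is lambda(x, v) = max(0, v * x); between
    flips the state moves ballistically, x(s) = x + v * s."""
    x, v, t = 0.0, 1.0, 0.0
    next_record, samples = dt, []
    while t < total_time:
        a = v * x
        # invert the integrated rate against an Exp(1) draw -> exact flip time
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential())
        while next_record < t + tau and next_record < total_time:
            samples.append(x + v * (next_record - t))   # record on a grid
            next_record += dt
        x, v, t = x + v * tau, -v, t + tau              # flip the velocity
    return np.array(samples)

draws = zigzag_gauss(total_time=10_000.0)
```

Unlike the Gibbs samplers above, the zigzag process is a non-reversible, piecewise-deterministic Markov process, which is the structure the cited complexity analysis works with.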
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.