Tracking Dynamic Gaussian Density with a Theoretically Optimal Sliding
Window Approach
- URL: http://arxiv.org/abs/2403.07207v1
- Date: Mon, 11 Mar 2024 23:21:26 GMT
- Title: Tracking Dynamic Gaussian Density with a Theoretically Optimal Sliding
Window Approach
- Authors: Yinsong Wang, Yu Ding, Shahin Shahrampour
- Abstract summary: We study the exact mean integrated squared error (MISE) of "sliding window" Kernel Density Estimators for evolving densities.
We present empirical evidence with synthetic datasets to show that our weighting scheme improves the tracking performance.
- Score: 16.45003200150227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic density estimation is ubiquitous in many applications, including
computer vision and signal processing. One popular method to tackle this
problem is the "sliding window" kernel density estimator. There exist various
implementations of this method that use heuristically defined weight sequences
for the observed data. The weight sequence, however, is a key aspect of the
estimator affecting the tracking performance significantly. In this work, we
study the exact mean integrated squared error (MISE) of "sliding window"
Gaussian Kernel Density Estimators for evolving Gaussian densities. We provide
a principled guide for choosing the optimal weight sequence by theoretically
characterizing the exact MISE, whose minimization can be formulated as a
constrained quadratic program. We present empirical evidence on synthetic
datasets showing that our weighting scheme indeed improves tracking
performance compared to heuristic approaches.
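To make the estimator and the weight optimization concrete, here is a minimal Python sketch (NumPy/SciPy). The paper derives the exact MISE as a quadratic function of the weight vector; the matrix `Q` and vector `c` below are hypothetical placeholders for those coefficients, not the paper's actual expressions.

```python
import numpy as np
from scipy.optimize import minimize


def sliding_window_kde(queries, window, weights, bandwidth):
    """Weighted 'sliding window' Gaussian KDE in one dimension.

    window  : array of the m most recent observations
    weights : nonnegative weights over the window, summing to one
    """
    diffs = (queries[:, None] - window[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return kernels @ weights


def optimal_weights(Q, c):
    """Solve the constrained quadratic program
        min_w  w @ Q @ w + c @ w   s.t.  w >= 0,  sum(w) = 1.
    Q and c are PLACEHOLDERS standing in for the exact-MISE
    coefficients derived in the paper."""
    m = len(c)
    result = minimize(
        lambda w: w @ Q @ w + c @ w,
        x0=np.full(m, 1.0 / m),            # start from uniform weights
        bounds=[(0.0, None)] * m,          # w_i >= 0
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return result.x


# Toy usage with placeholder coefficients that mildly favor recent data.
rng = np.random.default_rng(0)
window = rng.normal(size=20)
Q = np.eye(len(window))                    # placeholder quadratic term
c = -np.linspace(0.0, 1.0, len(window))    # placeholder: favor newer samples
w = optimal_weights(Q, c)
print(sliding_window_kde(np.array([-1.0, 0.0, 1.0]), window, w, bandwidth=0.5))
```

With this placeholder choice of `Q` and `c`, the program shifts weight toward the most recent observations, the qualitative behavior that heuristic weight sequences try to mimic.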
Related papers
- Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds [51.68005047958965]
We show that intrinsic Gaussian processes can achieve better performance in practice.
Our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency.
arXiv Detail & Related papers (2023-09-19T20:30:58Z)
- AD-DMKDE: Anomaly Detection through Density Matrices and Fourier Features [0.0]
The method can be seen as an efficient approximation of Kernel Density Estimation (KDE).
A systematic comparison of the proposed method with eleven state-of-the-art anomaly detection methods on various data sets is presented.
arXiv Detail & Related papers (2022-10-26T15:43:16Z)
- Fast Kernel Density Estimation with Density Matrices and Random Fourier Features [0.0]
Kernel density estimation (KDE) is one of the most widely used nonparametric density estimation methods.
DMKDE uses density matrices, a quantum mechanical formalism, and random Fourier features, an explicit kernel approximation, to produce density estimates.
DMKDE is on par with its competitors for computing density estimates and shows advantages on high-dimensional data (the random Fourier feature idea is sketched after this entry).
arXiv Detail & Related papers (2022-08-02T02:11:10Z)
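As a rough illustration of the random Fourier feature ingredient mentioned above (not DMKDE's density-matrix formalism itself), the sketch below approximates a Gaussian KDE with an explicit feature map, so each query costs O(D) after a one-time pass over the data. All names and parameter values here are illustrative.

```python
import numpy as np


def make_rff(dim, n_features, bandwidth, rng):
    """Random Fourier feature map z with z(x) @ z(y) approximating
    the Gaussian kernel exp(-||x - y||^2 / (2 * bandwidth**2))."""
    W = rng.normal(scale=1.0 / bandwidth, size=(n_features, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)


rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 2))                 # toy sample
bandwidth, d = 0.5, 2
z = make_rff(d, n_features=512, bandwidth=bandwidth, rng=rng)
mean_feat = z(data).mean(axis=0)                  # one-time O(n) pass


def density(queries):
    # Gaussian KDE normalizer times the approximated mean kernel value.
    # The approximation can dip slightly below zero far from the data.
    norm = (2.0 * np.pi * bandwidth**2) ** (-d / 2)
    return norm * (z(queries) @ mean_feat)


print(density(np.zeros((1, 2))))
```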
- Learning Transfer Operators by Kernel Density Estimation [0.0]
We recast the problem within the framework of statistical density estimation.
We demonstrate the validity and effectiveness of this approach in estimating the eigenvectors of the Frobenius-Perron operator.
We suggest the possibility of incorporating other density estimation methods into this field.
arXiv Detail & Related papers (2022-08-01T14:28:10Z)
- Quantum Adaptive Fourier Features for Neural Density Estimation [0.0]
This paper presents a method for neural density estimation that can be seen as a type of kernel density estimation.
The method is based on density matrices, a formalism used in quantum mechanics, and adaptive Fourier features.
The method was evaluated on different synthetic and real datasets, and its performance was compared against state-of-the-art neural density estimation methods.
arXiv Detail & Related papers (2022-08-01T01:39:11Z)
- TAKDE: Temporal Adaptive Kernel Density Estimator for Real-Time Dynamic Density Estimation [16.45003200150227]
We propose the temporal adaptive kernel density estimator (TAKDE).
TAKDE is theoretically optimal in terms of the worst-case AMISE.
We provide numerical experiments using synthetic and real-world datasets, showing that TAKDE outperforms other state-of-the-art dynamic density estimators.
arXiv Detail & Related papers (2022-03-15T23:38:32Z)
- Density-Based Clustering with Kernel Diffusion [59.4179549482505]
A naive density corresponding to the indicator function of a unit $d$-dimensional Euclidean ball is commonly used in density-based clustering algorithms (this naive density is sketched after this entry).
We propose a new kernel diffusion density function, which is adaptive to data of varying local distributional characteristics and smoothness.
arXiv Detail & Related papers (2021-10-11T09:00:33Z)
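For reference, the "naive density" that entry contrasts against is just a normalized count of samples inside a fixed-radius ball. A minimal sketch follows; the `radius` parameter and function name are illustrative, not from the paper.

```python
import numpy as np
from math import gamma, pi


def naive_ball_density(queries, data, radius=1.0):
    """Density estimate using the indicator kernel of a d-dimensional
    Euclidean ball: count the samples within `radius` of each query,
    then normalize by sample size and ball volume."""
    d = data.shape[1]
    ball_volume = pi ** (d / 2) / gamma(d / 2 + 1) * radius ** d
    dists = np.linalg.norm(queries[:, None, :] - data[None, :, :], axis=-1)
    return (dists <= radius).mean(axis=1) / ball_volume
```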
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work concerns the iteration complexity of zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost (a generic sketch follows this entry).
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
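The entry above mentions zeroth-order optimization with coordinate importance sampling. Below is a generic sketch of that idea (not the paper's hybrid method): a coordinate is drawn from an importance distribution and its partial derivative is estimated from two function queries, with importance-sampling reweighting keeping the estimate unbiased.

```python
import numpy as np


def zo_coordinate_grad(f, x, probs, mu=1e-4, rng=None):
    """Unbiased zeroth-order gradient estimate: sample coordinate i
    with probability probs[i], finite-difference df/dx_i from two
    function queries, and reweight by 1/probs[i]."""
    rng = rng or np.random.default_rng()
    i = rng.choice(len(x), p=probs)
    e = np.zeros_like(x)
    e[i] = 1.0
    deriv = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    g = np.zeros_like(x)
    g[i] = deriv / probs[i]                # importance-sampling reweighting
    return g


# Toy descent on a quadratic; the uniform `probs` here are illustrative.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(4)
probs = np.full(4, 0.25)
rng = np.random.default_rng(0)
for _ in range(500):
    x -= 0.05 * zo_coordinate_grad(f, x, probs, rng=rng)
print(x)                                   # approaches the optimum at 1
```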
- Nonparametric Density Estimation from Markov Chains [68.8204255655161]
We introduce a new nonparametric density estimator inspired by Markov chains, which generalizes the well-known Kernel Density Estimator.
Our estimator presents several benefits with respect to the usual ones and can be used straightforwardly as a foundation in all density-based algorithms.
arXiv Detail & Related papers (2020-09-08T18:33:42Z)
- Learning to Optimize Non-Rigid Tracking [54.94145312763044]
We employ learnable optimizations to improve robustness and speed up solver convergence.
First, we upgrade the tracking objective by integrating an alignment data term on deep features which are learned end-to-end through a CNN.
Second, we bridge the gap between the preconditioning technique and learning method by introducing a ConditionNet which is trained to generate a preconditioner.
arXiv Detail & Related papers (2020-03-27T04:40:57Z)