A Wiener process perspective on local intrinsic dimension estimation methods
- URL: http://arxiv.org/abs/2406.17125v1
- Date: Mon, 24 Jun 2024 20:27:13 GMT
- Title: A Wiener process perspective on local intrinsic dimension estimation methods
- Authors: Piotr Tempczyk, Łukasz Garncarek, Dominik Filipiak, Adam Kurpisz
- Abstract summary: Local intrinsic dimension (LID) estimation methods have received a lot of attention in recent years thanks to the progress in deep neural networks and generative modeling.
In this paper, we investigate the recent state-of-the-art parametric LID estimation methods from the perspective of the Wiener process.
- Score: 1.6988007266875604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Local intrinsic dimension (LID) estimation methods have received a lot of attention in recent years thanks to the progress in deep neural networks and generative modeling. In contrast to older non-parametric methods, the new methods use generative models to approximate the diffused dataset density and scale to high-dimensional datasets such as images. In this paper, we investigate the recent state-of-the-art parametric LID estimation methods from the perspective of the Wiener process. We explore how these methods behave when their assumptions are not met. We give an extended mathematical description of those methods and their error as a function of the probability density of the data.
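For intuition, the multi-scale recipe behind these parametric estimators can be sketched in a few lines: estimate the log-density of the data diffused with Gaussian noise at several scales delta, then read the LID off the slope of log rho_delta(x) against log delta, since log rho_delta(x) ≈ (d − D) log delta + const for data of intrinsic dimension d in ambient dimension D. The sketch below is a minimal illustration of this idea, with a Gaussian KDE standing in for the normalizing-flow density estimators used in practice; the toy circle data and the noise-scale grid are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of multi-scale LID estimation via diffused densities.
# A Gaussian KDE stands in for a learned generative density estimator.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Toy data: a unit circle (intrinsic dimension 1) embedded in R^3.
t = rng.uniform(0.0, 2.0 * np.pi, size=2000)
X = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
D = X.shape[1]                       # ambient dimension
query = X[:5]                        # points whose LID we estimate

deltas = np.array([0.01, 0.02, 0.05, 0.1])   # noise scales (assumed grid)
log_rho = []
for delta in deltas:
    # A Gaussian KDE with bandwidth delta estimates the delta-diffused
    # density rho_delta = (data distribution) * N(0, delta^2 I), i.e. the
    # marginal of a Wiener process started at the data after time delta^2.
    kde = KernelDensity(kernel="gaussian", bandwidth=float(delta)).fit(X)
    log_rho.append(kde.score_samples(query))
log_rho = np.stack(log_rho)          # shape: (len(deltas), n_query)

# For small delta, log rho_delta(x) ~ (d - D) * log delta + const,
# so a linear fit in log delta recovers d = D + slope.
slopes = np.polyfit(np.log(deltas), log_rho, deg=1)[0]
print("LID estimates:", D + slopes)  # expect values close to 1
```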
Related papers
- You are out of context! [0.0]
New data can act as forces stretching, compressing, or twisting the geometric relationships learned by a model.
We propose a novel drift detection methodology for machine learning (ML) models based on the concept of "deformation" in the vector space representation of data.
arXiv Detail & Related papers (2024-11-04T10:17:43Z)
- Latent Anomaly Detection Through Density Matrices [3.843839245375552]
This paper introduces a novel anomaly detection framework that combines the robust statistical principles of density-estimation-based anomaly detection methods with the representation-learning capabilities of deep learning models.
The method derived from this framework is presented in two different versions: a shallow approach and a deep approach that integrates an autoencoder to learn a low-dimensional representation of the data.
arXiv Detail & Related papers (2024-08-14T15:44:51Z)
- A Heat Diffusion Perspective on Geodesic Preserving Dimensionality Reduction [66.21060114843202]
We propose a more general heat kernel based manifold embedding method that we call heat geodesic embeddings.
Results show that our method outperforms the existing state of the art in preserving ground-truth manifold distances.
We also showcase our method on single cell RNA-sequencing datasets with both continuum and cluster structure.
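As a rough illustration of the heat-kernel-to-geodesic idea, the sketch below builds a kNN graph, computes the graph heat kernel exp(−tL), converts it to approximate geodesic distances via Varadhan's formula d(i, j)² ≈ −4t log h_t(i, j), and embeds them with metric MDS. This is a generic sketch under those assumptions, not the authors' exact heat geodesic embedding algorithm; the kNN graph, diffusion time, and MDS step are illustrative choices.

```python
# Generic heat-kernel embedding sketch (not the paper's exact method).
import numpy as np
from scipy.linalg import expm
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
t_param = 0.5                                   # diffusion time (assumed)

# Toy manifold: noisy circle in R^3.
ang = rng.uniform(0, 2 * np.pi, 200)
X = np.stack([np.cos(ang), np.sin(ang), 0.1 * rng.normal(size=200)], axis=1)

# Symmetric kNN adjacency and combinatorial Laplacian L = D - A.
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
A = np.maximum(A, A.T)
L = np.diag(A.sum(axis=1)) - A

H = expm(-t_param * L)                          # heat kernel exp(-tL)
H = np.clip(H, 1e-12, None)                     # guard the logarithm
# Varadhan's formula: d(i, j)^2 ~ -4t log h_t(i, j) for small t.
dist = np.sqrt(np.maximum(-4.0 * t_param * np.log(H), 0.0))
np.fill_diagonal(dist, 0.0)

emb = MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)
print(emb.shape)                                # (200, 2) embedding
```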
arXiv Detail & Related papers (2023-05-30T13:58:50Z)
- Bayesian Beta-Bernoulli Process Sparse Coding with Deep Neural Networks [11.937283219047984]
Several approximate inference methods have been proposed for deep discrete latent variable models.
We propose a non-parametric iterative algorithm for learning discrete latent representations in such deep models.
We evaluate our method across datasets with varying characteristics and compare our results to current amortized approximate inference methods.
arXiv Detail & Related papers (2023-03-14T20:50:12Z)
- LEAN-DMKDE: Quantum Latent Density Estimation for Anomaly Detection [0.0]
The method combines an autoencoder, for learning a low-dimensional representation of the data, with a density-estimation model.
The method predicts a degree of normality for new samples based on the estimated density.
The experimental results show that the method performs on par with or outperforms other state-of-the-art methods.
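The two-stage pattern described above (compress, then score by latent density) can be sketched as follows. A classical Gaussian KDE stands in here for the quantum density-matrix estimator, and the tiny autoencoder, toy data, and hyperparameters are illustrative assumptions rather than the paper's architecture.

```python
# Sketch: autoencoder latent space + density-based normality score.
import numpy as np
import torch
from torch import nn
from sklearn.neighbors import KernelDensity

torch.manual_seed(0)
X = torch.randn(1000, 20)            # toy "normal" training data

latent_dim = 4
encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 20))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):                 # train on reconstruction loss
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# Fit a density model in latent space; log-density is the normality score.
with torch.no_grad():
    Z = encoder(X).numpy()
kde = KernelDensity(bandwidth=0.5).fit(Z)

def normality_score(x_new: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        z = encoder(x_new).numpy()
    return kde.score_samples(z)      # low log-density => anomalous

print(normality_score(torch.randn(5, 20)))
```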
arXiv Detail & Related papers (2022-11-15T21:51:42Z)
- LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood [10.35315334180936]
We propose a novel approach to the problem: Local Intrinsic Dimension estimation using approximate Likelihood (LIDL).
Our method relies on an arbitrary density estimation method as its subroutine and hence tries to sidestep the dimensionality challenge.
We show that LIDL yields competitive results on the standard benchmarks for this problem and that it scales to thousands of dimensions.
arXiv Detail & Related papers (2022-06-29T19:47:46Z)
- Manifold Hypothesis in Data Analysis: Double Geometrically-Probabilistic Approach to Manifold Dimension Estimation [92.81218653234669]
We present a new approach to manifold hypothesis checking and underlying manifold dimension estimation.
Our geometrical method is a modification for sparse data of a well-known box-counting algorithm for Minkowski dimension calculation.
Experiments on real datasets show that the suggested approach, based on the combination of the two methods, is powerful and effective.
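For reference, the classical box-counting estimator that this method modifies can be written in a few lines: count the occupied grid cells N(eps) at several scales eps and fit the slope of log N(eps) against log(1/eps). The snippet below implements only the standard algorithm on a toy curve; the paper's sparse-data modification and probabilistic component are not reproduced, and the scale grid is an illustrative assumption.

```python
# Classical box counting for the Minkowski (box-counting) dimension.
import numpy as np

def box_counting_dimension(X, epsilons):
    """Estimate the box-counting dimension of a point cloud X."""
    counts = []
    for eps in epsilons:
        # Snap points to a grid of cell size eps and count occupied cells.
        cells = np.floor(X / eps).astype(np.int64)
        counts.append(len(np.unique(cells, axis=0)))
    # dim ~ slope of log N(eps) against log(1/eps)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 5000)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)   # 1D curve in R^2
print(box_counting_dimension(circle, [0.02, 0.05, 0.1, 0.2]))  # ~ 1
```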
arXiv Detail & Related papers (2021-07-08T15:35:54Z)
- On Contrastive Representations of Stochastic Processes [53.21653429290478]
Learning representations of processes is an emerging problem in machine learning.
We show that our methods are effective for learning representations of periodic functions, 3D objects and dynamical processes.
arXiv Detail & Related papers (2021-06-18T11:00:24Z)
- Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
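A common way to realize such an amortized ratio estimator is the classifier-based "ratio trick": train a network to distinguish joint pairs (theta, x) from pairs with x shuffled, and its logit approximates log p(x | theta) / p(x), whose expectation under the joint is the mutual information between parameters and data. The sketch below illustrates this trick with a toy Gaussian simulator; the simulator, prior, and network are illustrative assumptions, not the paper's setup.

```python
# Sketch of classifier-based likelihood-to-evidence ratio estimation.
import torch
from torch import nn

torch.manual_seed(0)

def simulator(theta):                # toy simulator: x ~ N(theta, 0.5^2)
    return theta + 0.5 * torch.randn_like(theta)

theta = torch.rand(4096, 1) * 4 - 2  # prior: Uniform(-2, 2)
x = simulator(theta)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(500):
    perm = torch.randperm(x.shape[0])
    joint = torch.cat([theta, x], dim=1)           # label 1: dependent pairs
    marginal = torch.cat([theta, x[perm]], dim=1)  # label 0: shuffled pairs
    logits = net(torch.cat([joint, marginal])).squeeze(1)
    labels = torch.cat([torch.ones(len(joint)), torch.zeros(len(marginal))])
    opt.zero_grad()
    loss = bce(logits, labels)
    loss.backward()
    opt.step()

# The trained logit approximates the log likelihood-to-evidence ratio.
print(net(torch.tensor([[0.0, 0.1]])).item())      # log r(x=0.1 | theta=0)
```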
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.