Measuring spatial uniformity with the hypersphere chord length
distribution
- URL: http://arxiv.org/abs/2004.05692v1
- Date: Sun, 12 Apr 2020 20:48:50 GMT
- Title: Measuring spatial uniformity with the hypersphere chord length
distribution
- Authors: Panagiotis Sidiropoulos
- Abstract summary: This article introduces a novel measure to assess data uniformity and detect uniform pointsets in high-dimensional Euclidean spaces.
The imposed connection between the distance distribution of uniformly selected points and the hyperspherical chord length distribution is employed to quantify uniformity.
- Score: 0.7310043452300736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data uniformity is a concept associated with several semantic data
characteristics such as lack of features, correlation and sample bias. This
article introduces a novel measure to assess data uniformity and detect uniform
pointsets in high-dimensional Euclidean spaces. The spatial uniformity measure
builds upon the isomorphism between hyperspherical chords and L2-normalised
data Euclidean distances, which is implied by the fact that, in Euclidean
spaces, L2-normalised data can be geometrically defined as points on a
hypersphere. The imposed connection between the distance distribution of
uniformly selected points and the hyperspherical chord length distribution is
employed to quantify uniformity. More specifically, the closed-form expression
of the hypersphere chord length distribution is revisited and extended, before
examining a few qualitative and quantitative characteristics of this
distribution that can be rather straightforwardly linked to data uniformity.
The experimental section includes validation in four distinct setups, thus
substantiating the potential of the new uniformity measure on practical
data-science applications.
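The measure sketched in the abstract can be illustrated empirically: L2-normalise the data so every point lies on the unit hypersphere, compute the pairwise Euclidean distances, and compare their empirical distribution with the known closed-form chord length density of the unit hypersphere in d dimensions, f(l) = Γ(d/2)/(√π Γ((d-1)/2)) · l^(d-2) · (1 - l²/4)^((d-3)/2) for l in [0, 2]. The snippet below is a minimal sketch of this idea, not the paper's exact statistic: the Kolmogorov-Smirnov comparison and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import kstest

def chord_cdf(l, d):
    # CDF of the chord length distribution on the unit hypersphere in
    # R^d, obtained by numerically integrating the closed-form density
    # f(l) = C * l^(d-2) * (1 - l^2/4)^((d-3)/2), for l in [0, 2].
    C = gamma(d / 2) / (np.sqrt(np.pi) * gamma((d - 1) / 2))
    grid = np.linspace(0.0, 2.0, 2001)
    pdf = C * grid ** (d - 2) * (1.0 - grid ** 2 / 4.0) ** ((d - 3) / 2)
    cdf = np.cumsum(pdf) * (grid[1] - grid[0])
    return np.interp(l, grid, cdf / cdf[-1])

def uniformity_score(X):
    # Project the points onto the unit hypersphere, then compare the
    # pairwise distance distribution against the chord length reference.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    d = X.shape[1]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dists = dists[np.triu_indices(len(X), k=1)]
    # KS statistic: near 0 for uniform pointsets, larger otherwise.
    return kstest(dists, lambda l: chord_cdf(l, d)).statistic
```

Normalised Gaussian samples are uniform on the hypersphere and should score near zero, while points concentrated in one orthant should score noticeably higher.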
Related papers
- Empirical Density Estimation based on Spline Quasi-Interpolation with
applications to Copulas clustering modeling [0.0]
Density estimation is a fundamental technique employed in various fields to model and to understand the underlying distribution of data.
In this paper we propose the mono-variate approximation of the density using quasi-interpolation.
The presented algorithm is validated on artificial and real datasets.
arXiv Detail & Related papers (2024-02-18T11:49:38Z) - Score Approximation, Estimation and Distribution Recovery of Diffusion
Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z) - Shape And Structure Preserving Differential Privacy [70.08490462870144]
We show how the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism.
arXiv Detail & Related papers (2022-09-21T18:14:38Z) - Intrinsic dimension estimation for discrete metrics [65.5438227932088]
In this letter we introduce an algorithm to infer the intrinsic dimension (ID) of datasets embedded in discrete spaces.
We demonstrate its accuracy on benchmark datasets, and we apply it to analyze a metagenomic dataset for species fingerprinting.
This suggests that evolutionary pressure acts on a low-dimensional manifold despite the high dimensionality of sequence space.
arXiv Detail & Related papers (2022-07-20T06:38:36Z) - Time-inhomogeneous diffusion geometry and topology [69.55228523791897]
Diffusion condensation is a time-inhomogeneous process where each step first computes and then applies a diffusion operator to the data.
We theoretically analyze the convergence and evolution of this process from geometric, spectral, and topological perspectives.
Our work gives theoretical insights into the convergence of diffusion condensation, and shows that it provides a link between topological and geometric data analysis.
arXiv Detail & Related papers (2022-03-28T16:06:17Z) - Tangent Space and Dimension Estimation with the Wasserstein Distance [10.118241139691952]
Consider a set of points sampled independently near a smooth compact submanifold of Euclidean space.
We provide mathematically rigorous bounds on the number of sample points required to estimate both the dimension and the tangent spaces of that manifold.
arXiv Detail & Related papers (2021-10-12T21:02:06Z) - Kernel distance measures for time series, random fields and other
structured data [71.61147615789537]
kdiff is a novel kernel-based measure for estimating distances between instances of structured data.
It accounts for both self and cross similarities across the instances and is defined using a lower quantile of the distance distribution.
Some theoretical results are provided for separability conditions using kdiff as a distance measure for clustering and classification problems.
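A quantile-based measure of this kind is simple to sketch. The function below is a hypothetical illustration of the idea only (cross distances between the elements of two structured instances, summarised by a lower quantile and offset by the corresponding self-similarity quantiles), not the published kdiff definition; the averaging scheme is an assumption.

```python
import numpy as np

def kdiff_sketch(A, B, q=0.1):
    # Lower-quantile summary of the cross distances between the elements
    # of two 1-D structured instances (e.g. two time series), offset by
    # the same quantile of each instance's self distances.
    cross = np.quantile(np.abs(A[:, None] - B[None, :]), q)
    self_a = np.quantile(np.abs(A[:, None] - A[None, :]), q)
    self_b = np.quantile(np.abs(B[:, None] - B[None, :]), q)
    return cross - 0.5 * (self_a + self_b)
```

Comparing an instance with itself yields zero, while a shifted copy yields a value on the order of the shift, which is the separability behaviour the abstract alludes to.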
arXiv Detail & Related papers (2021-09-29T22:54:17Z) - Depth-based pseudo-metrics between probability distributions [1.1470070927586016]
We propose two new pseudo-metrics between continuous probability measures based on data depth and its associated central regions.
In contrast to the Wasserstein distance, the proposed pseudo-metrics do not suffer from the curse of dimensionality.
The regions-based pseudo-metric appears to be robust w.r.t. both outliers and heavy tails.
arXiv Detail & Related papers (2021-03-23T17:33:18Z) - Geometry of Similarity Comparisons [51.552779977889045]
We show that the ordinal capacity of a space form is related to its dimension and the sign of its curvature.
More importantly, we show that the statistical behavior of the ordinal spread random variables defined on a similarity graph can be used to identify its underlying space form.
arXiv Detail & Related papers (2020-06-17T13:37:42Z) - AI Giving Back to Statistics? Discovery of the Coordinate System of
Univariate Distributions by Beta Variational Autoencoder [0.0]
The article discusses experiences of training neural networks to classify univariate empirical distributions and to represent them on a two-dimensional latent space, forcing disentanglement based on inputs of cumulative distribution functions (CDFs).
The representation on the latent two-dimensional coordinate system can be seen as an additional metadata of the real-world data that disentangles important distribution characteristics, such as shape of the CDF, classification probabilities of underlying theoretical distributions and their parameters, information entropy, and skewness.
arXiv Detail & Related papers (2020-04-06T14:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.