Fubini Study geometry of representation drift in high dimensional data
- URL: http://arxiv.org/abs/2602.02596v1
- Date: Sun, 01 Feb 2026 16:00:59 GMT
- Title: Fubini Study geometry of representation drift in high dimensional data
- Authors: Arturo Tozzi
- Abstract summary: High dimensional representation drift is commonly quantified using Euclidean or cosine distances. We introduce a projective geometric view of representation drift grounded in the Fubini Study metric. We show that the Fubini Study metric isolates intrinsic evolution by remaining invariant under gauge-induced fluctuations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High dimensional representation drift is commonly quantified using Euclidean or cosine distances, which presuppose fixed coordinates when comparing representations across time, training or preprocessing stages. While effective in many settings, these measures entangle intrinsic changes in the data with variations induced by arbitrary parametrizations. We introduce a projective geometric view of representation drift grounded in the Fubini Study metric, which identifies representations that differ only by gauge transformations such as global rescalings or sign flips. Applying this framework to empirical high dimensional datasets, we explicitly construct representation trajectories and track their evolution through cumulative geometric drift. Comparing Euclidean, cosine and Fubini Study distances along these trajectories reveals that conventional metrics systematically overestimate change whenever representations carry genuine projective ambiguity. By contrast, the Fubini Study metric isolates intrinsic evolution by remaining invariant under gauge-induced fluctuations. We further show that the difference between cosine and Fubini Study drift defines a computable, monotone quantity that directly captures representation churn attributable to gauge freedom. This separation provides a diagnostic for distinguishing meaningful structural evolution from parametrization artifacts, without introducing model-specific assumptions. Overall, we establish a geometric criterion for assessing representation stability in high-dimensional systems and clarify the limits of angular distances. Embedding representation dynamics in projective space connects data analysis with established geometric programs and yields observables that are directly testable in empirical workflows.
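The abstract's central claim can be illustrated numerically. Below is a minimal sketch (not the paper's own code) using the standard real-vector form of the Fubini-Study distance, arccos of the absolute normalized inner product; the function names and the test vectors are illustrative assumptions. A gauge transformation such as a sign flip combined with a rescaling leaves the Fubini-Study distance at zero while the plain angular (cosine) distance registers maximal change, and the gap between the two is exactly the churn attributable to gauge freedom.

```python
import numpy as np

def cosine_distance(u, v):
    """Angular distance between vectors; sensitive to sign flips."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def fubini_study_distance(u, v):
    """Projective distance between the rays [u] and [v].

    The absolute value quotients out global sign and scale, so the
    result is invariant under gauge transformations u -> c*u, c != 0.
    """
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, 0.0, 1.0)))

rng = np.random.default_rng(0)
u = rng.standard_normal(128)
v = -2.5 * u  # same ray as u: sign flip plus rescaling (pure gauge)

d_cos = cosine_distance(u, v)        # pi: cosine reports maximal drift
d_fs = fubini_study_distance(u, v)   # 0:  no intrinsic change
churn = d_cos - d_fs                 # drift attributable to gauge freedom
print(d_cos, d_fs, churn)
```

Accumulating these per-step distances along a representation trajectory gives the cumulative drift curves the abstract compares; the nonnegative difference of the cosine and Fubini-Study curves is the computable churn diagnostic.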
Related papers
- GeodesicNVS: Probability Density Geodesic Flow Matching for Novel View Synthesis [54.39598154430305]
We propose a Data-to-Data Flow Matching framework that learns deterministic transformations directly between paired views. PDG-FM constrains flow trajectories using geodesic interpolants derived from probability density metrics of pretrained diffusion models. These results highlight the advantages of incorporating data-dependent geometric regularization into deterministic flow matching for consistent novel view generation.
arXiv Detail & Related papers (2026-03-01T09:30:11Z)
- Generative Modeling of Discrete Data Using Geometric Latent Subspaces [3.7015295923035705]
We introduce the use of latent subspaces in the exponential parameter space of a product manifold of categorical distributions. The low-dimensional latent space encodes statistical dependencies and removes redundant degrees of freedom among the variables. We show that reduced latent dimensions suffice to represent data for generative modeling.
arXiv Detail & Related papers (2026-01-29T15:14:15Z)
- Scale-Consistent State-Space Dynamics via Fractal of Stationary Transformations [9.983526161001997]
Recent deep learning models increasingly rely on depth without structural guarantees on the validity of intermediate representations. We address this limitation by formulating a structural requirement for a state-space model's scale-consistent latent dynamics. We empirically verify the predicted scale-consistent behavior, showing that adaptive efficiency emerges from the aligned latent geometry.
arXiv Detail & Related papers (2026-01-27T12:44:20Z)
- ARGUS: Adaptive Rotation-Invariant Geometric Unsupervised System [0.0]
This paper introduces Argus, a framework that reconceptualizes drift detection as tracking local statistics over a fixed spatial partition of the data manifold. Voronoi tessellations over canonical orthonormal frames yield drift metrics that are invariant to transformations. A graph-theoretic characterization of drift propagation is developed that distinguishes coherent distributional shifts from isolated perturbations.
arXiv Detail & Related papers (2026-01-03T22:39:20Z)
- VIKING: Deep variational inference with stochastic projections [48.946143517489496]
Variational mean field approximations tend to struggle with contemporary overparametrized deep neural networks. We propose a simple variational family that considers two independent linear subspaces of the parameter space. This allows us to build a fully-correlated approximate posterior reflecting the overparametrization.
arXiv Detail & Related papers (2025-10-27T15:38:35Z)
- What's Inside Your Diffusion Model? A Score-Based Riemannian Metric to Explore the Data Manifold [0.053713376045563095]
We introduce a score-based Riemannian metric to characterize the intrinsic geometry of a data manifold. Our approach creates a geometry where geodesics naturally follow the manifold's contours. We show that our score-based geodesics capture meaningful perpendicular transformations that respect the underlying data distribution.
arXiv Detail & Related papers (2025-05-16T11:19:57Z)
- Measuring Orthogonality in Representations of Generative Models [81.13466637365553]
In unsupervised representation learning, models aim to distill essential features from high-dimensional data into lower-dimensional learned representations.
Disentanglement of independent generative processes has long been credited with producing high-quality representations.
We propose two novel metrics: Importance-Weighted Orthogonality (IWO) and Importance-Weighted Rank (IWR).
arXiv Detail & Related papers (2024-07-04T08:21:54Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Geometric Scattering on Measure Spaces [15.819230791757906]
We introduce a general, unified model for geometric scattering on measure spaces. We consider finite measure spaces that are obtained from randomly sampling an unknown manifold. We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold.
arXiv Detail & Related papers (2022-08-17T22:40:09Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.