Topological Parallax: A Geometric Specification for Deep Perception Models
- URL: http://arxiv.org/abs/2306.11835v2
- Date: Fri, 27 Oct 2023 16:06:07 GMT
- Title: Topological Parallax: A Geometric Specification for Deep Perception Models
- Authors: Abraham D. Smith, Michael J. Catanzaro, Gabrielle Angeloro, Nirav Patel, Paul Bendich
- Abstract summary: We introduce topological parallax as a theoretical and computational tool that compares a trained model to a reference dataset.
Our examples show that this geometric similarity between dataset and model is essential to trustworthy interpolation and perturbation.
This new concept will add value to the current debate regarding the unclear relationship between overfitting and generalization in applications of deep-learning.
- Score: 0.778001492222129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For safety and robustness of AI systems, we introduce topological parallax as
a theoretical and computational tool that compares a trained model to a
reference dataset to determine whether they have similar multiscale geometric
structure. Our proofs and examples show that this geometric similarity between
dataset and model is essential to trustworthy interpolation and perturbation,
and we conjecture that this new concept will add value to the current debate
regarding the unclear relationship between overfitting and generalization in
applications of deep-learning. In typical DNN applications, an explicit
geometric description of the model is impossible, but parallax can estimate
topological features (components, cycles, voids, etc.) in the model by
examining the effect on the Rips complex of geodesic distortions using the
reference dataset. Thus, parallax indicates whether the model shares similar
multiscale geometric features with the dataset. Parallax presents theoretically
via topological data analysis [TDA] as a bi-filtered persistence module, and
the key properties of this module are stable under perturbation of the
reference dataset.
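The parallax construction itself is a bi-filtered persistence module, but its starting point, comparing the Rips persistence of a reference dataset with that of points obtained from the model, can be illustrated directly. Below is a minimal sketch, not the authors' implementation, using the GUDHI library; `model_sample` is a hypothetical stand-in for a point cloud drawn from the trained model.

```python
# A minimal sketch, not the paper's parallax construction: compare the
# multiscale topology (components, cycles) of a reference dataset with that
# of a point cloud obtained from a trained model, via Rips persistence.
import numpy as np
import gudhi  # pip install gudhi

def rips_intervals(points, max_edge=2.0, max_dim=2, homology_dim=1):
    """Persistence intervals of the Vietoris-Rips filtration of a point cloud."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge)
    st = rips.create_simplex_tree(max_dimension=max_dim)
    st.compute_persistence()
    return st.persistence_intervals_in_dimension(homology_dim)

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
dataset = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))
# Hypothetical sample from the model; here a noisier copy of the same circle.
model_sample = np.c_[np.cos(theta), np.sin(theta)] + 0.10 * rng.normal(size=(200, 2))

# A small bottleneck distance between the H1 diagrams suggests the model and
# the dataset share similar multiscale geometric features (here, one cycle).
d_data = rips_intervals(dataset)
d_model = rips_intervals(model_sample)
print("bottleneck distance (H1):", gudhi.bottleneck_distance(d_data, d_model))
```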
Related papers
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- (Deep) Generative Geodesics [57.635187092922976]
We introduce a new Riemannian metric to assess the similarity between any two data points.
Our metric leads to the conceptual definition of generative distances and generative geodesics.
Their approximations are proven to converge to their true values under mild conditions.
arXiv Detail & Related papers (2024-07-15T21:14:02Z)
- GeoBench: Benchmarking and Analyzing Monocular Geometry Estimation Models [41.76935689355034]
Discriminative and generative pretraining have yielded geometry estimation models with strong generalization capabilities.
We build fair and strong baselines for evaluating and analyzing geometry estimation models.
We evaluate monocular geometry estimators on more challenging benchmarks with diverse scenes and high-quality annotations.
arXiv Detail & Related papers (2024-06-18T14:44:12Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- Geometric and Topological Inference for Deep Representations of Complex Networks [13.173307471333619]
We present a class of statistics that emphasize the topology as well as the geometry of representations.
We evaluate these statistics in terms of the sensitivity and specificity that they afford when used for model selection.
These new methods enable brain and computer scientists to visualize the dynamic representational transformations learned by brains and models.
arXiv Detail & Related papers (2022-03-10T17:14:14Z)
- Gaussian Determinantal Processes: a new model for directionality in data [10.591948377239921]
In this work, we investigate a parametric family of Gaussian DPPs with a clearly interpretable effect of parametric modulation on the observed points.
We show that parameter modulation impacts the observed points by introducing directionality in their repulsion structure, and the principal directions correspond to the directions of maximal dependency.
This model readily yields a novel and viable alternative to Principal Component Analysis (PCA) as a dimension reduction tool that favors directions along which the data is most spread out.
arXiv Detail & Related papers (2021-11-19T00:57:33Z)
- Learning Linear Polytree Structural Equation Models [4.833417613564028]
We are interested in the problem of learning the directed acyclic graph (DAG) when data are generated from a linear structural equation model (SEM).
We study sufficient conditions on the sample sizes for the well-known Chow-Liu algorithm to exactly recover both the skeleton and the equivalence class of the polytree; a minimal sketch of the skeleton step appears after this list.
We also consider an extension to group linear polytree models, in which each node represents a group of variables.
arXiv Detail & Related papers (2021-07-22T23:22:20Z)
- Predicting Multidimensional Data via Tensor Learning [0.0]
We develop a model that retains the intrinsic multidimensional structure of the dataset.
To estimate the model parameters, an Alternating Least Squares algorithm is developed; a generic ALS sketch appears after this list.
The proposed model is able to outperform benchmark models present in the forecasting literature.
arXiv Detail & Related papers (2020-02-11T11:57:07Z)
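As referenced in the Learning Linear Polytree entry above, the following is a minimal sketch of the Chow-Liu skeleton step. It assumes jointly Gaussian data, for which mutual information is increasing in absolute correlation, so a maximum-weight spanning tree over the absolute correlations yields the Chow-Liu tree; it is illustrative, not the paper's code.

```python
# Chow-Liu skeleton sketch for Gaussian data (an illustrative assumption): a
# maximum-weight spanning tree over absolute correlations recovers the tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_skeleton(X):
    """Edges of the Chow-Liu tree skeleton from an (n_samples, p) array X."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    # Minimum spanning tree on negated weights = maximum-weight spanning tree.
    mst = minimum_spanning_tree(-corr)
    rows, cols = mst.nonzero()
    return sorted(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(1)
# Toy linear polytree: x0 -> x1 -> x2 and x0 -> x3.
x0 = rng.normal(size=5000)
x1 = 0.8 * x0 + rng.normal(size=5000)
x2 = 0.7 * x1 + rng.normal(size=5000)
x3 = 0.6 * x0 + rng.normal(size=5000)
X = np.stack([x0, x1, x2, x3], axis=1)
print(chow_liu_skeleton(X))  # expect the undirected edges 0-1, 1-2, 0-3
```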
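The tensor-learning entry above mentions an Alternating Least Squares estimator. As a stand-in for the paper's tensor model, here is a generic ALS sketch for the simplest matrix case, a low-rank factorization Y ≈ A B^T, alternating ridge-regularized least-squares solves for each factor.

```python
# Generic Alternating Least Squares (ALS) sketch for a low-rank matrix
# factorization Y ~= A @ B.T. Illustrative stand-in only: the paper develops
# ALS for a tensor model that keeps the data's multidimensional structure.
import numpy as np

def als(Y, rank=5, n_iter=50, lam=1e-3):
    """Alternate ridge-regularized least-squares solves for factors A and B."""
    n, m = Y.shape
    rng = np.random.default_rng(0)
    A = rng.normal(size=(n, rank))
    B = rng.normal(size=(m, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        # With B fixed, A has a closed-form ridge solution, and vice versa.
        A = np.linalg.solve(B.T @ B + I, B.T @ Y.T).T
        B = np.linalg.solve(A.T @ A + I, A.T @ Y).T
    return A, B

rng = np.random.default_rng(1)
Y = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 20))  # exactly rank 5
A, B = als(Y)
print("relative error:", np.linalg.norm(Y - A @ B.T) / np.linalg.norm(Y))
```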
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.