Learning Differential Invariants of Planar Curves
- URL: http://arxiv.org/abs/2303.03458v1
- Date: Mon, 6 Mar 2023 19:30:43 GMT
- Title: Learning Differential Invariants of Planar Curves
- Authors: Roy Velich and Ron Kimmel
- Abstract summary: We propose a learning paradigm for the numerical approximation of differential invariants of planar curves.
Deep neural-networks' (DNNs) universal approximation properties are utilized to estimate geometric measures.
- Score: 12.699486382844393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a learning paradigm for the numerical approximation of
differential invariants of planar curves. Deep neural-networks' (DNNs)
universal approximation properties are utilized to estimate geometric measures.
The proposed framework is shown to be a preferable alternative to axiomatic
constructions. Specifically, we show that DNNs can learn to overcome
instabilities and sampling artifacts and produce consistent signatures for
curves subject to a given group of transformations in the plane. We compare the
proposed schemes to alternative state-of-the-art axiomatic constructions of
differential invariants. We evaluate our models qualitatively and
quantitatively and propose a benchmark dataset to evaluate approximation models
of differential invariants of planar curves.
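The simplest differential invariant in the Euclidean setting is the curvature of the curve, which the axiomatic baselines approximate with finite differences. As a minimal illustration of such an axiomatic construction (not code from the paper; the function name `curvature` is ours), the curvature of a sampled planar curve can be estimated as:

```python
import numpy as np

def curvature(x, y):
    """Finite-difference estimate of the Euclidean curvature
    kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
    for a planar curve sampled at points (x[i], y[i]).
    The formula is invariant to reparameterization, so
    differentiating with respect to the sample index suffices."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check on a circle of radius 2: curvature should be ~1/2.
t = np.linspace(0, 2 * np.pi, 2000)
kappa = curvature(2 * np.cos(t), 2 * np.sin(t))
```

Estimates of this kind are exactly where the instabilities and sampling artifacts mentioned above arise: the second derivatives amplify noise in the sample positions, which motivates replacing the closed-form estimator with a learned one.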
Related papers
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
First, we introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
Second, we propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z) - Nonparametric Automatic Differentiation Variational Inference with Spline Approximation [7.5620760132717795]
We develop a nonparametric approximation approach that enables flexible posterior approximation for distributions with complicated structures.
Compared with widely-used nonparametric inference methods, the proposed method is easy to implement and adaptive to various data structures.
Experiments demonstrate the efficiency of the proposed method in approximating complex posterior distributions and improving the performance of generative models with incomplete data.
arXiv Detail & Related papers (2024-03-10T20:22:06Z) - Natural Evolution Strategies as a Black Box Estimator for Stochastic Variational Inference [0.0]
The VAE framework allows unbiased and low-variance gradient estimation, but restricts the types of models that can be created.
An alternative gradient estimator based on natural evolution strategies is proposed.
This estimator does not make assumptions about the kind of distributions used, allowing for the creation of models that would otherwise not have been possible under the VAE framework.
arXiv Detail & Related papers (2023-08-15T21:43:11Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Recursive Monte Carlo and Variational Inference with Auxiliary Variables [64.25762042361839]
Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive families.
We demonstrate RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al.'s Markov Chain Variational Inference.
arXiv Detail & Related papers (2022-03-05T23:52:40Z) - Deep Signatures -- Learning Invariants of Planar Curves [12.699486382844393]
We propose a learning paradigm for numerical approximation of differential invariants of planar curves.
Deep neural-networks' (DNNs) universal approximation properties are utilized to estimate geometric measures.
arXiv Detail & Related papers (2022-02-11T22:34:15Z) - Geometric variational inference [0.0]
Variational Inference (VI) or Markov-Chain Monte-Carlo (MCMC) techniques are used to go beyond point estimates.
This work proposes geometric Variational Inference (geoVI), a method based on Riemannian geometry and the Fisher information metric.
The distribution, expressed in the coordinate system induced by the transformation, takes a particularly simple form that allows for an accurate variational approximation.
arXiv Detail & Related papers (2021-05-21T17:18:50Z) - A Differential Geometry Perspective on Orthogonal Recurrent Models [56.09491978954866]
We employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.
We show that orthogonal RNNs may be viewed as optimizing in the space of divergence-free vector fields.
Motivated by this observation, we study a new recurrent model, which spans the entire space of vector fields.
arXiv Detail & Related papers (2021-02-18T19:39:22Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.