FineMorphs: Affine-diffeomorphic sequences for regression
- URL: http://arxiv.org/abs/2305.17255v1
- Date: Fri, 26 May 2023 20:54:18 GMT
- Title: FineMorphs: Affine-diffeomorphic sequences for regression
- Authors: Michele Lohr, Laurent Younes
- Abstract summary: The model states are optimally "reshaped" by diffeomorphisms generated by smooth vector fields during learning.
Affine transformations and vector fields are optimized within an optimal control setting.
The model can naturally reduce (or increase) dimensionality and adapt to large datasets via suboptimal vector fields.
- Score: 1.1421942894219896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A multivariate regression model of affine and diffeomorphic transformation
sequences - FineMorphs - is presented. Leveraging concepts from shape analysis,
model states are optimally "reshaped" by diffeomorphisms generated by smooth
vector fields during learning. Affine transformations and vector fields are
optimized within an optimal control setting, and the model can naturally reduce
(or increase) dimensionality and adapt to large datasets via suboptimal vector
fields. An existence proof of solution and necessary conditions for optimality
for the model are derived. Experimental results on real datasets from the UCI
repository are presented, with favorable results in comparison with
state-of-the-art in the literature and densely-connected neural networks in
TensorFlow.
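The central construction — alternating affine layers with diffeomorphisms obtained by integrating smooth vector fields — can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: the kernel form, shapes, and step sizes are assumptions of ours, and the optimal-control training of the affine maps and vector fields is omitted entirely.

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    # Smooth RBF kernel between data points x (n, d) and control points c (m, d).
    d2 = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def flow(x, controls, momenta, steps=10, dt=0.1):
    # Integrate dx/dt = v(x) with the smooth field v(x) = K(x, controls) @ momenta.
    # Many small steps of a smooth field keep the map close to a diffeomorphism.
    for _ in range(steps):
        x = x + dt * gaussian_kernel(x, controls) @ momenta
    return x

def finemorphs_forward(x, A1, b1, controls, momenta, A2, b2):
    # Affine layer -> diffeomorphic "reshaping" -> affine regression head.
    h = x @ A1 + b1                  # affine map (can change dimension)
    h = flow(h, controls, momenta)   # diffeomorphism generated by a smooth field
    return h @ A2 + b2

# Toy usage on random data (all shapes and values illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
A1, b1 = 0.1 * rng.normal(size=(3, 5)), np.zeros(5)
controls = rng.normal(size=(8, 5))
momenta = 0.01 * rng.normal(size=(8, 5))
A2, b2 = 0.1 * rng.normal(size=(5, 1)), np.zeros(1)
y_hat = finemorphs_forward(x, A1, b1, controls, momenta, A2, b2)
```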
Related papers
- Optimal Matrix-Mimetic Tensor Algebras via Variable Projection [0.0]
Matrix mimeticity arises from interpreting tensors as operators that can be multiplied, factorized, and analyzed analogously to matrices.
We learn optimal linear mappings and corresponding tensor representations without relying on prior knowledge of the data.
We provide original theory of uniqueness of the transformation and convergence analysis of our variable-projection-based algorithm.
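As a rough illustration of the matrix-mimetic idea, the sketch below implements a transform-based tensor product: mode-3 fibers are moved into the transform domain by an invertible matrix M, the frontal slices are multiplied like matrices, and the result is transformed back. The paper learns an optimal M via variable projection; here M is a fixed random invertible matrix, and all names are ours.

```python
import numpy as np

def m_product(A, B, M):
    # Transform-domain "matrix-mimetic" tensor product: apply M along the
    # third mode, multiply frontal slices facewise, then transform back.
    Ah = np.einsum('pk,ijk->ijp', M, A)      # A into the transform domain
    Bh = np.einsum('pk,ijk->ijp', M, B)      # B into the transform domain
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # facewise matrix multiplication
    Minv = np.linalg.inv(M)
    return np.einsum('pk,ijk->ijp', Minv, Ch)  # back to the original domain

# Toy usage: a random invertible transform stands in for the learned M.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3, 5))
B = rng.normal(size=(3, 2, 5))
M = rng.normal(size=(5, 5)) + 5 * np.eye(5)  # well-conditioned, invertible
C = m_product(A, B, M)                       # shape (4, 2, 5)
```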
arXiv Detail & Related papers (2024-06-11T04:52:23Z)
- NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation [27.22029199085009]
NIVeL reinterprets the problem on an alternative, intermediate domain which preserves the desirable properties of vector graphics.
Based on our experiments, NIVeL produces text-to-vector graphics results of significantly better quality than the state-of-the-art.
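A minimal, hypothetical sketch of the implicit-layer idea: a small MLP maps 2D coordinates to per-layer occupancies, giving a resolution-free intermediate representation that can be rasterized at any grid size. The architecture and weights below are illustrative stand-ins, not NIVeL's actual model.

```python
import numpy as np

def implicit_layers(xy, W1, b1, W2, b2):
    # Tiny MLP mapping 2D coordinates to per-layer occupancy values:
    # a continuous, resolution-free stand-in for vector-graphics layers.
    h = np.tanh(xy @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # occupancies in [0, 1]

# Rasterize the implicit layers on a grid (random weights, illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)   # 3 decomposed shape layers
g = np.linspace(-1, 1, 64)
xy = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
occupancy = implicit_layers(xy, W1, b1, W2, b2).reshape(64, 64, 3)
```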
arXiv Detail & Related papers (2024-05-24T05:15:45Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training poses a non-convex optimization problem.
In this paper we examine convex reformulations of neural network training.
We show that the stationary points of the non-convex objective can be characterized as the global optima of subsampled convex (Lasso-type) programs.
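To convey the flavor of such subsampled convex programs (under our own simplifying assumptions, not the paper's exact formulation), the sketch below fixes a random subsample of ReLU gate patterns and fits sparse linear weights over the gated features with scikit-learn's Lasso — a convex problem whose global optimum is found reliably.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Convex surrogate for a two-layer ReLU net: fix a subsample of gate patterns
# D_i = diag(1[X g_i >= 0]) and fit sparse linear weights over the gated
# features (D_i X) with an L1 penalty -- a convex (Lasso) program.
rng = np.random.default_rng(0)
n, d, m = 200, 5, 20
X = rng.normal(size=(n, d))
y = np.maximum(X @ rng.normal(size=d), 0) + 0.1 * rng.normal(size=n)

gates = rng.normal(size=(d, m))              # subsampled gate directions
D = (X @ gates >= 0).astype(float)           # (n, m) activation patterns
features = np.concatenate([D[:, [i]] * X for i in range(m)], axis=1)

model = Lasso(alpha=0.01).fit(features, y)   # global optimum of a convex program
print("train R^2:", model.score(features, y))
```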
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
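The inverse-free ingredient can be sketched as follows: a trace term of the form tr(Σ⁻¹ dΣ), which appears in Gaussian log-likelihood gradients, is estimated with Hutchinson probes and conjugate-gradient solves instead of an explicit inverse. This is a generic illustration under our own assumptions, not the paper's full unrolled estimator.

```python
import numpy as np
from scipy.sparse.linalg import cg

def trace_inv_product(Sigma, dSigma, num_probes=64, rng=None):
    # Monte Carlo estimate of tr(Sigma^{-1} dSigma) without inverting Sigma:
    # average z^T Sigma^{-1} dSigma z over Rademacher probes z (Hutchinson),
    # with each solve Sigma^{-1} z handled by an iterative CG solver.
    rng = rng or np.random.default_rng(0)
    n = Sigma.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        u, info = cg(Sigma, z)        # iterative solve, no matrix inverse
        total += u @ (dSigma @ z)
    return total / num_probes

# Toy check against the exact value on a small SPD matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 30))
Sigma = A @ A.T + 30 * np.eye(30)
dSigma = np.eye(30)
print(trace_inv_product(Sigma, dSigma), np.trace(np.linalg.solve(Sigma, dSigma)))
```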
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space gives a distorted view of the data space, which results in poor representation learning.
We show that accurate geodesic computation can substantially improve the performance of deep generative models.
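A hedged sketch of the geodesic idea (not VTAE's transformer architecture): discretize a latent path between two codes and relax its interior points by gradient descent on the decoded path energy, whose minimizers approximate geodesics on the decoder-induced manifold. The toy decoder and finite-difference gradients below are illustrative assumptions.

```python
import numpy as np

def decoder(z):
    # Toy nonlinear decoder standing in for a deep generative model.
    return np.stack([np.sin(3 * z[..., 0]), np.cos(3 * z[..., 1]),
                     z[..., 0] * z[..., 1]], -1)

def path_energy(path):
    # Discrete curve energy of the decoded path; minimizing it over interior
    # points approximates a geodesic on the decoder-induced manifold.
    diffs = decoder(path[1:]) - decoder(path[:-1])
    return (diffs ** 2).sum()

def geodesic(z0, z1, T=16, iters=500, lr=0.05, eps=1e-4):
    # Start from the straight latent line, then relax interior points by
    # finite-difference gradient descent on the path energy.
    path = np.linspace(z0, z1, T)
    for _ in range(iters):
        grad = np.zeros_like(path)
        for t in range(1, T - 1):
            for k in range(path.shape[1]):
                p = path.copy(); p[t, k] += eps
                m = path.copy(); m[t, k] -= eps
                grad[t, k] = (path_energy(p) - path_energy(m)) / (2 * eps)
        path[1:-1] -= lr * grad[1:-1]
    return path

curve = geodesic(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```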
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
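As a simplified stand-in for the fixed-rank geometry (illustrative assumptions throughout, and without the paper's elliptical-distribution penalty), the sketch below fits a low-rank-structured Gaussian precision Θ = F Fᵀ + εI by gradient ascent on the factor F, which keeps Θ positive definite by construction.

```python
import numpy as np

def fit_lowrank_precision(S, rank=2, eps=0.1, iters=300, lr=0.01, seed=0):
    # Gaussian MLE with a low-rank-structured precision Theta = F F^T + eps*I,
    # fit by plain gradient ascent on F -- a simple stand-in for Riemannian
    # optimization over fixed-rank positive semi-definite matrices.
    rng = np.random.default_rng(seed)
    p = S.shape[0]
    F = 0.1 * rng.normal(size=(p, rank))
    for _ in range(iters):
        Theta = F @ F.T + eps * np.eye(p)
        grad_Theta = np.linalg.inv(Theta) - S  # gradient of logdet(Theta) - tr(S Theta)
        F += lr * 2 * grad_Theta @ F           # chain rule through Theta = F F^T
    return F @ F.T + eps * np.eye(p)

# Toy usage on a sample covariance from random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
Theta = fit_lowrank_precision(np.cov(X, rowvar=False))
```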
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolutions and learning directly from the adjacency matrix for node classification.
We show that the resulting models give rise to new graph filters with a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
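A minimal sketch of a graph polynomial convolution, H = Σₖ θₖ Aᵏ X, computed with repeated products rather than explicit matrix powers. The normalization and coefficients below are common choices, not necessarily the paper's.

```python
import numpy as np

def graph_polynomial_conv(A, X, theta):
    # Higher-order graph convolution: combine powers of the (normalized)
    # adjacency matrix, H = sum_k theta[k] * A^k X, learning directly from A.
    H = theta[0] * X
    Ak_X = X
    for k in range(1, len(theta)):
        Ak_X = A @ Ak_X            # next power applied to features
        H = H + theta[k] * Ak_X
    return H

# Toy usage with a symmetrically normalized random adjacency matrix.
rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.maximum(A, A.T)             # undirected
d = A.sum(1); d[d == 0] = 1.0
A = A / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
X = rng.normal(size=(10, 4))
H = graph_polynomial_conv(A, X, theta=[1.0, 0.5, 0.25])
```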
arXiv Detail & Related papers (2022-09-12T04:46:55Z)
- Using Shape Constraints for Improving Symbolic Regression Models [0.0]
We describe and analyze algorithms for shape-constrained symbolic regression.
We use a set of models from physics textbooks to test the algorithms.
The results show that all algorithms are able to find models which conform to all shape constraints.
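A toy illustration of how a shape constraint can filter candidate symbolic models: monotonicity is checked on a dense sample grid (the cheap approximate route; interval arithmetic is the sound alternative in this literature). The candidate set and range below are our own.

```python
import numpy as np

def satisfies_monotone_increasing(model, grid, eps=0.0):
    # Shape-constraint check: verify monotonicity on a dense sample grid,
    # rejecting candidate models that violate the constraint.
    values = model(grid)
    return np.all(np.diff(values) >= -eps)

# Candidate symbolic models (illustrative), filtered by the shape constraint.
candidates = {
    "x^2":    lambda x: x ** 2,
    "x^3":    lambda x: x ** 3,
    "exp(x)": lambda x: np.exp(x),
    "sin(x)": lambda x: np.sin(x),
}
grid = np.linspace(-2.0, 2.0, 1000)
kept = [n for n, f in candidates.items() if satisfies_monotone_increasing(f, grid)]
print(kept)  # x^3 and exp(x) are increasing on [-2, 2]; x^2 and sin(x) are not
```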
arXiv Detail & Related papers (2021-07-20T12:53:28Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method optimizes a rigorous lower bound on the evidence for SIVI.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- Learning Convex Optimization Models [0.5524804393257919]
A convex optimization model predicts an output from an input by solving a convex optimization problem.
We propose methods for learning the parameters in a convex optimization model given a dataset of input-output pairs.
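A minimal instance of such a model (our construction, not the paper's general framework): the prediction is the argmin of a small convex problem with a closed-form soft-thresholding solution, and the parameter W is learned from input-output pairs by differentiating through that solution map.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def predict(W, lam, X):
    # Convex optimization model: the prediction solves
    # min_z 0.5*||z - W x||^2 + lam*||z||_1, whose argmin is soft-thresholding.
    return soft_threshold(X @ W.T, lam)

def fit(X, Y, lam=0.1, iters=500, lr=0.1):
    # Learn W by gradient descent through the solution map; the derivative of
    # soft-threshold is 0 or 1 entrywise, so only "active" entries get gradient.
    W = np.zeros((Y.shape[1], X.shape[1]))
    for _ in range(iters):
        Z = X @ W.T
        active = (np.abs(Z) > lam).astype(float)
        resid = (soft_threshold(Z, lam) - Y) * active
        W -= lr * resid.T @ X / len(X)
    return W

# Toy usage: recover a convex model from input-output pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
W_true = rng.normal(size=(3, 4))
Y = soft_threshold(X @ W_true.T, 0.1)
W = fit(X, Y)
```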
arXiv Detail & Related papers (2020-06-07T20:01:51Z)
- Analysis of Bayesian Inference Algorithms by the Dynamical Functional Approach [2.8021833233819486]
We analyze an algorithm for approximate inference with large Gaussian latent variable models in a student-teacher scenario.
For the case of perfect data-model matching, the knowledge of static order parameters derived from the replica method allows us to obtain efficient algorithmic updates.
arXiv Detail & Related papers (2020-01-14T17:22:02Z)