On Linear Interpolation in the Latent Space of Deep Generative Models
- URL: http://arxiv.org/abs/2105.03663v1
- Date: Sat, 8 May 2021 10:27:07 GMT
- Title: On Linear Interpolation in the Latent Space of Deep Generative Models
- Authors: Mike Yan Michelis and Quentin Becker
- Abstract summary: Smoothness and plausibility of linear interpolations in latent space are associated with the quality of the underlying generative model.
We show that not all such curves are comparable as they can deviate arbitrarily from the shortest curve given by the geodesic.
This deviation is revealed by computing curve lengths with the pull-back metric of the generative model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The underlying geometrical structure of the latent space in deep generative
models is in most cases not Euclidean, which may lead to biases when comparing
interpolation capabilities of two models. Smoothness and plausibility of linear
interpolations in latent space are associated with the quality of the
underlying generative model. In this paper, we show that not all such
interpolations are comparable as they can deviate arbitrarily from the shortest
interpolation curve given by the geodesic. This deviation is revealed by
computing curve lengths with the pull-back metric of the generative model,
finding shorter curves than the straight line between endpoints, and measuring
a non-zero relative length improvement on this straight line. This leads to a
strategy to compare linear interpolations across two generative models. We also
show the effect and importance of choosing an appropriate output space for
computing shorter curves. For this computation we derive an extension of the
pull-back metric.
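To make the comparison concrete, below is a minimal JAX sketch of the length computation described in the abstract. It is an illustration under assumptions, not the authors' implementation: the toy generator `g`, the latent endpoints, and the discretisation are hypothetical stand-ins for a trained decoder.

```python
# Minimal sketch (illustrative assumptions, not the authors' code): length of a
# latent curve under the pull-back metric M(z) = J_g(z)^T J_g(z) of a generator g,
# evaluated here for the straight line between two latent endpoints.
import jax
import jax.numpy as jnp

def g(z):
    # Hypothetical smooth toy generator R^2 -> R^3, standing in for a decoder.
    return jnp.array([z[0], z[1], jnp.sin(3.0 * z[0]) * z[1] ** 2])

def pullback_length(points):
    """Approximate pull-back length of a curve given as discrete latent points."""
    def segment(z_mid, dz):
        J = jax.jacfwd(g)(z_mid)       # Jacobian of the generator at the segment midpoint
        M = J.T @ J                    # pull-back metric M(z) = J^T J
        return jnp.sqrt(dz @ M @ dz)   # sqrt(dz^T M(z) dz) for this segment
    deltas = points[1:] - points[:-1]
    mids = 0.5 * (points[1:] + points[:-1])
    return jnp.sum(jax.vmap(segment)(mids, deltas))

z0, z1 = jnp.array([0.0, 0.0]), jnp.array([1.0, 1.0])
t = jnp.linspace(0.0, 1.0, 64)[:, None]
straight_line = (1.0 - t) * z0 + t * z1   # linear interpolation in latent space

L_line = pullback_length(straight_line)
print("pull-back length of the straight line:", float(L_line))
# For any shorter curve gamma with the same endpoints, the comparison quantity
# used in the paper is the relative length improvement (L_line - L(gamma)) / L_line.
```

Because this metric is the pull-back of the Euclidean metric in the chosen output space, the same length can equivalently be approximated by measuring the mapped curve g(gamma) directly in output space, which is what the curve-search sketch after the related-papers list below does.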
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Manifold-augmented Eikonal Equations: Geodesic Distances and Flows on
Differentiable Manifolds [5.0401589279256065]
We show how the geometry of a manifold impacts the distance field, and exploit the geodesic flow to obtain globally length-minimising curves directly.
This work opens opportunities for statistics and reduced-order modelling on differentiable manifolds.
arXiv Detail & Related papers (2023-10-09T21:11:13Z) - Curve Your Attention: Mixed-Curvature Transformers for Graph
Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run with time and memory cost linear in the number of nodes and edges.
arXiv Detail & Related papers (2023-09-08T02:44:37Z) - Short and Straight: Geodesics on Differentiable Manifolds [6.85316573653194]
In this work, we first analyse existing methods for computing length-minimising geodesics.
Second, we propose a model-based parameterisation for distance fields and geodesic flows on continuous manifolds.
Third, we develop a curvature-based training mechanism, sampling and scaling points in regions of the manifold exhibiting larger values of the Ricci scalar.
arXiv Detail & Related papers (2023-05-24T15:09:41Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Linear Interpolation In Parameter Space is Good Enough for Fine-Tuned
Language Models [0.21485350418225244]
We explore linear connectivity between parameters of pre-trained models after fine-tuning.
Surprisingly, we could perform linear interpolation without a performance drop at intermediate points for fine-tuned models.
For controllable text generation, such interpolation could be seen as moving a model towards or against the desired text.
arXiv Detail & Related papers (2022-11-22T08:49:22Z) - Geodesic Models with Convexity Shape Prior [8.932981695464761]
In this paper, we consider a more complicated problem: finding curvature-penalized geodesic paths with a convexity shape prior.
We establish new geodesic models relying on the strategy of orientation-lifting.
The convexity shape prior serves as a constraint for the construction of local geodesic metrics encoding a curvature constraint.
arXiv Detail & Related papers (2021-11-01T09:41:54Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - A Differential Geometry Perspective on Orthogonal Recurrent Models [56.09491978954866]
We employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.
We show that orthogonal RNNs may be viewed as optimizing in the space of divergence-free vector fields.
Motivated by this observation, we study a new recurrent model, which spans the entire space of vector fields.
arXiv Detail & Related papers (2021-02-18T19:39:22Z) - Feature-Based Interpolation and Geodesics in the Latent Spaces of
Generative Models [10.212371817325065]
Interpolating between points is a problem connected simultaneously with finding geodesics and the study of generative models.
We provide examples which simultaneously allow us to search for geodesics and interpolating curves in latent space in the case of arbitrary density (a minimal curve-search sketch in the same spirit follows after this list).
arXiv Detail & Related papers (2019-04-06T13:47:48Z)
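Complementing the length computation sketched after the abstract, the following self-contained sketch (again an illustrative assumption, not code from any of the papers listed) searches for a latent curve shorter than the straight line: free intermediate latent points are moved by gradient descent on the discrete energy of the mapped curve, and the relative length improvement over the straight line is reported.

```python
# Self-contained sketch (illustrative assumptions): shortening a latent curve under
# the pull-back metric by gradient descent on free intermediate latent points.
# For a fine discretisation, the output-space length of g(curve) approximates the
# pull-back length, so the optimisation is carried out directly in output space.
import jax
import jax.numpy as jnp

def g(z):
    # Same hypothetical toy generator as in the earlier sketch.
    return jnp.array([z[0], z[1], jnp.sin(3.0 * z[0]) * z[1] ** 2])

def mapped_curve(free_pts, z0, z1):
    pts = jnp.vstack([z0[None], free_pts, z1[None]])   # endpoints stay fixed
    return jax.vmap(g)(pts)                            # map the whole curve to output space

def length(free_pts, z0, z1):
    out = mapped_curve(free_pts, z0, z1)
    return jnp.sum(jnp.linalg.norm(out[1:] - out[:-1], axis=1))

def energy(free_pts, z0, z1):
    # Minimising the discrete energy (sum of squared segment norms) shortens the
    # curve while keeping the discretisation roughly uniform.
    out = mapped_curve(free_pts, z0, z1)
    return jnp.sum(jnp.square(out[1:] - out[:-1]))

z0, z1 = jnp.array([0.0, 0.0]), jnp.array([1.0, 1.0])
t = jnp.linspace(0.0, 1.0, 64)[1:-1, None]
free = (1.0 - t) * z0 + t * z1                         # initialise on the straight line
L_line = length(free, z0, z1)

grad_energy = jax.jit(jax.grad(energy))                # gradient w.r.t. the free points
for _ in range(2000):                                  # plain gradient descent
    free = free - 1e-2 * grad_energy(free, z0, z1)

L_opt = length(free, z0, z1)
print("relative length improvement:", float((L_line - L_opt) / L_line))
```

A non-zero relative length improvement indicates that the straight latent line is not a geodesic of the pull-back metric, which is the deviation the paper uses to compare linear interpolations across generative models.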
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.