Geodesics in fibered latent spaces: A geometric approach to learning
correspondences between conditions
- URL: http://arxiv.org/abs/2005.07852v3
- Date: Sun, 27 Dec 2020 11:46:48 GMT
- Title: Geodesics in fibered latent spaces: A geometric approach to learning
correspondences between conditions
- Authors: Tariq Daouda, Reda Chhaibi, Prudencio Tossou, Alexandra-Chloé Villani
- Abstract summary: This work introduces a geometric framework and a novel network architecture for creating correspondences between samples of different conditions.
Under this formalism, the latent space is a fiber bundle stratified into a base space encoding conditions, and a fiber space encoding the variations within conditions.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces a geometric framework and a novel network architecture
for creating correspondences between samples of different conditions. Under
this formalism, the latent space is a fiber bundle stratified into a base space
encoding conditions, and a fiber space encoding the variations within
conditions. Furthermore, this latent space is endowed with a natural pull-back
metric. The correspondences between conditions are obtained by minimizing an
energy functional, resulting in diffeomorphism flows between fibers.
We illustrate this approach using MNIST and Olivetti and benchmark its
performance on the task of batch correction, which is the problem of
integrating multiple biological datasets.
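To make the geometric ingredients concrete, here is a minimal, hypothetical PyTorch sketch of a pull-back metric computed from a decoder Jacobian and of a correspondence obtained by minimizing a discrete path energy between fibers. The base/fiber split (leading versus trailing latent coordinates) and all names (`decode`, `correspond`) are illustrative assumptions, not the authors' implementation.

```python
import torch

def pullback_metric(decode, z, create_graph=False):
    # Pull-back metric G(z) = J(z)^T J(z), with J the Jacobian of the decoder at z.
    J = torch.autograd.functional.jacobian(decode, z, create_graph=create_graph)
    J = J.reshape(-1, z.numel())
    return J.T @ J

def path_energy(decode, path):
    # Discrete path energy E = sum_t v_t^T G(z_t) v_t, with v_t = z_{t+1} - z_t.
    E = 0.0
    for t in range(path.shape[0] - 1):
        v = path[t + 1] - path[t]
        E = E + v @ pullback_metric(decode, path[t], create_graph=True) @ v
    return E

def correspond(decode, z_src, b_tgt, steps=8, iters=300, lr=1e-2):
    """Map z_src onto the fiber over condition b_tgt by minimizing the path
    energy; interior path points and the target fiber coordinate are free."""
    d_base = b_tgt.numel()
    f_tgt = z_src[d_base:].detach().clone().requires_grad_(True)
    z_end = torch.cat([b_tgt, z_src[d_base:]])
    interior = torch.stack(
        [z_src + (t / steps) * (z_end - z_src) for t in range(1, steps)]
    ).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([f_tgt, interior], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        path = torch.cat([z_src[None], interior, torch.cat([b_tgt, f_tgt])[None]])
        path_energy(decode, path).backward()
        opt.step()
    return torch.cat([b_tgt, f_tgt.detach()])
```

A real implementation would batch this and may amortize the minimization with a network; the sketch only illustrates the objective.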
Related papers
- GMapLatent: Geometric Mapping in Latent Space [51.317738404571514]
Cross-domain generative models based on encoder-decoder AI architectures have attracted much attention in generating realistic images.
We introduce a canonical latent space representation based on geometric mapping to align the cross-domain latent spaces in a rigorous and precise manner.
Experiments on gray-scale and color images validate the efficiency, efficacy and applicability of GMapLatent.
arXiv Detail & Related papers (2025-03-30T12:02:36Z) - Technical Report: Aggregation on Learnable Manifolds for Asynchronous Federated Optimization [0.0]
In Federated Learning (FL), a primary challenge to the server-side aggregation of client models is device heterogeneity in both loss landscape geometry and computational capacity.
We propose AsyncManifold, a novel asynchronous FL framework to address these issues by taking advantage of underlying solution space geometry at each of the local training, delay-correction, and aggregation stages.
Our proposal is accompanied by a convergence proof in a general form and, motivated through exploratory studies of local behaviour, a proof-of-concept which performs aggregation along non-linear mode connections.
arXiv Detail & Related papers (2025-03-18T16:36:59Z) - ARC-Flow : Articulated, Resolution-Agnostic, Correspondence-Free Matching and Interpolation of 3D Shapes Under Flow Fields [4.706075725469252]
This work presents a unified framework for the unsupervised prediction of physically plausible interpolations between two 3D articulated shapes.
Interpolation is modelled as a diffeomorphic transformation using a smooth, time-varying flow field governed by Neural Ordinary Differential Equations (ODEs).
Correspondence is recovered using an efficient Varifold formulation that is effective on high-fidelity surfaces with differing parameterisations.
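As a rough illustration of the flow-field idea (a hedged sketch, not ARC-Flow itself), points can be advected by a small time-conditioned MLP velocity field integrated with explicit Euler steps; a Neural ODE solver would replace the loop in practice.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Time-varying velocity field v(x, t) parameterized by a small MLP."""
    def __init__(self, dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        # x: (N, dim) points of a shape; t: scalar time in [0, 1]
        t_col = torch.full_like(x[:, :1], float(t))
        return self.net(torch.cat([x, t_col], dim=1))

def flow(field, x0, n_steps=32):
    # Explicit Euler integration of dx/dt = v(x, t); small steps keep the
    # resulting map close to a diffeomorphism.
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * field(x, k * dt)
    return x
```

Training would fit `field` so that `flow(field, source_points)` approaches the target shape under a correspondence-free loss such as a varifold or Chamfer distance.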
arXiv Detail & Related papers (2025-03-04T13:28:05Z) - Modeling All Response Surfaces in One for Conditional Search Spaces [69.90317997694218]
This paper proposes a novel approach to model the response surfaces of all subspaces in one.
We introduce an attention-based deep feature extractor, capable of projecting configurations with different structures from various subspaces into a unified feature space.
arXiv Detail & Related papers (2025-01-08T03:56:06Z) - Approximate Fiber Product: A Preliminary Algebraic-Geometric Perspective on Multimodal Embedding Alignment [1.3824176915623292]
Multimodal tasks, such as image-text retrieval and generation, require embedding data from diverse modalities into a shared representation space.
This paper provides an initial attempt to integrate algebra into multimodal representation learning.
arXiv Detail & Related papers (2024-11-30T06:45:13Z) - Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints [9.380902608139902]
We show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets.
We outline the relevant theory relying on differential geometry, and describe the corresponding gradient-descent optimization algorithm.
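As a loose illustration (not necessarily this paper's algorithm), the link between latent-layer geometry and intrinsic dimension can be seen through the rank of the decoder Jacobian: latent directions whose singular values vanish do not move the output, and counting the remaining directions gives a local dimension estimate.

```python
import torch

def local_intrinsic_dim(decode, z, rel_tol=1e-3):
    # Count decoder-Jacobian singular values above a relative threshold.
    J = torch.autograd.functional.jacobian(decode, z)
    J = J.reshape(-1, z.numel())
    s = torch.linalg.svdvals(J)
    return int((s > rel_tol * s.max()).sum())
```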
arXiv Detail & Related papers (2024-08-28T20:56:35Z) - Grounding Continuous Representations in Geometry: Equivariant Neural Fields [26.567143650213225]
We propose a novel CNF architecture which uses a geometry-informed cross-attention to condition the NeF on a geometric variable.
We show that this approach induces a steerability property by which both field and latent are grounded in geometry.
We validate these main properties in a range of tasks including classification, segmentation, forecasting and reconstruction.
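A hedged sketch of the conditioning mechanism (omitting most of the paper's equivariance machinery): each query coordinate attends to a set of (pose, context) latent tokens through its position relative to each pose, so translating coordinates and poses together leaves the field unchanged.

```python
import torch
import torch.nn as nn

class GeometryConditionedField(nn.Module):
    """Illustrative field conditioned on (pose p_i, context c_i) latent tokens."""
    def __init__(self, coord_dim=2, ctx_dim=64, out_dim=1, width=64):
        super().__init__()
        self.q = nn.Linear(coord_dim, width)   # embeds relative positions
        self.k = nn.Linear(ctx_dim, width)
        self.v = nn.Linear(ctx_dim, width)
        self.head = nn.Linear(width, out_dim)

    def forward(self, coords, poses, ctx):
        # coords: (N, coord_dim); poses: (M, coord_dim); ctx: (M, ctx_dim)
        rel = coords[:, None, :] - poses[None, :, :]           # (N, M, coord_dim)
        q = self.q(rel)                                        # (N, M, width)
        k, v = self.k(ctx), self.v(ctx)                        # (M, width)
        logits = (q * k[None]).sum(-1) / k.shape[-1] ** 0.5    # (N, M)
        w = logits.softmax(dim=-1)
        return self.head((w[..., None] * v[None]).sum(1))      # (N, out_dim)
```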
arXiv Detail & Related papers (2024-06-09T12:16:30Z) - IME: Integrating Multi-curvature Shared and Specific Embedding for Temporal Knowledge Graph Completion [97.58125811599383]
Temporal Knowledge Graphs (TKGs) incorporate a temporal dimension, allowing for a precise capture of the evolution of knowledge.
We propose a novel Multi-curvature shared and specific Embedding (IME) model for TKGC tasks.
IME incorporates two key properties, namely space-shared property and space-specific property.
arXiv Detail & Related papers (2024-03-28T23:31:25Z) - Neural Latent Geometry Search: Product Manifold Inference via
Gromov-Hausdorff-Informed Bayesian Optimization [21.97865037637575]
We mathematically define this novel formulation and coin it as neural latent geometry search (NLGS).
We propose a novel notion of distance between candidate latent geometries based on the Gromov-Hausdorff distance from metric geometry.
We then design a graph search space based on the notion of smoothness between latent geometries and employ the calculated distances as an additional inductive bias.
arXiv Detail & Related papers (2023-09-09T14:29:22Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Few Shot Generative Model Adaption via Relaxed Spatial Structural
Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot generative model adaption.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z) - Deep Networks on Toroids: Removing Symmetries Reveals the Structure of
Flat Regions in the Landscape Geometry [3.712728573432119]
We develop a standardized parameterization in which all symmetries are removed, resulting in a toroidal topology.
We derive a meaningful notion of the flatness of minimizers and of the geodesic paths connecting them.
We also find that minimizers found by variants of gradient descent can be connected by zero-error paths with a single bend.
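For intuition, checking such a connection can be as simple as evaluating the training loss along a piecewise-linear path with one (possibly trained) bend point between two minimizers; a hedged sketch, with `loss_fn` assumed to take a flat parameter vector:

```python
import torch

def path_point(theta_a, theta_bend, theta_b, t):
    # Piecewise-linear path: a -> bend on t in [0, 0.5], bend -> b on [0.5, 1].
    if t <= 0.5:
        return theta_a + 2.0 * t * (theta_bend - theta_a)
    return theta_bend + 2.0 * (t - 0.5) * (theta_b - theta_bend)

def path_barrier(loss_fn, theta_a, theta_bend, theta_b, n=21):
    # Maximum loss along the path; a value near the endpoint losses indicates
    # a (near) zero-barrier connection between the two minimizers.
    ts = torch.linspace(0.0, 1.0, n)
    return max(float(loss_fn(path_point(theta_a, theta_bend, theta_b, float(t))))
               for t in ts)
```

The bend point itself can be optimized by minimizing the expected loss over random t, as in standard mode-connectivity procedures.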
arXiv Detail & Related papers (2022-02-07T09:57:54Z) - A Unifying and Canonical Description of Measure-Preserving Diffusions [60.59592461429012]
A complete recipe of measure-preserving diffusions in Euclidean space was recently derived, unifying several MCMC algorithms into a single framework.
We develop a geometric theory that improves and generalises this construction to any manifold.
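For reference, the Euclidean "complete recipe" is commonly stated as the following SDE family (a hedged reconstruction, not quoted from this paper), which leaves a density proportional to e^{-H} invariant for any positive semi-definite D(z) and skew-symmetric Q(z):

```latex
dz_t = -\bigl(D(z_t) + Q(z_t)\bigr)\nabla H(z_t)\,dt + \Gamma(z_t)\,dt + \sqrt{2 D(z_t)}\,dW_t,
\qquad \Gamma_i(z) = \sum_j \partial_{z_j}\bigl(D_{ij}(z) + Q_{ij}(z)\bigr)
```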
arXiv Detail & Related papers (2021-05-06T17:36:55Z) - Foundations of Population-Based SHM, Part IV: The Geometry of Spaces of
Structures and their Feature Spaces [0.0]
This paper will discuss the various geometrical structures required for an abstract theory of feature spaces in Structural Health Monitoring.
In the second part of the paper, the problem of determining the normal condition cross section of a feature bundle is addressed.
The solution is provided by the application of Graph Neural Networks (GNN), a versatile non-Euclidean machine learning algorithm.
arXiv Detail & Related papers (2021-03-05T13:28:51Z) - Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.