Representing Deep Neural Networks Latent Space Geometries with Graphs
- URL: http://arxiv.org/abs/2011.07343v1
- Date: Sat, 14 Nov 2020 17:21:29 GMT
- Title: Representing Deep Neural Networks Latent Space Geometries with Graphs
- Authors: Carlos Lassance, Vincent Gripon, Antonio Ortega
- Abstract summary: Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks.
In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems.
- Score: 38.63434325489782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) has attracted a lot of attention for its ability to reach
state-of-the-art performance in many machine learning tasks. The core principle
of DL methods consists in training composite architectures in an end-to-end
fashion, where inputs are associated with outputs trained to optimize an
objective function. Because of their compositional nature, DL architectures
naturally exhibit several intermediate representations of the inputs, which
belong to so-called latent spaces. When treated individually, these
intermediate representations are most of the time unconstrained during the
learning process, as it is unclear which properties should be favored. However,
when processing a batch of inputs concurrently, the corresponding set of
intermediate representations exhibits relations (what we call a geometry) on
which desired properties can be sought. In this work, we show that it is
possible to introduce constraints on these latent geometries to address various
problems. In more detail, we propose to represent geometries by constructing
similarity graphs from the intermediate representations obtained when
processing a batch of inputs. By constraining these Latent Geometry Graphs
(LGGs), we address the following three problems: i) Reproducing the behavior of
a teacher architecture is achieved by mimicking its geometry, ii) Designing
efficient embeddings for classification is achieved by targeting specific
geometries, and iii) Robustness to deviations in the inputs is achieved by
enforcing smooth variation of geometry between consecutive latent spaces. Using
standard vision benchmarks, we demonstrate the ability of the proposed
geometry-based methods to solve the considered problems.
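The abstract describes LGGs only at a high level. As a rough illustration, the sketch below builds cosine-similarity graphs over a batch of intermediate activations and derives two of the constraints mentioned above: geometry mimicking for distillation (problem i) and smoothness between consecutive latent spaces (problem iii). The cosine-similarity kernel, the squared-error graph matching, and all function names are assumptions made for illustration, not the paper's exact formulation.

```python
# Minimal sketch of Latent Geometry Graphs (LGGs), assuming cosine-similarity
# adjacency matrices and squared-error graph matching; the paper's exact
# similarity kernel and normalization may differ.
import torch
import torch.nn.functional as F

def latent_geometry_graph(features: torch.Tensor) -> torch.Tensor:
    """Similarity graph over one batch of intermediate representations.

    features: (batch_size, ...) activations from a single layer.
    Returns a (batch_size, batch_size) matrix of pairwise cosine similarities.
    """
    z = F.normalize(features.flatten(start_dim=1), dim=1)  # unit-norm rows
    return z @ z.t()                                        # pairwise cosine similarity

def geometry_mimicking_loss(student_feats, teacher_feats):
    """Problem i): make the student's latent geometry mimic the teacher's."""
    return F.mse_loss(latent_geometry_graph(student_feats),
                      latent_geometry_graph(teacher_feats))

def geometry_smoothness_loss(feats_layer_k, feats_layer_k_plus_1):
    """Problem iii): penalize abrupt geometry changes between consecutive layers."""
    return F.mse_loss(latent_geometry_graph(feats_layer_k),
                      latent_geometry_graph(feats_layer_k_plus_1))

if __name__ == "__main__":
    # Toy usage on random activations standing in for real network layers.
    student = torch.randn(32, 128)    # hypothetical student layer, batch of 32
    teacher = torch.randn(32, 256)    # hypothetical teacher layer, same batch
    next_layer = torch.randn(32, 64)  # hypothetical next student layer
    print(geometry_mimicking_loss(student, teacher).item())
    print(geometry_smoothness_loss(student, next_layer).item())
```

Because both losses compare batch-level adjacency matrices rather than raw activations, the compared layers may have different dimensionalities, which is one practical appeal of constraining geometries instead of individual representations.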
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Current Symmetry Group Equivariant Convolution Frameworks for Representation Learning [5.802794302956837]
Euclidean deep learning is often inadequate for addressing real-world signals where the representation space is irregular and curved with complex topologies.
We focus on the importance of symmetry group equivariant deep learning models and their realization of convolution-like operations on graphs, 3D shapes, and non-Euclidean spaces.
arXiv Detail & Related papers (2024-09-11T15:07:18Z)
- Str-L Pose: Integrating Point and Structured Line for Relative Pose Estimation in Dual-Graph [45.115555973941255]
Relative pose estimation is crucial for various computer vision applications, including Robotics and Autonomous Driving.
We propose a Geometric Correspondence Graph neural network that integrates point features with extra structured line segments.
This integration of matched points and line segments further exploits the geometry constraints and enhances model performance across different environments.
arXiv Detail & Related papers (2024-08-28T12:33:26Z)
- InfoNorm: Mutual Information Shaping of Normals for Sparse-View Reconstruction [15.900375207144759]
3D surface reconstruction from multi-view images is essential for scene understanding and interaction.
Recent implicit surface representations, such as Neural Radiance Fields (NeRFs) and signed distance functions (SDFs), employ various geometric priors to resolve the lack of observed information.
We propose regularizing the geometric modeling by explicitly encouraging the mutual information among surface normals of highly correlated scene points.
arXiv Detail & Related papers (2024-07-17T15:46:25Z)
- Grounding Continuous Representations in Geometry: Equivariant Neural Fields [26.567143650213225]
We propose a novel CNF architecture which uses a geometry-informed cross-attention to condition the NeF on a geometric variable.
We show that this approach induces a steerability property by which both field and latent are grounded in geometry.
We validate these main properties in a range of tasks including classification, segmentation, forecasting and reconstruction.
arXiv Detail & Related papers (2024-06-09T12:16:30Z)
- Human as Points: Explicit Point-based 3D Human Reconstruction from Single-view RGB Images [78.56114271538061]
We introduce an explicit point-based human reconstruction framework called HaP.
Our approach features fully explicit point cloud estimation, manipulation, generation, and refinement in 3D geometric space.
Our results may indicate a paradigm rollback to the fully-explicit and geometry-centric algorithm design.
arXiv Detail & Related papers (2023-11-06T05:52:29Z)
- Self-Supervised Image Representation Learning with Geometric Set Consistency [50.12720780102395]
We propose a method for self-supervised image representation learning under the guidance of 3D geometric consistency.
Specifically, we introduce 3D geometric consistency into a contrastive learning framework to enforce the feature consistency within image views.
arXiv Detail & Related papers (2022-03-29T08:57:33Z)
- Hermitian Symmetric Spaces for Graph Embeddings [0.0]
We learn continuous representations of graphs in spaces of symmetric matrices over the complex numbers.
These spaces offer a rich geometry that simultaneously admits hyperbolic and Euclidean subspaces.
The proposed models are able to automatically adapt to very dissimilar arrangements without any a priori estimates of graph features.
arXiv Detail & Related papers (2021-05-11T18:14:52Z)
- Self-supervised Geometric Perception [96.89966337518854]
Self-supervised geometric perception is a framework to learn a feature descriptor for correspondence matching without any ground-truth geometric model labels.
We show that SGP achieves state-of-the-art performance that is on-par or superior to the supervised oracles trained using ground-truth labels.
arXiv Detail & Related papers (2021-03-04T15:34:43Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)