Metric Convolutions: A Unifying Theory to Adaptive Convolutions
- URL: http://arxiv.org/abs/2406.05400v1
- Date: Sat, 8 Jun 2024 08:41:12 GMT
- Title: Metric Convolutions: A Unifying Theory to Adaptive Convolutions
- Authors: Thomas Dagès, Michael Lindenbaum, Alfred M. Bruckstein
- Abstract summary: Metric convolutions can directly replace standard convolutions in image processing and deep learning.
They typically require fewer parameters and provide better generalisation.
Our approach shows competitive performance in standard denoising and classification tasks.
- Score: 3.481985817302898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard convolutions are prevalent in image processing and deep learning, but their fixed kernel design limits adaptability. Several deformation strategies of the reference kernel grid have been proposed. Yet, they lack a unified theoretical framework. By returning to a metric perspective for images, now seen as two-dimensional manifolds equipped with notions of local and geodesic distances, either symmetric (Riemannian metrics) or not (Finsler metrics), we provide a unifying principle: the kernel positions are samples of unit balls of implicit metrics. With this new perspective, we also propose metric convolutions, a novel approach that samples unit balls from explicit signal-dependent metrics, providing interpretable operators with geometric regularisation. This framework, compatible with gradient-based optimisation, can directly replace existing convolutions applied to either input images or deep features of neural networks. Metric convolutions typically require fewer parameters and provide better generalisation. Our approach shows competitive performance in standard denoising and classification tasks.
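To make the kernel-as-unit-ball idea concrete, here is a minimal PyTorch sketch of a signal-dependent metric convolution. All names (`metric_unit_ball_offsets`, `metric_conv`), the SPD-entry encoding of the metric, and the use of a single scalar weight per sample point are illustrative assumptions, not the authors' implementation: a 2x2 SPD metric tensor is given per pixel, kernel positions are obtained by rescaling unit directions to metric length one (sampling only the ball's boundary for brevity), and features are gathered by bilinear interpolation.

```python
import torch
import torch.nn.functional as F

def metric_unit_ball_offsets(metric, num_samples=9, radius=1.0):
    """Offsets on the unit ball of a per-pixel Riemannian metric.

    metric: (B, 3, H, W) tensor holding the SPD entries (a, b, c) of
            M = [[a, b], [b, c]] at every pixel (an illustrative encoding).
    Returns offsets of shape (B, num_samples, 2, H, W) in pixel units.
    """
    a, b, c = metric[:, 0], metric[:, 1], metric[:, 2]
    theta = torch.linspace(0.0, 2 * torch.pi, num_samples + 1)[:-1]
    dx = torch.cos(theta).view(1, -1, 1, 1)          # unit directions
    dy = torch.sin(theta).view(1, -1, 1, 1)
    # Metric length sqrt(v^T M v) of each direction; rescaling by its
    # inverse puts the sample on the metric unit ball (||v||_M = radius).
    vMv = (a.unsqueeze(1) * dx ** 2 + 2 * b.unsqueeze(1) * dx * dy
           + c.unsqueeze(1) * dy ** 2).clamp(min=1e-6)
    scale = radius / vMv.sqrt()
    return torch.stack((dx * scale, dy * scale), dim=2)

def metric_conv(feat, metric, weight):
    """Convolution that gathers features at metric-ball sample points."""
    B, C, H, W = feat.shape
    K = weight.shape[0]
    offsets = metric_unit_ball_offsets(metric, K)     # (B, K, 2, H, W)
    ys, xs = torch.meshgrid(torch.arange(H, dtype=feat.dtype),
                            torch.arange(W, dtype=feat.dtype), indexing="ij")
    out = torch.zeros_like(feat)
    for k in range(K):
        # Absolute sample positions, normalised to [-1, 1] for grid_sample.
        gx = (xs + offsets[:, k, 0]) / max(W - 1, 1) * 2 - 1
        gy = (ys + offsets[:, k, 1]) / max(H - 1, 1) * 2 - 1
        grid = torch.stack((gx, gy), dim=-1)          # (B, H, W, 2)
        out = out + weight[k] * F.grid_sample(feat, grid, align_corners=True)
    return out

# With the identity metric this reduces to sampling the Euclidean unit circle.
feat = torch.randn(1, 8, 32, 32)
metric = torch.tensor([1.0, 0.0, 1.0]).view(1, 3, 1, 1).expand(1, 3, 32, 32)
out = metric_conv(feat, metric, torch.randn(9))
```

A real layer would use per-channel kernel weights and would also sample the interior of the ball; the sketch keeps only the geometric core of the idea.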
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
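The summary does not spell out the calibration itself; purely as a hypothetical illustration of the idea, one can permute nodes into a canonical order (here by degree, standing in for CoCN's learned permutations) and then apply an ordinary Euclidean 1-D convolution:

```python
import torch

def calibrate_then_convolve(x, adj, conv1d):
    """Hypothetical sketch: reorder nodes by a canonical permutation so a
    Euclidean convolution applies. CoCN learns its permutations; a fixed
    degree ordering stands in for them here.

    x:      (N, C) node features
    adj:    (N, N) adjacency matrix
    conv1d: torch.nn.Conv1d expecting input of shape (B, C, N)
    """
    perm = torch.argsort(adj.sum(dim=1), descending=True)  # calibration
    x_cal = x[perm]                                        # canonical order
    return conv1d(x_cal.t().unsqueeze(0))                  # (1, C_out, N')

conv1d = torch.nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
x, adj = torch.randn(10, 16), (torch.rand(10, 10) > 0.5).float()
y = calibrate_then_convolve(x, adj, conv1d)                # (1, 32, 8)
```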
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - A Canonicalization Perspective on Invariant and Equivariant Learning [54.44572887716977]
We introduce a canonicalization perspective that provides an essential and complete view of the design of frames.
We show that there exists an inherent connection between frames and canonical forms.
We design novel frames for eigenvectors that are strictly superior to existing methods.
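A textbook instance of a canonical form, assuming nothing about the paper's construction: eigenvectors are only defined up to sign, and fixing the sign picks one representative per {v, -v} orbit, which is exactly what a frame over the sign-flip group achieves.

```python
import numpy as np

def canonicalize_sign(v, tol=1e-8):
    """Canonical form for the sign-flip group acting on eigenvectors:
    flip v so its first entry of magnitude above tol is positive, which
    maps v and -v to the same representative."""
    for vi in v:
        if abs(vi) > tol:
            return v if vi > 0 else -v
    return v  # numerically zero vector: already canonical

v = np.array([-0.3, 0.9, -0.1])
assert np.allclose(canonicalize_sign(v), canonicalize_sign(-v))
```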
arXiv Detail & Related papers (2024-05-28T17:22:15Z) - Soft Matching Distance: A metric on neural representations that captures
single-neuron tuning [6.5714523708869566]
Common measures of neural representational (dis)similarity are designed to be insensitive to rotations and reflections of the neural activation space.
We propose a new metric to measure distances between networks with different sizes.
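For networks of equal width, matching-based distances reduce to an optimal one-to-one assignment of neurons; the paper's soft matching generalises this to unequal widths via optimal transport. A minimal equal-width sketch (illustrative, not the paper's code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_distance(A, B):
    """Distance between two networks' representations under the best
    one-to-one neuron matching. A, B: (neurons, stimuli) activation
    matrices of equal width."""
    # Pairwise squared distances between neuron tuning curves.
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)       # optimal permutation
    return np.sqrt(cost[rows, cols].mean())

A = np.random.randn(16, 100)
B = A[np.random.permutation(16)]                   # same neurons, reordered
assert matching_distance(A, B) < 1e-6              # matching undoes the shuffle
```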
arXiv Detail & Related papers (2023-11-16T00:13:00Z) - Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics [3.8811062755861956]
We propose two non-diagonal metrics that can be used in stochastic-gradient samplers to improve convergence and exploration.
We show that for fully connected neural networks (NNs) with sparsity-inducing priors and convolutional NNs with correlated priors, using these metrics can provide improvements.
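The role of the metric is to precondition the Langevin proposal. One preconditioned SGLD step with a fixed non-diagonal metric G looks roughly like the following sketch (the full Riemannian sampler adds a curvature-correction term, which vanishes when G does not depend on position):

```python
import numpy as np

def preconditioned_sgld_step(theta, grad_logp, G_inv, eps, rng):
    """theta_{t+1} = theta_t + (eps/2) G^{-1} grad log p(theta_t) + xi,
    with xi ~ N(0, eps * G^{-1}). G_inv is the (D, D) SPD inverse metric;
    a non-diagonal choice couples the parameter coordinates."""
    noise = rng.multivariate_normal(np.zeros(theta.shape[0]), eps * G_inv)
    return theta + 0.5 * eps * G_inv @ grad_logp + noise

rng = np.random.default_rng(0)
G_inv = np.array([[1.0, 0.3], [0.3, 1.0]])         # non-diagonal metric
theta = np.zeros(2)
# Standard-normal target, so grad log p(theta) = -theta.
theta = preconditioned_sgld_step(theta, -theta, G_inv, 1e-2, rng)
```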
arXiv Detail & Related papers (2023-03-09T08:20:28Z) - Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and the mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
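Concretely, FA symmetrises a backbone phi over a (possibly input-dependent) frame F(x) of group elements; the invariant version averages phi(g^{-1} x) over g in F(x). A minimal sketch for sign-flip invariance, where the frame is taken to be the whole two-element group (FA's point is that a small input-dependent frame suffices even when the group is large):

```python
import torch

def frame_average(backbone, x, frame):
    """Frame Averaging (invariant version): average the backbone over the
    inverse group actions in the frame."""
    return torch.stack([backbone(g(x)) for g in frame]).mean(0)

# Exact invariance to global sign flips with a two-element frame.
backbone = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                               torch.nn.Linear(16, 1))
sign_frame = [lambda x: x, lambda x: -x]
x = torch.randn(4, 8)
out_pos = frame_average(backbone, x, sign_frame)
out_neg = frame_average(backbone, -x, sign_frame)
assert torch.allclose(out_pos, out_neg, atol=1e-6)
```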
arXiv Detail & Related papers (2021-10-07T11:05:23Z) - Learning to Discover Reflection Symmetry via Polar Matching Convolution [33.77926792753373]
We introduce a new convolutional technique, dubbed the polar matching convolution, which leverages polar feature pooling, self-similarity encoding, and a kernel design for axes at different angles.
The proposed high-dimensional kernel convolution network effectively learns to discover symmetry patterns from real-world images.
Experiments demonstrate that our method outperforms state-of-the-art methods in terms of accuracy and robustness.
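A generic polar feature pooling, assumed here for illustration (not the paper's exact operator): resampling features on rays around a candidate axis point turns reflections about that point into flips along the angular axis, which a matching kernel can then detect.

```python
import torch
import torch.nn.functional as F

def polar_pool(feat, center, num_angles=16, num_radii=8, max_r=0.9):
    """Resample a feature map on a polar grid around `center` (given in
    grid_sample's [-1, 1] coordinates). Returns (B, C, num_radii, num_angles),
    where reflections become flips along the angle dimension."""
    theta = torch.linspace(0.0, 2 * torch.pi, num_angles + 1)[:-1]
    r = torch.linspace(0.0, max_r, num_radii)
    R, T = torch.meshgrid(r, theta, indexing="ij")
    gx = center[0] + R * torch.cos(T)
    gy = center[1] + R * torch.sin(T)
    grid = torch.stack((gx, gy), dim=-1).unsqueeze(0)   # (1, nr, na, 2)
    grid = grid.expand(feat.shape[0], -1, -1, -1)
    return F.grid_sample(feat, grid, align_corners=True)

feat = torch.randn(1, 4, 64, 64)
polar = polar_pool(feat, center=torch.tensor([0.0, 0.0]))  # (1, 4, 8, 16)
```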
arXiv Detail & Related papers (2021-08-30T01:50:51Z) - Coordinate Independent Convolutional Networks -- Isometry and Gauge
Equivariant Convolutions on Riemannian Manifolds [70.32518963244466]
A major complication in comparison to flat spaces is that it is unclear in which alignment a convolution kernel should be applied on a manifold.
We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.
A simultaneous demand for coordinate independence and weight sharing is shown to result in a requirement on the network to be equivariant.
arXiv Detail & Related papers (2021-06-10T19:54:19Z) - Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric
graphs [81.12344211998635]
A common approach to defining convolutions on meshes is to interpret them as graphs and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
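The contrast can be sketched at a single vertex, under assumed names and a first-order angular expansion of the kernel (the gauge-equivariance constraints on the weights, which are the paper's contribution, are omitted):

```python
import torch

def isotropic_update(x_i, x_nbrs, W_self, W_nbr):
    """GCN-style step: every neighbour shares one weight matrix, so the
    filter cannot tell directions on the surface apart."""
    return x_i @ W_self + x_nbrs.mean(dim=0) @ W_nbr

def anisotropic_update(x_i, x_nbrs, angles, W_self, W0, W1, W2):
    """Direction-aware step: each neighbour's weight depends on its angle
    in the tangent plane, W(a) = W0 + cos(a) * W1 + sin(a) * W2."""
    agg = sum(xn @ (W0 + torch.cos(a) * W1 + torch.sin(a) * W2)
              for xn, a in zip(x_nbrs, angles)) / len(angles)
    return x_i @ W_self + agg

C, C_out, k = 8, 16, 5
x_i, x_nbrs = torch.randn(C), torch.randn(k, C)
angles = torch.rand(k) * 2 * torch.pi
Ws = [torch.randn(C, C_out) for _ in range(4)]
y = anisotropic_update(x_i, x_nbrs, angles, *Ws)    # (C_out,)
```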
arXiv Detail & Related papers (2020-03-11T17:21:15Z)