Equivariant neural networks for inverse problems
- URL: http://arxiv.org/abs/2102.11504v1
- Date: Tue, 23 Feb 2021 05:38:41 GMT
- Title: Equivariant neural networks for inverse problems
- Authors: Elena Celledoni, Matthias J. Ehrhardt, Christian Etmann, Brynjulf Owren, Carola-Bibiane Schönlieb and Ferdia Sherry
- Abstract summary: We show that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods.
We design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.
- Score: 1.7942265700058986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years the use of convolutional layers to encode an inductive bias
(translational equivariance) in neural networks has proven to be a very
fruitful idea. The successes of this approach have motivated a line of research
into incorporating other symmetries into deep learning methods, in the form of
group equivariant convolutional neural networks. Much of this work has been
focused on roto-translational symmetry of $\mathbf R^d$, but other examples are
the scaling symmetry of $\mathbf R^d$ and rotational symmetry of the sphere. In
this work, we demonstrate that group equivariant convolutional operations can
naturally be incorporated into learned reconstruction methods for inverse
problems that are motivated by the variational regularisation approach. Indeed,
if the regularisation functional is invariant under a group symmetry, the
corresponding proximal operator will satisfy an equivariance property with
respect to the same group symmetry. As a result of this observation, we design
learned iterative methods in which the proximal operators are modelled as group
equivariant convolutional neural networks. We use roto-translationally
equivariant operations in the proposed methodology and apply it to the problems
of low-dose computerised tomography reconstruction and subsampled magnetic
resonance imaging reconstruction. The proposed methodology is demonstrated to
improve the reconstruction quality of a learned reconstruction method, at a small extra computational cost at training time and no extra cost at test time.
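The key observation is worth spelling out. Assuming the group acts by unitary operators $T_g$ (true for rotations and translations), invariance of the regularisation functional $R$ transfers to equivariance of its proximal operator:

$$
\operatorname{prox}_R(T_g y)
= \arg\min_x \tfrac{1}{2}\|x - T_g y\|^2 + R(x)
= T_g \arg\min_z \tfrac{1}{2}\|T_g z - T_g y\|^2 + R(T_g z)
= T_g \arg\min_z \tfrac{1}{2}\|z - y\|^2 + R(z)
= T_g \operatorname{prox}_R(y),
$$

where the second equality substitutes $x = T_g z$ and the third uses unitarity of $T_g$ and invariance $R(T_g z) = R(z)$. This motivates constraining the proximal networks in a learned iterative scheme to be equivariant. Below is a minimal sketch of such an unrolled proximal gradient method; the plain `Conv2d` layers in `ProxNet` are a placeholder (translation equivariance only) for the paper's roto-translationally equivariant convolutions, and names like `forward_op`/`adjoint_op` are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Stand-in for a learned proximal operator. Plain Conv2d layers are
    only translation equivariant; the paper uses roto-translationally
    equivariant (group) convolutions here instead."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement of the current iterate

class UnrolledProxGrad(nn.Module):
    """Unrolled proximal gradient descent,
    x_{k+1} = prox_k(x_k - tau_k * A^T (A x_k - y)),
    with each prox_k modelled by a CNN."""
    def __init__(self, forward_op, adjoint_op, n_iter=10):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op  # e.g. Radon transform and its adjoint for CT
        self.prox = nn.ModuleList(ProxNet() for _ in range(n_iter))
        self.tau = nn.Parameter(torch.full((n_iter,), 0.1))  # learned step sizes

    def forward(self, y, x0):
        x = x0
        for k, prox in enumerate(self.prox):
            grad = self.At(self.A(x) - y)     # gradient of 0.5*||A x - y||^2
            x = prox(x - self.tau[k] * grad)  # learned proximal step
        return x
```

Since the equivariance constraint is imposed on the network architecture rather than added as a test-time procedure, this is consistent with the abstract's claim of extra cost at training time only.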
Related papers
- Approximate Equivariance in Reinforcement Learning [35.04248486334824]
Equivariant neural networks have shown great success in reinforcement learning.
In many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate.
We develop approximately equivariant algorithms in reinforcement learning.
arXiv Detail & Related papers (2024-11-06T19:44:46Z)
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
We also propose to deploy topological densification when fine-tuning relative representations: a topological regularization loss that encourages clustering within classes.
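For context, the underlying relative transformation represents each latent vector by its similarities to a fixed set of anchor samples; with cosine similarity this is already invariant to rotations and isotropic rescalings, and the normalization summarized above extends the invariance further. A minimal sketch (function and variable names are illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def relative_representation(x, anchors):
    """Encode each row of x by its cosine similarity to each anchor row.
    Cosine similarity is invariant to rotations and isotropic rescalings
    of the latent space; the paper adds a normalization step to also
    handle non-isotropic rescalings and permutations."""
    x = F.normalize(x, dim=-1)
    anchors = F.normalize(anchors, dim=-1)
    return x @ anchors.T  # shape: (n_samples, n_anchors)
```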
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- Variational Inference Failures Under Model Symmetries: Permutation Invariant Posteriors for Bayesian Neural Networks [43.88179780450706]
We investigate the impact of weight space permutation symmetries on variational inference.
We devise a symmetrization mechanism for constructing permutation invariant variational posteriors.
We show that the symmetrized distribution has a strictly better fit to the true posterior, and that it can be trained using the original ELBO objective.
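A standard construction consistent with this summary (the paper's exact mechanism may differ in detail) averages the variational posterior over the permutation group $G$:

$$q_{\mathrm{sym}}(w) = \frac{1}{|G|} \sum_{g \in G} q(g \cdot w),$$

which is permutation invariant, $q_{\mathrm{sym}}(g' \cdot w) = q_{\mathrm{sym}}(w)$ for all $g' \in G$, because $g \mapsto g g'$ merely reorders the sum.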
arXiv Detail & Related papers (2024-08-10T09:06:34Z)
- Stochastic Neural Network Symmetrisation in Markov Categories [2.0668277618112203]
We consider the problem of symmetrising a neural network along a group homomorphism.
We obtain a flexible, compositional, and generic framework for symmetrisation.
arXiv Detail & Related papers (2024-06-17T17:54:42Z)
- Symmetry Breaking and Equivariant Neural Networks [17.740760773905986]
We introduce a novel notion of 'relaxed equivariance'.
We show how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs).
The relevance of symmetry breaking is then discussed in various application domains.
arXiv Detail & Related papers (2023-12-14T15:06:48Z)
- Affine Invariance in Continuous-Domain Convolutional Neural Networks [6.019182604573028]
This research studies affine invariance in continuous-domain convolutional neural networks.
We introduce a new criterion to assess the similarity of two input signals under affine transformations.
Our research could eventually extend the scope of geometrical transformations that practical deep-learning pipelines can handle.
arXiv Detail & Related papers (2023-11-13T14:17:57Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in general weighted graphs.
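A generic way to formalize the problem (not necessarily the paper's exact objective): for a weighted adjacency matrix $A$, seek a non-trivial permutation matrix $P$ with small symmetry defect

$$\min_{P \in \Pi_n,\, P \neq I} \|A - P A P^{\top}\|_F,$$

where the defect vanishes exactly when $P$ is an automorphism of the graph.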
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
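The permutation symmetry mentioned here is easy to verify directly: permuting one hidden layer's neurons, i.e. the rows of its weight matrix (and bias) together with the columns of the next layer's weights, leaves the network's function unchanged. A self-contained check (illustrative, not code from the paper):

```python
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(5, 3), torch.randn(5)  # layer 1: R^3 -> R^5
W2, b2 = torch.randn(2, 5), torch.randn(2)  # layer 2: R^5 -> R^2
x = torch.randn(3)

def mlp(W1, b1, W2, b2, x):
    return W2 @ torch.relu(W1 @ x + b1) + b2

P = torch.eye(5)[torch.randperm(5)]  # random 5x5 permutation matrix

out = mlp(W1, b1, W2, b2, x)
out_permuted = mlp(P @ W1, P @ b1, W2 @ P.T, b2, x)  # reorder hidden neurons
print(torch.allclose(out, out_permuted))  # True: same function, different weights
```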
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
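Concretely, a learned transformation $f_\theta$ preserves an oracle $\varphi$ when $\varphi(f_\theta(x)) = \varphi(x)$ on the dataset, which suggests (plausibly; the paper's loss may contain further terms) minimizing an invariance loss such as

$$\mathcal{L}(\theta) = \sum_i \big(\varphi(f_\theta(x_i)) - \varphi(x_i)\big)^2,$$

together with regularizers that exclude the trivial solution $f_\theta = \mathrm{id}$.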
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the resulting framework, TinvNN, strictly guarantees transformation invariance and is general and flexible enough to be combined with existing neural networks.
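The mechanism is easiest to see through classical multi-dimensional scaling, which this line of work modifies: pairwise distances are unchanged by rotations, reflections and translations of the input points, so an embedding computed purely from distances inherits that invariance (up to an orthogonal ambiguity, which the paper's modification addresses). A minimal classical-MDS sketch, not the authors' modified version:

```python
import torch

def classical_mds(X, k=2):
    """Embed points using only their pairwise distances.
    Since distances are invariant to rigid transformations of X,
    so is the resulting embedding (up to an orthogonal transform)."""
    n = X.shape[0]
    D2 = torch.cdist(X, X) ** 2
    J = torch.eye(n) - torch.ones(n, n) / n  # double-centering matrix
    B = -0.5 * J @ D2 @ J                    # Gram matrix of centred points
    vals, vecs = torch.linalg.eigh(B)        # eigenvalues in ascending order
    vals, vecs = vals[-k:].clamp(min=0), vecs[:, -k:]
    return vecs * vals.sqrt()                # top-k MDS coordinates
```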
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
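For illustration, a generic Riemannian gradient step on $O(d)$ (a standard construction, not the paper's specific stochastic-flow algorithm): project the Euclidean gradient onto the tangent space, which consists of skew-symmetric directions, and retract with the matrix exponential so the iterate stays exactly orthogonal:

```python
import torch

def orthogonal_step(X, euclid_grad, lr=0.1):
    """One Riemannian gradient-descent step on the orthogonal group O(d).
    The direction S @ X with S skew-symmetric is tangent to O(d) at X,
    and the matrix exponential of a skew-symmetric matrix is orthogonal,
    so X remains on the manifold to machine precision."""
    A = euclid_grad @ X.T
    S = 0.5 * (A - A.T)                      # skew-symmetric (tangent) part
    return torch.linalg.matrix_exp(-lr * S) @ X
```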
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.