Equivariant neural networks for inverse problems
- URL: http://arxiv.org/abs/2102.11504v1
- Date: Tue, 23 Feb 2021 05:38:41 GMT
- Title: Equivariant neural networks for inverse problems
- Authors: Elena Celledoni, Matthias J. Ehrhardt, Christian Etmann, Brynjulf
Owren, Carola-Bibiane Schönlieb and Ferdia Sherry
- Abstract summary: We show that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods.
We design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.
- Score: 1.7942265700058986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years the use of convolutional layers to encode an inductive bias
(translational equivariance) in neural networks has proven to be a very
fruitful idea. The successes of this approach have motivated a line of research
into incorporating other symmetries into deep learning methods, in the form of
group equivariant convolutional neural networks. Much of this work has been
focused on roto-translational symmetry of $\mathbf R^d$, but other examples are
the scaling symmetry of $\mathbf R^d$ and rotational symmetry of the sphere. In
this work, we demonstrate that group equivariant convolutional operations can
naturally be incorporated into learned reconstruction methods for inverse
problems that are motivated by the variational regularisation approach. Indeed,
if the regularisation functional is invariant under a group symmetry, the
corresponding proximal operator will satisfy an equivariance property with
respect to the same group symmetry. As a result of this observation, we design
learned iterative methods in which the proximal operators are modelled as group
equivariant convolutional neural networks. We use roto-translationally
equivariant operations in the proposed methodology and apply it to the problems
of low-dose computerised tomography reconstruction and subsampled magnetic
resonance imaging reconstruction. The proposed methodology is demonstrated to
improve the reconstruction quality of a learned reconstruction method at a
small extra computational cost at training time and no extra cost at test
time.
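The abstract's key observation can be checked numerically: if a regularisation functional J is invariant under a group action (here, orthogonal transformations of R^d), its proximal operator commutes with that action, i.e. prox_J(Qx) = Q prox_J(x). The sketch below is not from the paper; it illustrates the principle with the rotation-invariant functional J(x) = λ‖x‖₂, whose proximal operator is the standard block soft-thresholding formula.

```python
import numpy as np

def prox_l2(x, lam):
    # Proximal operator of J(x) = lam * ||x||_2 (block soft-thresholding):
    # prox_J(x) = max(0, 1 - lam/||x||_2) * x
    nrm = np.linalg.norm(x)
    if nrm == 0.0:
        return x
    return max(0.0, 1.0 - lam / nrm) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(3)

# Random orthogonal matrix via QR decomposition; J is invariant under it.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

lam = 0.5
lhs = prox_l2(Q @ x, lam)   # apply the symmetry first, then the prox
rhs = Q @ prox_l2(x, lam)   # apply the prox first, then the symmetry

# Equivariance: the two orders of operations agree.
assert np.allclose(lhs, rhs)
```

This is the property that motivates replacing the proximal operators in learned iterative schemes by group equivariant convolutional networks: the equivariance the exact prox would have is built into the learned surrogate by construction.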
Related papers
- Stochastic Neural Network Symmetrisation in Markov Categories [2.0668277618112203]
We consider the problem of symmetrising a neural network along a group homomorphism.
We obtain a flexible, compositional, and generic framework for symmetrisation.
arXiv Detail & Related papers (2024-06-17T17:54:42Z) - Symmetry Breaking and Equivariant Neural Networks [17.740760773905986]
We introduce a novel notion of 'relaxed equivariance'.
We show how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs).
The relevance of symmetry breaking is then discussed in various application domains.
arXiv Detail & Related papers (2023-12-14T15:06:48Z) - Affine Invariance in Continuous-Domain Convolutional Neural Networks [6.019182604573028]
This research studies affine invariance on continuous-domain convolutional neural networks.
We introduce a new criterion to assess the similarity of two input signals under affine transformations.
Our research could eventually extend the scope of geometrical transformations that practical deep-learning pipelines can handle.
arXiv Detail & Related papers (2023-11-13T14:17:57Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
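The permutation symmetry referred to above can be demonstrated directly. The following sketch (an illustration, not code from the paper) shows that permuting the hidden neurons of a one-hidden-layer ReLU MLP, by permuting the rows of the first weight matrix and bias together with the columns of the second, leaves the network's function unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))  # input -> hidden
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((2, 4))  # hidden -> output

def mlp(x, W1, b1, W2):
    # One-hidden-layer ReLU network
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

x = rng.standard_normal(3)
perm = rng.permutation(4)

out_original = mlp(x, W1, b1, W2)
# Reorder hidden neurons: rows of W1/b1 and matching columns of W2.
out_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm])

# The function computed by the network is unchanged.
assert np.allclose(out_original, out_permuted)
```

A neural functional that processes such weights should therefore be equivariant (or invariant) to these hidden-neuron permutations, which is the design constraint the paper builds on.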
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z) - Generative Adversarial Symmetry Discovery [19.098785309131458]
LieGAN represents symmetry as interpretable Lie algebra basis and can discover various symmetries.
The learned symmetry can also be readily used in several existing equivariant neural networks to improve accuracy and generalization in prediction.
arXiv Detail & Related papers (2023-02-01T04:28:36Z) - Unifying O(3) Equivariant Neural Networks Design with Tensor-Network Formalism [12.008737454250463]
We propose using fusion diagrams, a technique widely employed in simulating SU($2$)-symmetric quantum many-body problems, to design new equivariant components for equivariant neural networks.
When applied to particles within a given local neighborhood, the resulting components, which we term "fusion blocks," serve as universal approximators of any continuous equivariant function.
Our approach, which combines tensor networks with equivariant neural networks, suggests a potentially fruitful direction for designing more expressive equivariant neural networks.
arXiv Detail & Related papers (2022-11-14T16:06:59Z) - Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.