Imaging with Equivariant Deep Learning
- URL: http://arxiv.org/abs/2209.01725v1
- Date: Mon, 5 Sep 2022 02:13:57 GMT
- Title: Imaging with Equivariant Deep Learning
- Authors: Dongdong Chen, Mike Davies, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb, Ferdia Sherry, Julián Tachella
- Abstract summary: We review the emerging field of equivariant imaging and show how it can provide improved generalization and new imaging opportunities.
We show the interplay between the acquisition physics and group actions and links to iterative reconstruction, blind compressed sensing and self-supervised learning.
- Score: 9.333799633608345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From early image processing to modern computational imaging, successful
models and algorithms have relied on a fundamental property of natural signals:
symmetry. Here symmetry refers to the invariance property of signal sets to
transformations such as translation, rotation or scaling. Symmetry can also be
incorporated into deep neural networks in the form of equivariance, allowing
for more data-efficient learning. While there have been important advances in
the design of end-to-end equivariant networks for image classification in
recent years, computational imaging introduces unique challenges for
equivariant network solutions since we typically only observe the image through
some noisy ill-conditioned forward operator that itself may not be equivariant.
We review the emerging field of equivariant imaging and show how it can provide
improved generalization and new imaging opportunities. Along the way we show
the interplay between the acquisition physics and group actions and links to
iterative reconstruction, blind compressed sensing and self-supervised
learning.
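To make the central idea concrete: if the signal set is invariant under a group of transformations T_g, a reconstruction network f can be trained self-supervised by asking the composition of f with the physics to commute with the group action, even though the forward operator A itself is not equivariant. Below is a minimal PyTorch sketch of such an equivariance-based loss. It assumes an inpainting-style operator A that maps images to masked images, so the network can consume measurements directly; the names `model` and `physics` and the choice of rotation group are illustrative, not prescribed by the paper.
```python
import torch
import torchvision.transforms.functional as TF

def ei_loss(model, y, physics, angles=(90, 180, 270)):
    """Self-supervised equivariant-imaging loss (sketch).

    model   : reconstruction network, measurements -> images
    y       : observed measurements y = A x (image-shaped for inpainting)
    physics : callable implementing the forward operator A
    """
    x1 = model(y)  # reconstruct from measurements
    # measurement consistency: A x1 should reproduce the observed y
    mc = torch.nn.functional.mse_loss(physics(x1), y)
    # equivariance: the composed map f(A(.)) should commute with rotations
    eq = 0.0
    for a in angles:
        x2 = TF.rotate(x1, a)      # transformed "virtual" image
        x3 = model(physics(x2))    # re-measure and re-reconstruct it
        eq = eq + torch.nn.functional.mse_loss(x3, x2)
    return mc + eq / len(angles)
```
Restricting to multiples of 90 degrees keeps the rotations exact, so no interpolation error enters the loss.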
Related papers
- Invariant Shape Representation Learning For Image Classification [41.610264291150706]
In this paper, we introduce a novel framework that, for the first time, develops invariant shape representation learning (ISRL).
Our model ISRL is designed to jointly capture invariant features in latent shape spaces parameterized by deformable transformations.
By embedding the features that are invariant with regard to target variables in different environments, our model consistently offers more accurate predictions.
arXiv Detail & Related papers (2024-11-19T03:39:43Z)
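The abstract above does not spell out a loss, but a generic way to encourage latent features that are invariant to deformable transformations is to penalize the latent gap between an image and a randomly warped copy. The sketch below illustrates that general idea only, not the authors' objective; `encoder` and `random_warp` are hypothetical callables.
```python
import torch

def invariance_penalty(encoder, x, random_warp):
    """Encourage latent features to be invariant to deformations (sketch)."""
    z = encoder(x)
    z_warped = encoder(random_warp(x))  # same image, randomly deformed
    return torch.nn.functional.mse_loss(z_warped, z)
```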
- Equivariant plug-and-play image reconstruction [10.781078029828473]
Plug-and-play algorithms can leverage powerful pre-trained denoisers to solve inverse imaging problems.
We show that enforcing equivariance to certain groups of transformations on the denoiser improves the stability of the algorithm as well as its reconstruction quality.
Experiments on multiple imaging modalities and denoising networks show that the equivariant plug-and-play algorithm improves both the reconstruction performance and the stability compared to their non-equivariant counterparts.
arXiv Detail & Related papers (2023-12-04T12:07:39Z)
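The core construction is easy to emulate: wrap a pretrained denoiser so that each call applies a randomly drawn group transformation, denoises, and transforms back, which enforces equivariance to that group in expectation. A minimal sketch for 90-degree rotations and horizontal flips follows; the exact group and averaging scheme in the paper may differ.
```python
import random
import torch

def equivariant_denoise(denoiser, x):
    """Monte-Carlo equivariant version of a denoiser (sketch).

    Draws a random group element g and conjugates the denoiser by it:
    D_g = T_g^{-1} o D o T_g.
    """
    k = random.randrange(4)          # random 90-degree rotation
    flip = random.random() < 0.5     # random horizontal flip
    g_x = x.flip(-1) if flip else x
    g_x = torch.rot90(g_x, k, dims=(-2, -1))
    out = denoiser(g_x)              # denoise the transformed image
    out = torch.rot90(out, -k, dims=(-2, -1))
    return out.flip(-1) if flip else out
```
Averaging over several draws (or the whole group) yields a denoiser that is equivariant on average, which is what stabilizes the plug-and-play iterations.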
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
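For a weighted graph with adjacency matrix A, a permutation P is an (approximate) automorphism when P A Pᵀ ≈ A, so a natural score is the Frobenius norm of the mismatch. Here is a small sketch of that check; the paper's actual search algorithm is not reproduced.
```python
import numpy as np

def automorphism_error(adj, perm):
    """Frobenius-norm violation of P A P^T = A for a candidate permutation.

    adj  : (n, n) weighted adjacency matrix
    perm : length-n integer array, perm[i] = image of node i
    """
    permuted = adj[np.ix_(perm, perm)]  # rows and columns reordered: P A P^T
    return np.linalg.norm(permuted - adj)
```
The identity permutation scores zero; an exact automorphism of an unweighted graph also scores zero.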
- SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z) - The Lie Derivative for Measuring Learned Equivariance [84.29366874540217]
We study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures.
We find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities.
Surprisingly, transformers can be more equivariant than convolutional neural networks after training.
arXiv Detail & Related papers (2022-10-06T15:20:55Z)
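The Lie derivative can be approximated with a finite difference: transform the input by a small group element and compare the output with the untransformed one. A rough sketch for measuring rotational invariance of a classifier follows; the paper's estimator handles general layers and vector fields more carefully, and the names here are illustrative.
```python
import math
import torch
import torchvision.transforms.functional as TF

def rotation_lie_derivative_norm(model, x, eps_deg=1.0):
    """Finite-difference estimate of the Lie derivative norm for a classifier.

    For an invariance task the Lie derivative is (d/dt) f(g_t x) at t = 0,
    approximated by (f(g_eps x) - f(x)) / eps with eps in radians.
    """
    eps = math.radians(eps_deg)
    with torch.no_grad():
        fx = model(x)
        fgx = model(TF.rotate(x, eps_deg))  # rotate input by a small angle
    return ((fgx - fx) / eps).norm().item()
```
A perfectly invariant classifier would return zero up to interpolation error from the rotation.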
- Leveraging Equivariant Features for Absolute Pose Regression [9.30597356471664]
We show that a translation and rotation equivariant Convolutional Neural Network directly induces representations of camera motions into the feature space.
We then show that this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations.
arXiv Detail & Related papers (2022-04-05T12:44:20Z)
- Equivariant neural networks for inverse problems [1.7942265700058986]
We show that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods.
We design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.
arXiv Detail & Related papers (2021-02-23T05:38:41Z)
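Concretely, a learned proximal-gradient scheme alternates a physics-driven gradient step with a learned proximal network. In the paper the proximal operators are group-equivariant CNNs; a roto-translation-equivariant model (e.g., built with a library such as escnn) would be dropped in where the sketch below uses generic modules. The callables `forward_op` and `adjoint_op` for A and Aᵀ are assumptions of this sketch.
```python
def learned_proximal_gradient(y, forward_op, adjoint_op, prox_nets, step=0.1):
    """Unrolled proximal gradient with learned proximal operators (sketch).

    y          : measurements
    forward_op : A,   image -> measurement
    adjoint_op : A^T, measurement -> image
    prox_nets  : list of modules, one per iteration; in the paper these
                 are group-equivariant CNNs
    """
    x = adjoint_op(y)                         # simple initialization A^T y
    for prox in prox_nets:
        grad = adjoint_op(forward_op(x) - y)  # gradient of 0.5 ||A x - y||^2
        x = prox(x - step * grad)             # learned proximal step
    return x
```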
- Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
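The reparameterization is essentially a factorization of the weights: each layer's weight vector is produced as W = U v, where the sharing matrix U is meta-learned across tasks and the filter parameters v are learned per task. A minimal sketch of such a layer, with dimensions and initialization chosen purely for illustration:
```python
import torch
import torch.nn as nn

class ReparamLinear(nn.Module):
    """Linear layer with weights W = reshape(U @ v) (sketch).

    U (sharing pattern) is meta-learned across tasks; v are per-task
    filter parameters. A learned U can recover e.g. convolutional sharing.
    """

    def __init__(self, in_features, out_features, n_filter_params):
        super().__init__()
        self.shape = (out_features, in_features)
        self.U = nn.Parameter(torch.randn(out_features * in_features,
                                          n_filter_params) * 0.01)
        self.v = nn.Parameter(torch.randn(n_filter_params) * 0.1)

    def forward(self, x):
        W = (self.U @ self.v).view(self.shape)
        return x @ W.t()
```
In the meta-learning loop, only v would be adapted in the inner task updates while U is updated in the outer loop.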
- Group Equivariant Generative Adversarial Networks [7.734726150561089]
In this work, we explicitly incorporate inductive symmetry priors into the network architectures via group-equivariant convolutional networks.
Group-equivariant networks have higher expressive power with fewer samples and lead to better gradient feedback between generator and discriminator.
arXiv Detail & Related papers (2020-05-04T17:38:49Z)
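A full p4 group convolution lifts feature maps to functions on the group; a much cruder way to see how a rotational symmetry prior enters a convolutional layer is to symmetrize the kernels over 90-degree rotations, which makes the layer commute with those rotations at the cost of expressiveness. The sketch below is that simplified stand-in, not the group convolution used in the papers above.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class C4SymmetricConv2d(nn.Conv2d):
    """Conv layer whose (square) kernels are averaged over 90-degree
    rotations, so conv(rot90(x)) == rot90(conv(x)) up to boundary effects.
    """

    def forward(self, x):
        w = sum(torch.rot90(self.weight, k, dims=(2, 3)) for k in range(4)) / 4
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```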
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.