What is Wrong with End-to-End Learning for Phase Retrieval?
- URL: http://arxiv.org/abs/2403.15448v1
- Date: Mon, 18 Mar 2024 03:01:53 GMT
- Title: What is Wrong with End-to-End Learning for Phase Retrieval?
- Authors: Wenjie Zhang, Yuxiang Wan, Zhong Zhuang, Ju Sun
- Abstract summary: We show how symmetries in the forward model can cause learning difficulties when data-driven deep learning approaches are used to solve such problems.
We show how to overcome them by preprocessing the training set before any learning, i.e., symmetry breaking.
- Score: 6.464093417537727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For nonlinear inverse problems that are prevalent in imaging science, symmetries in the forward model are common. When data-driven deep learning approaches are used to solve such problems, these intrinsic symmetries can cause substantial learning difficulties. In this paper, we explain how such difficulties arise and, more importantly, how to overcome them by preprocessing the training set before any learning, i.e., symmetry breaking. We take far-field phase retrieval (FFPR), which is central to many areas of scientific imaging, as an example and show that symmetry breaking can substantially improve data-driven learning. We also formulate the mathematical principle of symmetry breaking.
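In FFPR, the measurements are the magnitudes of the Fourier transform of the signal, so distinct signals related by a global phase shift, a circular translation, or a conjugate flip produce identical measurements. The following is a minimal numerical check of these three invariances for a 1-D signal using NumPy; it is an illustrative sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def meas(x):
    # Far-field measurements: Fourier magnitudes |F(x)|.
    return np.abs(np.fft.fft(x))

y = meas(x)

# 1) Global phase: x -> e^{i*theta} * x leaves |F(x)| unchanged.
assert np.allclose(y, meas(np.exp(1j * 0.7) * x))

# 2) Circular translation: shifting x only adds a linear phase in Fourier space.
assert np.allclose(y, meas(np.roll(x, 5)))

# 3) Conjugate flip: x[n] -> conj(x[(-n) mod N]) conjugates F(x).
x_flip = np.conj(np.roll(x[::-1], 1))
assert np.allclose(y, meas(x_flip))
```

Because all three transformed signals map to the same measurements, an end-to-end network trained on raw (measurement, signal) pairs is asked to fit a multi-valued inverse, which is the learning difficulty the paper analyzes.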
Related papers
- Symmetry Considerations for Learning Task Symmetric Robot Policies [12.856889419651521]
Symmetry is a fundamental aspect of many real-world robotic tasks.
Current deep reinforcement learning (DRL) approaches can seldom harness and exploit symmetry effectively.
arXiv Detail & Related papers (2024-03-07T09:41:11Z)
- Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks leading to better generalisation performance.
However, such symmetries provide fixed hard constraints on the functions a network can represent, must be specified in advance, and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z)
- Symmetry Induces Structure and Constraint of Learning [0.0]
We unveil the importance of the loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models.
Common instances of mirror symmetries in deep learning include rescaling, rotation, and permutation symmetry.
We show that the theoretical framework can explain intriguing phenomena, such as the loss of plasticity and various collapse phenomena in neural networks.
arXiv Detail & Related papers (2023-09-29T02:21:31Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- Physics Embedded Machine Learning for Electromagnetic Data Imaging [83.27424953663986]
Electromagnetic (EM) imaging is widely applied in sensing for security, biomedicine, geophysics, and various industries.
It is an ill-posed inverse problem whose solution is usually computationally expensive. Machine learning (ML) techniques and especially deep learning (DL) show potential in fast and accurate imaging.
This article surveys various schemes to incorporate physics in learning-based EM imaging.
arXiv Detail & Related papers (2022-07-26T02:10:15Z)
- SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data [77.53134858717728]
We build on the strengths of recent advances in neural reconstruction and rendering such as Neural Radiance Fields (NeRF).
We apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour and reflectivity.
We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel view images.
arXiv Detail & Related papers (2022-06-13T17:37:50Z)
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
- Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training [4.318555434063275]
Deep learning algorithms are responsible for a technological revolution in a variety of tasks including image recognition or Go playing.
Yet, why they work is not understood. Ultimately, they manage to classify data lying in high dimension -- a feat generically impossible.
We argue that different learning regimes can be organized into a phase diagram.
arXiv Detail & Related papers (2020-12-30T11:00:36Z)
- From Symmetry to Geometry: Tractable Nonconvex Problems [20.051126124841076]
We discuss the role of curvature in the landscape and the different roles of symmetries.
This area is rich with observed phenomena and open problems; we close by pointing to directions for future research.
arXiv Detail & Related papers (2020-07-14T01:19:15Z)
- Deep Learning Techniques for Inverse Problems in Imaging [102.30524824234264]
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems.
We present a taxonomy that can be used to categorize different problems and reconstruction methods.
arXiv Detail & Related papers (2020-05-12T18:35:55Z)
- Inverse Problems, Deep Learning, and Symmetry Breaking [6.54545059421233]
In many physical systems, inputs related by intrinsic system symmetries are mapped to the same output.
We show that careful symmetry breaking on the training data can help get rid of the difficulties and significantly improve the learning performance.
arXiv Detail & Related papers (2020-03-20T02:43:57Z)
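The symmetry breaking advocated in the FFPR paper and in the entry above amounts to mapping every training target to a fixed representative of its symmetry orbit before learning, so that the network fits a single-valued function. The toy sketch below canonicalizes a real 1-D signal under circular shifts and flips by picking the lexicographically smallest element of the orbit; this is one simple canonicalization chosen for illustration, not the papers' exact scheme.

```python
import numpy as np

def canonicalize(x):
    """Map x to a canonical representative of its orbit under
    circular shifts and flips: enumerate the orbit and return the
    lexicographically smallest element. Illustrative only."""
    orbit = []
    for z in (x, x[::-1]):          # original and flipped signal
        for s in range(len(z)):     # all circular shifts of each
            orbit.append(tuple(np.roll(z, s)))
    return np.array(min(orbit))

x = np.array([3., 1., 4., 1., 5.])
# Any two signals in the same orbit map to the same representative:
assert np.array_equal(canonicalize(x), canonicalize(np.roll(x, 2)))
assert np.array_equal(canonicalize(x), canonicalize(x[::-1]))
```

After such preprocessing, all symmetric copies of a target collapse to one representative, removing the one-to-many ambiguity that otherwise confuses end-to-end training.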
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.