DeepInit Phase Retrieval
- URL: http://arxiv.org/abs/2007.08214v1
- Date: Thu, 16 Jul 2020 09:39:28 GMT
- Title: DeepInit Phase Retrieval
- Authors: Martin Reiche and Peter Jung
- Abstract summary: This paper shows how data-driven deep generative models can be utilized to solve challenging phase retrieval problems.
It shows that the proposed hybrid approach delivers highly accurate reconstructions at low sampling rates.
- Score: 10.385009647156407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper shows how data-driven deep generative models can be utilized to
solve challenging phase retrieval problems, in which one wants to reconstruct a
signal from only few intensity measurements. Classical iterative algorithms are
known to work well if initialized close to the optimum but otherwise suffer
from non-convexity and often get stuck in local minima. We therefore propose
DeepInit Phase Retrieval, which uses regularized gradient descent under a deep
generative data prior to compute a trained initialization for a fast classical
algorithm (e.g. the randomized Kaczmarz method). We empirically show that our
hybrid approach is able to deliver highly accurate reconstructions at low
sampling rates even when there is significant generator model error.
Conceptually, learned initializations may therefore help to overcome the
non-convexity of the problem by starting classical descent steps closer to the
global optimum. Our approach also demonstrates superior runtime performance over
conventional gradient-based reconstruction methods. We evaluate our method for
generic measurements and show empirically that it is also applicable to
diffraction-type measurement models which are found in terahertz single-pixel
phase retrieval.
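The two-stage idea in the abstract (a learned initialization followed by a fast classical solver) can be illustrated with a minimal sketch. The snippet below is illustrative only and makes simplifying assumptions: measurements are real Gaussian, and the "trained" initialization is stood in by a perturbed ground truth, whereas in DeepInit it would come from regularized gradient descent over a generator's latent space. The refinement stage is a randomized Kaczmarz solver for magnitude measurements, with the current iterate's sign resolving the phase lost in each measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_kaczmarz_pr(A, y, x0, iters=4000):
    """Randomized Kaczmarz for real phase retrieval: given magnitudes
    y = |A x|, repeatedly project onto the hyperplane of a randomly
    chosen row, using the current estimate's sign to resolve the
    sign lost in the magnitude measurement."""
    m, n = A.shape
    x = x0.copy()
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()  # rows sampled prop. to squared norm
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        b = np.sign(a @ x) * y[i]        # sign estimate from current iterate
        x = x + (b - a @ x) / row_norms[i] * a
    return x

# Generic real Gaussian measurements (an assumption; the paper also
# treats diffraction-type models).
n, m = 64, 8 * 64
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)

# Stand-in for the trained initialization: a noisy copy of the signal.
x0 = x_true + 0.3 * rng.standard_normal(n)
x_hat = randomized_kaczmarz_pr(A, y, x0)

# Phase retrieval recovers x only up to global sign.
rel_err = min(np.linalg.norm(x_hat - x_true),
              np.linalg.norm(x_hat + x_true)) / np.linalg.norm(x_true)
print(rel_err)
```

The sketch mirrors the paper's conceptual point: Kaczmarz refinement is cheap per iteration, but it only converges reliably when started inside the basin of the global optimum, which is exactly what a learned initialization is meant to provide.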
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - Using Intermediate Forward Iterates for Intermediate Generator Optimization [14.987013151525368]
Intermediate Generator Optimization can be incorporated into any standard autoencoder pipeline for the generative task.
We show applications of the IGO on two dense predictive tasks viz., image extrapolation, and point cloud denoising.
arXiv Detail & Related papers (2023-02-05T08:46:15Z) - Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods [0.6554326244334866]
Sharpness is an assumption in continuous optimization that bounds the distance from minima by objective function suboptimality.
Sharpness involves problem-specific constants that are typically unknown, and restart schemes typically reduce convergence rates.
We consider the assumption of approximate sharpness, a generalization of sharpness that incorporates an unknown constant perturbation to the objective function error.
arXiv Detail & Related papers (2023-01-05T19:01:41Z) - An adjoint-free algorithm for conditional nonlinear optimal perturbations (CNOPs) via sampling [5.758073912084367]
We propose a sampling algorithm based on state-of-the-art statistical machine learning techniques to obtain conditional nonlinear optimal perturbations (CNOPs).
The sampling approach replaces gradient computations with objective function evaluations (zeroth-order information).
We demonstrate the obtained CNOPs in terms of their spatial patterns, objective values, computation times, and nonlinear error growth.
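The zeroth-order idea in this entry, estimating a gradient from objective values alone via random sampling, can be shown with a short sketch. This is a generic Gaussian-smoothing estimator, not the paper's specific algorithm; the toy objective and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampled_gradient(f, x, sigma=1e-3, n_samples=4000):
    """Gaussian-smoothing gradient estimate using only objective
    evaluations (zeroth-order information): average the forward
    differences f(x + sigma*u) - f(x) along random directions u."""
    u = rng.standard_normal((n_samples, x.size))
    fx = f(x)
    vals = np.array([f(x + sigma * ui) for ui in u])
    return ((vals - fx)[:, None] * u).mean(axis=0) / sigma

f = lambda z: float(z @ z)          # toy objective with known gradient 2z
x = np.array([1.0, -2.0, 0.5])
g = sampled_gradient(f, x)
print(g)                            # approximately [2., -4., 1.]
```

Each sample costs one objective evaluation, so the estimator trades gradient (adjoint) computations for function calls, which is the appeal when an adjoint model is unavailable.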
arXiv Detail & Related papers (2022-08-01T16:07:22Z) - Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
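The amplitude-based empirical loss that this entry analyzes can be minimized with plain gradient descent; the sketch below shows the loss and its (generalized) gradient for real Gaussian measurements. This is an illustration of the loss, not the paper's algorithm, and the close initialization is an assumption standing in for a proper initialization scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def amplitude_loss_descent(A, y, x0, step=0.4, iters=1000):
    """Gradient descent on the amplitude-based empirical loss
    f(x) = 1/(2m) * sum_i (|a_i^T x| - y_i)^2  (real case).
    The generalized gradient uses sign(a_i^T x) for |.|."""
    m = A.shape[0]
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = x - step * (A.T @ (Ax - y * np.sign(Ax))) / m
    return x

n, m = 32, 8 * 32
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)                          # amplitude measurements
x0 = x_true + 0.2 * rng.standard_normal(n)      # stand-in for a good init
x_hat = amplitude_loss_descent(A, y, x0)

# Recovery is only up to a global sign flip.
rel_err = min(np.linalg.norm(x_hat - x_true),
              np.linalg.norm(x_hat + x_true)) / np.linalg.norm(x_true)
print(rel_err)
```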
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - Robust Regression Revisited: Acceleration and Improved Estimation Rates [25.54653340884806]
We study fast algorithms for statistical regression problems under the strong contamination model.
The goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples.
We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees.
arXiv Detail & Related papers (2021-06-22T17:21:56Z) - Data-driven Weight Initialization with Sylvester Solvers [72.11163104763071]
We propose a data-driven scheme to initialize the parameters of a deep neural network.
We show that our proposed method is especially effective in few-shot and fine-tuning settings.
arXiv Detail & Related papers (2021-05-02T07:33:16Z) - Unfolded Algorithms for Deep Phase Retrieval [16.14838937433809]
We propose a hybrid model-based data-driven deep architecture, referred to as Unfolded Phase Retrieval (UPR).
The proposed method benefits from versatility and interpretability of well-established model-based algorithms.
We consider a joint design of the sensing matrix and the signal processing algorithm and utilize the deep unfolding technique in the process.
arXiv Detail & Related papers (2020-12-21T03:46:17Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with non-convexity renders learning susceptible to problems such as poor initialization.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.