Untrained neural network embedded Fourier phase retrieval from few
measurements
- URL: http://arxiv.org/abs/2307.08717v1
- Date: Sun, 16 Jul 2023 16:23:50 GMT
- Title: Untrained neural network embedded Fourier phase retrieval from few
measurements
- Authors: Liyuan Ma and Hongxia Wang and Ningyi Leng and Ziyang Yuan
- Abstract summary: This paper proposes an untrained neural network embedded algorithm to solve FPR with few measurements.
We use a generative network to represent the image to be recovered, which confines the image to the space defined by the network structure.
To reduce the computational cost mainly caused by the parameter updates of the untrained NN, we develop an accelerated algorithm that adaptively trades off between explicit and implicit regularization.
- Score: 8.914156789222266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fourier phase retrieval (FPR) is a challenging task widely used in various
applications. It involves recovering an unknown signal from its Fourier
phaseless measurements. FPR with few measurements is important for reducing
time and hardware costs, but it suffers from serious ill-posedness. Recently,
untrained neural networks have offered new approaches by introducing learned
priors to alleviate the ill-posedness without requiring any external data.
However, they may not be ideal for reconstructing fine details in images and
can be computationally expensive. This paper proposes an untrained neural
network (NN) embedded algorithm based on the alternating direction method of
multipliers (ADMM) framework to solve FPR with few measurements. Specifically,
we use a generative network to represent the image to be recovered, which
confines the image to the space defined by the network structure. To improve
the ability to represent high-frequency information, total variation (TV)
regularization is imposed to facilitate the recovery of local structures in the
image. Furthermore, to reduce the computational cost mainly caused by the
parameter updates of the untrained NN, we develop an accelerated algorithm that
adaptively trades off between explicit and implicit regularization.
Experimental results indicate that the proposed algorithm outperforms existing
untrained NN-based algorithms with fewer computational resources and even
performs competitively against trained NN-based algorithms.
Related papers
- PRISTA-Net: Deep Iterative Shrinkage Thresholding Network for Coded
Diffraction Patterns Phase Retrieval [6.982256124089]
Phase retrieval is a challenging nonlinear inverse problem in computational imaging and image processing.
We have developed PRISTA-Net, a deep unfolding network based on the first-order iterative shrinkage-thresholding algorithm (ISTA).
All parameters in the proposed PRISTA-Net framework, including the nonlinear transformation, thresholds, and step sizes, are learned end-to-end instead of being hand-set.
arXiv Detail & Related papers (2023-09-08T07:37:15Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Efficient Uncertainty Quantification and Reduction for
Over-Parameterized Neural Networks [23.7125322065694]
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models.
We create statistically guaranteed schemes to principally characterize, and remove, the uncertainty of over-parameterized neural networks.
In particular, our approach, based on what we call a procedural-noise-correcting (PNC) predictor, removes the procedural uncertainty by using only one auxiliary network that is trained on a suitably labeled dataset.
arXiv Detail & Related papers (2023-06-09T05:15:53Z) - A Projection-Based K-space Transformer Network for Undersampled Radial
MRI Reconstruction with Limited Training Subjects [1.5708535232255898]
Non-Cartesian trajectories need to be transformed onto a Cartesian grid in each iteration of the network training.
We propose novel data augmentation methods to generate a large amount of training data from a limited number of subjects.
Experimental results show superior performance of the proposed framework compared to state-of-the-art deep neural networks.
arXiv Detail & Related papers (2022-06-15T00:20:22Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part will be processed using expensive operations and the lower-frequency part is assigned with cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
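The DCT-domain split described above can be sketched as follows. The cutoff rule (thresholding the diagonal frequency index) and the function names are assumptions for illustration; the paper's actual partitioning and the cheap/expensive operations applied to each part are not specified in this summary.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

def frequency_split(block, cutoff):
    """Transform a square block to the DCT domain and separate its
    coefficients into a low-frequency part (cheap processing path)
    and a high-frequency part (expensive processing path)."""
    n = block.shape[0]
    D = dct_matrix(n)
    coef = D @ block @ D.T
    # Diagonal frequency index: small near the DC corner, large at
    # high horizontal+vertical frequencies.
    k = np.add.outer(np.arange(n), np.arange(n))
    low = np.where(k < cutoff, coef, 0.0)
    high = coef - low
    # Inverse DCT of each part; the two parts sum back to the block.
    return D.T @ low @ D, D.T @ high @ D
```

Because the DCT is orthogonal, the two reconstructed parts sum exactly to the original block, so routing them through different-cost branches loses no information at the split itself.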
arXiv Detail & Related papers (2021-03-15T12:54:26Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
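The idea of estimating the gradient of the sign function via sine combinations can be sketched with the truncated Fourier series of the square wave, sign(x) ~ (4/pi) * sum_k sin((2k+1) pi x) / (2k+1) on [-1, 1]; differentiating term by term gives a smooth surrogate gradient. The truncation length and scaling here are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def fourier_sign_grad(x, n_terms=4):
    """Surrogate gradient for sign(x): term-wise derivative of the
    truncated Fourier series of the square wave,
    d/dx [(4/pi) * sin((2k+1) pi x) / (2k+1)] = 4 * cos((2k+1) pi x)."""
    k = np.arange(n_terms)
    x = np.asarray(x, dtype=float)
    # Sum the cosine derivatives of the first n_terms odd harmonics.
    return 4.0 * np.sum(np.cos((2 * k[:, None] + 1) * np.pi * x[None, :]), axis=0)
```

In BNN training, the forward pass would still apply the hard sign to weights or activations, while the backward pass substitutes this smooth estimate for the (almost-everywhere-zero) true derivative.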
arXiv Detail & Related papers (2021-03-01T08:25:26Z) - A computationally efficient reconstruction algorithm for circular
cone-beam computed tomography using shallow neural networks [0.0]
We introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm.
It adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency.
We show that the training time of an NN-FDK network is orders of magnitude lower than that of the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
arXiv Detail & Related papers (2020-10-01T14:10:23Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Compressive sensing with un-trained neural networks: Gradient descent
finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.