Deep Neural-network Prior for Orbit Recovery from Method of Moments
- URL: http://arxiv.org/abs/2304.14604v2
- Date: Tue, 30 Jan 2024 05:14:02 GMT
- Title: Deep Neural-network Prior for Orbit Recovery from Method of Moments
- Authors: Yuehaw Khoo, Sounak Paul and Nir Sharon
- Abstract summary: Two particular orbit recovery problems of interest in this paper are multireference alignment and single-particle cryo-EM modelling.
In order to suppress the noise, we suggest using the method of moments approach for both problems while introducing deep neural network priors.
- Score: 1.4579344926652844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Orbit recovery problems are a class of problems that arise in
practice in various forms. In these problems, we aim to estimate an unknown
function that has been distorted by a group action and observed via a known
operator.
Typically, the observations are contaminated with a non-trivial level of noise.
Two particular orbit recovery problems of interest in this paper are
multireference alignment and single-particle cryo-EM modelling. In order to
suppress the noise, we suggest using the method of moments approach for both
problems while introducing deep neural network priors. In particular, our
neural networks take the moments as input and output the signals and the
distribution of group elements. In the multireference alignment case, we
demonstrate the advantage of using the neural network to accelerate
convergence when reconstructing signals from the moments. Finally, we use
our method to
reconstruct simulated and biological volumes in the cryo-EM setting.
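To make the multireference alignment (MRA) case concrete, the sketch below illustrates the moment-matching idea with a network prior: a small network maps the empirical moments to a candidate signal and shift distribution, and is trained so that the analytic moments of its outputs match the observed ones. The sizes, architecture, and names (`analytic_moments`, `SIGMA`, and so on) are illustrative assumptions, not the authors' implementation.

```python
import torch

L_SIG, N_OBS, SIGMA = 15, 10000, 0.5   # illustrative sizes, not the paper's

def all_shifts(x):
    # stack every cyclic shift of x: rows are R_s x for s = 0..L-1
    return torch.stack([torch.roll(x, s) for s in range(x.shape[-1])])

def analytic_moments(x, rho):
    # moments implied by a candidate signal x and shift distribution rho
    S = all_shifts(x)                                   # (L, L)
    m1 = rho @ S                                        # E[y]
    m2 = torch.einsum('s,si,sj->ij', rho, S, S)         # E[y y^T], noise-free
    return m1, m2

# simulate observations y_j = R_{s_j} x + noise, with s_j ~ rho
torch.manual_seed(0)
x_true = torch.randn(L_SIG)
rho_true = torch.softmax(torch.randn(L_SIG), dim=0)
shifts = torch.multinomial(rho_true, N_OBS, replacement=True)
obs = torch.stack([torch.roll(x_true, int(s)) for s in shifts])
obs += SIGMA * torch.randn_like(obs)

# empirical moments, with the known noise bias removed from the second
m1_emp = obs.mean(dim=0)
m2_emp = obs.T @ obs / N_OBS - SIGMA**2 * torch.eye(L_SIG)

# network prior: moments in -> (signal, shift distribution) out
net = torch.nn.Sequential(
    torch.nn.Linear(L_SIG + L_SIG**2, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 2 * L_SIG))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inp = torch.cat([m1_emp, m2_emp.flatten()])

for step in range(2000):
    out = net(inp)
    x_hat = out[:L_SIG]
    rho_hat = torch.softmax(out[L_SIG:], dim=0)
    m1, m2 = analytic_moments(x_hat, rho_hat)
    loss = (m1 - m1_emp).pow(2).sum() + (m2 - m2_emp).pow(2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that the signal is only identifiable up to a global cyclic shift, so any comparison of `x_hat` against `x_true` should be made modulo such a shift.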
Related papers
- Exact Enforcement of Temporal Continuity in Sequential Physics-Informed Neural Networks [0.0]
We introduce a method to enforce continuity between successive time segments via a solution ansatz.
The method is tested for a number of benchmark problems involving both linear and non-linear PDEs.
Numerical experiments with the proposed method demonstrate superior convergence and accuracy over both traditional PINNs and their soft-constrained counterparts.
arXiv Detail & Related papers (2024-02-15T17:41:02Z)
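The summary above does not spell out the ansatz; here is a minimal sketch, assuming one standard construction: the previous segment's terminal value is built into the trial solution so that continuity at the segment boundary holds exactly rather than through a penalty term.

```python
import torch

class SegmentAnsatz(torch.nn.Module):
    """Trial solution u_k(t) = u_prev + (t - t0) * N_k(t).

    At t = t0 the network term vanishes, so u_k(t0) = u_prev holds
    exactly (a hard constraint), independent of the PINN training loss.
    """
    def __init__(self, t0: float, u_prev: float):
        super().__init__()
        self.t0, self.u_prev = t0, u_prev
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

    def forward(self, t):          # t: (batch, 1)
        return self.u_prev + (t - self.t0) * self.net(t)
```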
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Neural Network Warm-Start Approach for the Inverse Acoustic Obstacle Scattering Problem [7.624866197576227]
We present a neural network warm-start approach for solving the inverse scattering problem.
An initial guess for the optimization problem is obtained using a trained neural network.
The algorithm remains robust to noise in the scattered field measurements and also converges to the true solution for limited aperture data.
arXiv Detail & Related papers (2022-12-16T22:18:48Z)
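As a generic illustration of the warm-start pattern described above (not the paper's scattering solver; `net`, `forward_op`, and `meas` are hypothetical placeholders): a trained network supplies the initial iterate, and a standard local optimizer refines it against the data misfit.

```python
import torch

def warm_start_solve(net, forward_op, meas, n_iters=200, lr=1e-2):
    """Generic warm-start pattern: a trained network supplies the
    initial iterate for a local least-squares refinement, helping it
    avoid the poor local minima a cold start can fall into."""
    x = net(meas).detach().clone().requires_grad_(True)  # learned initial guess
    opt = torch.optim.LBFGS([x], lr=lr, max_iter=n_iters)

    def closure():
        opt.zero_grad()
        loss = (forward_op(x) - meas).pow(2).sum()       # data misfit
        loss.backward()
        return loss

    opt.step(closure)
    return x.detach()
```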
- A Neural-Network-Based Convex Regularizer for Inverse Problems [14.571246114579468]
Deep-learning methods to solve image-reconstruction problems have enabled a significant increase in reconstruction quality.
These new methods often lack reliability and explainability, and there is growing interest in addressing these shortcomings.
In this work, we tackle this issue by revisiting regularizers that are the sum of convex-ridge functions.
The gradient of such regularizers is parameterized by a neural network that has a single hidden layer with increasing and learnable activation functions.
arXiv Detail & Related papers (2022-11-22T18:19:10Z)
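A minimal sketch of the gradient parameterization described above, under some simplifying assumptions (a single activation shared across all ridges, and one particular monotone parameterization): since the activation is increasing, the implied regularizer R(x) = sum_i psi_i(w_i^T x) is a sum of convex ridges with psi_i' equal to the activation.

```python
import torch

class IncreasingActivation(torch.nn.Module):
    """Learnable monotone activation: a nonnegative sum of shifted
    ReLUs is nondecreasing by construction (one possible choice)."""
    def __init__(self, n_knots=21):
        super().__init__()
        self.register_buffer('knots', torch.linspace(-2.0, 2.0, n_knots))
        self.raw = torch.nn.Parameter(torch.zeros(n_knots))

    def forward(self, z):
        w = torch.nn.functional.softplus(self.raw)          # weights >= 0
        return torch.relu(z.unsqueeze(-1) - self.knots) @ w

class ConvexRidgeGradient(torch.nn.Module):
    """grad R(x) = W^T sigma(W x); because sigma is increasing,
    R(x) = sum_i psi_i(w_i^T x) with psi_i' = sigma is convex."""
    def __init__(self, dim, n_ridges):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(n_ridges, dim) / dim**0.5)
        self.act = IncreasingActivation()

    def forward(self, x):                                   # x: (batch, dim)
        return self.act(x @ self.W.T) @ self.W
```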
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
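Spectral bias is typically diagnosed through the eigenvalue decay of the neural tangent kernel; below is a minimal sketch of computing the empirical NTK of a scalar-output network with `torch.func` (the momentum analysis itself goes beyond a snippet).

```python
import torch
from torch.func import functional_call, jacrev

def empirical_ntk(model, x1, x2):
    """Empirical NTK K(x1, x2) = J(x1) J(x2)^T for a scalar-output
    model; its eigenvalue decay quantifies the spectral bias."""
    params = dict(model.named_parameters())

    def f(p, x):
        return functional_call(model, p, (x,)).squeeze(-1)

    j1 = jacrev(f)(params, x1)   # per-parameter Jacobians, shape (n1, *p)
    j2 = jacrev(f)(params, x2)
    return sum(j1[k].flatten(1) @ j2[k].flatten(1).T for k in j1)
```

The kernel's spectrum, e.g. `torch.linalg.eigvalsh(empirical_ntk(model, x, x))`, then indicates which components of the target train quickly and which stall.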
- Consistency of mechanistic causal discovery in continuous-time using Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z)
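The summary gives only the high-level recipe, so the following is a loose sketch of one plausible instantiation (architecture and penalty form are assumptions): a neural vector field whose per-variable input weights carry a group penalty, so that a vanished column norm signals the absence of a causal edge.

```python
import torch

class PenalizedODEField(torch.nn.Module):
    """Neural vector field dx/dt = f(x) over d variables; a group
    penalty on input columns drives spurious dependencies to zero,
    so edge j -> i is read off from the norm of W_in[i, :, j]."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.W_in = torch.nn.Parameter(0.1 * torch.randn(d, hidden, d))
        self.heads = torch.nn.ModuleList(
            [torch.nn.Linear(hidden, 1) for _ in range(d)])

    def forward(self, x):                        # x: (batch, d)
        h = torch.tanh(torch.einsum('bj,ihj->bih', x, self.W_in))
        return torch.cat([m(h[:, i]) for i, m in enumerate(self.heads)], dim=-1)

    def penalty(self):
        return self.W_in.norm(dim=1).sum()       # group lasso over edges
```

Training would fit the field to observed trajectories (for instance via finite-difference derivatives or an ODE solver) with the penalty added under a regularization weight.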
- Denoising Score-Matching for Uncertainty Quantification in Inverse Problems [1.521936393554569]
We propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover.
We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach can also be used to assess the uncertainty on particular features of a reconstructed image.
arXiv Detail & Related papers (2020-11-16T18:33:06Z)
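The learned prior in such a framework is a score network trained by denoising score matching; here is a minimal sketch of the standard single-noise-level DSM objective (names are hypothetical).

```python
import torch

def dsm_loss(score_net, x, sigma=0.1):
    """Denoising score matching: perturb data with Gaussian noise and
    regress the score of the perturbation kernel. The minimizer
    approximates grad_x log p_sigma(x), usable as a learned prior
    inside a Bayesian inverse-problem solver."""
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    target = -noise / sigma        # = grad log N(x_noisy | x, sigma^2 I)
    return (score_net(x_noisy) - target).pow(2).sum(dim=-1).mean()
```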
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
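As a schematic of the equilibrium idea only (an illustrative reading of the summary, not the paper's exact per-unit update rule): clamp the top state to the output error and relax the lower states; the fixed point reproduces the backpropagation recursion for the error signals.

```python
import numpy as np

def relax_to_backprop(pre_acts, Ws, delta_out, n_steps=300, dt=0.2):
    """Schematic relaxation dx_l/dt = -x_l + phi'(a_l) * (W_{l+1}^T x_{l+1})
    with the top state clamped to the output error delta_out.

    pre_acts[l]: pre-activation vector of hidden layer l (bottom to top);
    Ws[l+1]: weight matrix mapping layer l to layer l+1;
    the equilibrium is the backprop recursion for the deltas."""
    xs = [np.zeros_like(a) for a in pre_acts] + [delta_out]
    for _ in range(n_steps):
        for l in range(len(pre_acts)):
            phi_prime = 1.0 - np.tanh(pre_acts[l]) ** 2     # tanh network
            xs[l] += dt * (-xs[l] + phi_prime * (Ws[l + 1].T @ xs[l + 1]))
    return xs[:-1]
```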
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
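A toy sketch of such a min-max formulation on an instrumental-variable example (data, architectures, and the specific game value are illustrative assumptions): the critic u tests the conditional moment restriction E[y - f(x) | z] = 0, while the estimator f minimizes the resulting game value.

```python
import torch

f = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
u = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_u = torch.optim.Adam(u.parameters(), lr=1e-3)

def game(x, y, z):
    # E[u(z)(y - f(x))] - 0.5 E[u(z)^2]: maximizing over u measures the
    # violation of E[y - f(x) | z] = 0; minimizing over f removes it
    r = y - f(x)
    return (u(z) * r).mean() - 0.5 * u(z).pow(2).mean()

for step in range(2000):
    # toy confounded data: z is an instrument, e confounds x and y
    z = torch.randn(256, 1); e = torch.randn(256, 1)
    x = z + 0.5 * e
    y = 2.0 * x + e                             # truth: f*(x) = 2x
    opt_u.zero_grad(); (-game(x, y, z)).backward(); opt_u.step()
    opt_f.zero_grad(); game(x, y, z).backward(); opt_f.step()
```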
- Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how using Koopman modes we can selectively prune the network to speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)
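Koopman spectra of a training trajectory are commonly estimated from weight snapshots via dynamic mode decomposition; here is a minimal exact-DMD sketch (the snapshot-based setup is an assumption about methodological detail).

```python
import numpy as np

def dmd(snapshots):
    """Exact DMD on a trajectory of flattened network weights.
    snapshots: (d, T) array, column t = weights after training step t.
    Returns Koopman eigenvalues and the corresponding modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = int((s > 1e-10 * s[0]).sum())          # truncate to numerical rank
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.T @ Y @ Vh.T / s               # reduced linear propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.T / s) @ W                 # exact DMD modes
    return eigvals, modes
```

Eigenvalues with magnitude near one flag directions that are still evolving, while modes with small magnitudes have settled, which is one way such spectra can inform the layer-count and pruning decisions described above.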
- Theory inspired deep network for instantaneous-frequency extraction and signal components recovery from discrete blind-source data [1.6758573326215689]
This paper is concerned with the inverse problem of recovering the unknown signal components, along with extraction of their frequencies.
None of the existing decomposition methods and algorithms is capable of solving this inverse problem.
We propose a deep neural network synthesized directly from a discrete, possibly non-uniformly sampled, set of samples of the blind-source signal.
arXiv Detail & Related papers (2020-01-31T18:54:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.