Convergence Guarantees of Overparametrized Wide Deep Inverse Prior
- URL: http://arxiv.org/abs/2303.11265v1
- Date: Mon, 20 Mar 2023 16:49:40 GMT
- Title: Convergence Guarantees of Overparametrized Wide Deep Inverse Prior
- Authors: Nathan Buskulic, Yvain Quéau, Jalal Fadili
- Abstract summary: Deep Inverse Priors (DIP) is an unsupervised approach that optimizes a network to transform a random input into an object whose image under the forward model matches the observation.
We provide overparametrization bounds under which such network trained via continuous-time gradient descent will converge exponentially fast with high probability.
This work is thus a first step towards a theoretical understanding of overparametrized DIP networks, and more broadly it contributes to the theoretical understanding of neural networks in inverse problem settings.
- Score: 1.5362025549031046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have become a prominent approach to solve inverse problems in
recent years. Amongst the different existing methods, the Deep Image/Inverse
Priors (DIPs) technique is an unsupervised approach that optimizes a highly
overparametrized neural network to transform a random input into an object
whose image under the forward model matches the observation. However, the level
of overparametrization necessary for such methods remains an open problem. In
this work, we aim to investigate this question for a two-layer neural network
with a smooth activation function. We provide overparametrization bounds under
which such a network trained via continuous-time gradient descent will converge
exponentially fast with high probability, which allows us to derive recovery
prediction bounds. This work is thus a first step towards a theoretical
understanding of overparametrized DIP networks, and more broadly it
contributes to the theoretical understanding of neural networks in inverse
problem settings.
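
To make the setup concrete, the DIP objective described above amounts to minimizing 0.5 * ||A g_theta(z) - y||^2 over the network parameters theta, where z is a fixed random input, A the forward operator, and y the observation. The NumPy sketch below is only an illustration under assumed choices: the problem sizes are made up, the output layer is kept fixed for simplicity, and discrete gradient descent stands in for the continuous-time gradient flow analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: recover a signal x from y = A x.
n, m, k = 64, 32, 2048                         # signal dim, measurements, hidden width
A = rng.standard_normal((m, n)) / np.sqrt(m)   # linear forward operator
x_true = rng.standard_normal(n)
y = A @ x_true                                 # noiseless observation

# Two-layer network g(W) = V tanh(W z) with a fixed random input z.
d = 16                                         # dimension of the random input z
z = rng.standard_normal(d)
W = rng.standard_normal((k, d)) / np.sqrt(d)   # trained layer
V = rng.standard_normal((n, k)) / np.sqrt(k)   # output layer, kept fixed here for simplicity

def g(W):
    return V @ np.tanh(W @ z)

# Plain gradient descent on the DIP loss 0.5 * ||A g(W) - y||^2
# (a discrete-time stand-in for the continuous-time gradient flow of the paper).
lr = 1e-2
for _ in range(5000):
    u = np.tanh(W @ z)                          # hidden activations
    r = A @ (V @ u) - y                         # residual in measurement space
    grad_W = ((1.0 - u**2) * (V.T @ (A.T @ r)))[:, None] * z[None, :]
    W -= lr * grad_W

print("final data-fit loss:", 0.5 * np.linalg.norm(A @ g(W) - y) ** 2)
```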
Related papers
- Rotation Equivariant Proximal Operator for Deep Unfolding Methods in Image Restoration [62.41329042683779]
We propose a high-accuracy rotation equivariant proximal network that embeds rotation symmetry priors into the deep unfolding framework.
arXiv Detail & Related papers (2023-12-25T11:53:06Z) - Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems [2.6695224599322214]
We provide deterministic convergence and recovery guarantees for the class of unsupervised feedforward multilayer neural networks trained to solve inverse problems.
We also derive overparametrization bounds under which a two-layer Deep Inverse Prior network with smooth activation function will benefit from our guarantees.
arXiv Detail & Related papers (2023-09-21T14:48:02Z) - Correlative Information Maximization: A Biologically Plausible Approach
to Supervised Deep Neural Networks without Weight Symmetry [43.584567991256925]
We propose a new normative approach to describe the signal propagation in biological neural networks in both forward and backward directions.
This framework addresses many concerns about the biological plausibility of conventional artificial neural networks and the backpropagation algorithm.
Our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths.
arXiv Detail & Related papers (2023-06-07T22:14:33Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Imbedding Deep Neural Networks [0.0]
Continuous depth neural networks, such as Neural ODEs, have refashioned the understanding of residual neural networks in terms of non-linear vector-valued optimal control problems.
We propose a new approach which explicates the network's depth as a fundamental variable, thus reducing the problem to a system of forward-facing initial value problems.
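
For readers unfamiliar with the continuous-depth viewpoint, the minimal NumPy sketch below (illustrative only, not the authors' method; the dimensions and dynamics are made up) treats depth as a continuous variable t and solves the initial value problem h'(t) = f(h(t)) with forward Euler; each Euler step has exactly the form of a residual-network update.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8                                          # state (feature) dimension
W = rng.standard_normal((d, d)) / np.sqrt(d)
b = np.zeros(d)

def f(h):
    # Velocity field of the continuous-depth dynamics h'(t) = f(h(t)).
    return np.tanh(W @ h + b)

def forward(h0, T=1.0, n_steps=20):
    # Forward Euler solution of the initial value problem on [0, T]:
    # each step is a residual-style update h <- h + dt * f(h).
    dt = T / n_steps
    h = h0
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

h0 = rng.standard_normal(d)                    # input features = initial condition
print(forward(h0))
```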
arXiv Detail & Related papers (2022-01-31T22:00:41Z) - On the Explicit Role of Initialization on the Convergence and Implicit
Bias of Overparametrized Linear Networks [1.0323063834827415]
We present a novel analysis of single-hidden-layer linear networks trained under gradient flow.
We show that the squared loss converges exponentially to its optimum.
We derive a novel non-asymptotic upper-bound on the distance between the trained network and the min-norm solution.
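
As a rough numerical companion to this result (NumPy, a forward-Euler discretization of gradient flow with made-up sizes; this is not the paper's analysis), the sketch below trains a single-hidden-layer linear network on an underdetermined least-squares problem and reports the final squared loss together with the distance of the end-to-end map from the min-norm solution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Underdetermined least-squares task: fit Y ~ W2 @ W1 @ X with a linear "network".
n_in, n_hid, n_out, n_samples = 20, 50, 1, 10
X = rng.standard_normal((n_in, n_samples))
Y = rng.standard_normal((n_out, n_samples))

W1 = rng.standard_normal((n_hid, n_in)) / np.sqrt(n_in)
W2 = rng.standard_normal((n_out, n_hid)) / np.sqrt(n_hid)

def loss(W1, W2):
    return 0.5 * np.linalg.norm(W2 @ W1 @ X - Y) ** 2

# Forward-Euler discretization of gradient flow with a small step size.
dt = 1e-3
for _ in range(20000):
    R = W2 @ W1 @ X - Y            # residual
    G1 = W2.T @ R @ X.T            # dL/dW1
    G2 = R @ (W1 @ X).T            # dL/dW2
    W1 -= dt * G1
    W2 -= dt * G2

# Compare the end-to-end map with the min-norm least-squares solution Y X^+.
W_end_to_end = W2 @ W1
W_min_norm = Y @ np.linalg.pinv(X)
print("final loss:", loss(W1, W2))
print("distance to min-norm solution:", np.linalg.norm(W_end_to_end - W_min_norm))
```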
arXiv Detail & Related papers (2021-05-13T15:13:51Z) - Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with this proposed solution for extracting aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale when the model is a deep neural network.
In theory, our method requires many fewer communication rounds while maintaining convergence guarantees.
Experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
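
As background for the optimization difficulty mentioned above, the small NumPy sketch below (illustrative only, a single linear "layer" with made-up data, not any specific surveyed method) binarizes weights with sign() in the forward pass and uses a straight-through estimator so that gradients can still reach the latent full-precision weights despite the discontinuity.

```python
import numpy as np

rng = np.random.default_rng(4)

w_real = rng.standard_normal(6) * 0.3          # latent full-precision weights
x = rng.standard_normal(6)                     # toy input
y_target = 1.0
lr = 0.1

for _ in range(100):
    w_bin = np.sign(w_real)                    # forward: binary weights in {-1, +1}
    y = w_bin @ x
    grad_y = y - y_target                      # d(0.5*(y - y_target)^2)/dy
    grad_w_bin = grad_y * x                    # gradient w.r.t. the binary weights
    # Straight-through estimator: pass the gradient to the latent weights,
    # clipped to |w_real| <= 1 so saturated weights stop moving.
    grad_w_real = grad_w_bin * (np.abs(w_real) <= 1.0)
    w_real -= lr * grad_w_real

print("binary weights:", np.sign(w_real), "output:", np.sign(w_real) @ x)
```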
arXiv Detail & Related papers (2020-03-31T16:47:20Z) - Semi-Implicit Back Propagation [1.5533842336139065]
We propose a semi-implicit back propagation method for neural network training.
The differences on the neurons are propagated in a backward fashion, and the parameters are updated via proximal mappings.
Experiments on both MNIST and CIFAR-10 demonstrate that the proposed algorithm leads to better performance in terms of both loss decreasing and training/validation accuracy.
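
To give a sense of what a proximal-mapping parameter update looks like (a generic NumPy illustration with made-up dimensions and a hypothetical target pre-activation t, not the authors' exact scheme), the sketch below updates one layer's weights by solving the proximal subproblem of a local least-squares model in closed form.

```python
import numpy as np

rng = np.random.default_rng(3)

# One layer with input a and a target pre-activation t (the kind of quantity
# a backward pass would supply from the error on the layer's output neurons).
d_in, d_out = 5, 3
a = rng.standard_normal(d_in)                  # layer input
W = rng.standard_normal((d_out, d_in))         # current weights
t = rng.standard_normal(d_out)                 # hypothetical target pre-activation

eta = 0.5                                      # proximal step size

# Proximal (implicit) update of the weights for the local quadratic model:
#   W_new = argmin_V  0.5*||V a - t||^2 + (1/(2*eta)) * ||V - W||_F^2
# which has the closed form below (a small d_in x d_in linear system).
lhs = np.outer(a, a) + np.eye(d_in) / eta
rhs = np.outer(t, a) + W / eta
W_new = np.linalg.solve(lhs.T, rhs.T).T        # solves W_new @ lhs = rhs

print("old residual:", np.linalg.norm(W @ a - t))
print("new residual:", np.linalg.norm(W_new @ a - t))
```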
arXiv Detail & Related papers (2020-02-10T03:26:09Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The combination of gradient-based training and nonconvexity makes learning sensitive to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.