A Guide to Stochastic Optimisation for Large-Scale Inverse Problems
- URL: http://arxiv.org/abs/2406.06342v2
- Date: Tue, 9 Jul 2024 17:41:56 GMT
- Title: A Guide to Stochastic Optimisation for Large-Scale Inverse Problems
- Authors: Matthias J. Ehrhardt, Zeljko Kereta, Jingwei Liang, Junqi Tang
- Abstract summary: Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data.
We provide a comprehensive account of the state-of-the-art in stochastic optimisation from the viewpoint of inverse problems.
We focus on the challenges for stochastic optimisation that are unique to inverse imaging problems and are not commonly encountered in machine learning.
- Score: 4.926711494319977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data. Handling only a subset of available data in each optimisation step dramatically reduces the per-iteration computational costs, while still ensuring significant progress towards the solution. Driven by the need to solve large-scale optimisation problems as efficiently as possible, the last decade has witnessed an explosion of research in this area. Leveraging the parallels between machine learning and inverse problems has allowed harnessing the power of this research wave for solving inverse problems. In this survey, we provide a comprehensive account of the state-of-the-art in stochastic optimisation from the viewpoint of inverse problems. We present algorithms with diverse modalities of problem randomisation and discuss the roles of variance reduction, acceleration, higher-order methods, and other algorithmic modifications, and compare theoretical results with practical behaviour. We focus on the potential and the challenges for stochastic optimisation that are unique to inverse imaging problems and are not commonly encountered in machine learning. We conclude the survey with illustrative examples from imaging problems to examine the advantages and disadvantages that this new generation of algorithms bring to the field of inverse problems.
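To make the core idea concrete, here is a minimal numpy sketch (our illustration, not code from the survey) of stochastic gradient descent on a toy linear inverse problem: each iteration touches only a random subset of the measurements, so the per-iteration cost does not grow with the total number of measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: recover x from b = A x + noise by minimising
# f(x) = (1/n) * sum_i (a_i^T x - b_i)^2 over the n measurement rows of A.
n, d = 1000, 50
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def sgd(A, b, batch=10, step=0.005, iters=5000):
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        # Each step touches only `batch` random measurements, so the cost per
        # iteration is O(batch * d) rather than the O(n * d) of a full gradient.
        i = rng.choice(n, size=batch, replace=False)
        g = 2.0 * A[i].T @ (A[i] @ x - b[i]) / batch
        x = x - step * g
    return x

x_hat = sgd(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Variance-reduced methods of the kind the survey discusses (e.g. SAGA, SVRG) modify the estimate `g` to cancel the sampling noise that otherwise forces small step sizes and limits the final accuracy.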
Related papers
- Paired Autoencoders for Inverse Problems [3.355436702348694]
We consider the solution of nonlinear inverse problems where the forward problem is a discretization of a partial differential equation.
The main computational bottleneck of typical algorithms is the direct estimation of the data misfit.
We use a paired autoencoder framework as a likelihood-free estimator for inverse problems.
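As a heavily simplified caricature of the paired-autoencoder idea (a linear stand-in with hypothetical toy data, not the authors' architecture): compress parameters and data separately, learn a map between the two latent spaces, and invert by encode-map-decode, with no likelihood or forward-operator evaluation at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: paired samples (x_i, y_i) with y_i = F x_i + noise,
# where the parameters x live on a k-dimensional subspace.
m, d_x, d_y, k = 500, 40, 60, 10
F = rng.standard_normal((d_y, d_x))
B = rng.standard_normal((k, d_x))
X = rng.standard_normal((m, k)) @ B                  # training parameters
Y = X @ F.T + 0.01 * rng.standard_normal((m, d_y))   # training data

def linear_autoencoder(Z, k):
    # Encoder/decoder from the top-k right singular vectors (a linear
    # stand-in for the trained autoencoders in the paper).
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt[:k].T
    return (lambda z: z @ V), (lambda c: c @ V.T)

enc_x, dec_x = linear_autoencoder(X, k)
enc_y, dec_y = linear_autoencoder(Y, k)

# Least-squares map between latent spaces: data latents -> parameter latents.
M, *_ = np.linalg.lstsq(enc_y(Y), enc_x(X), rcond=None)

def invert(y):
    # Likelihood-free inversion: encode the data, map latents, decode parameters.
    return dec_x(enc_y(y) @ M)

x_new = rng.standard_normal(k) @ B
print(np.linalg.norm(invert(x_new @ F.T) - x_new) / np.linalg.norm(x_new))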
arXiv Detail & Related papers (2024-05-21T22:00:34Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
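The equivalence can be seen already on a quadratic inner problem: differentiating an unrolled gradient-descent loop yields a recursion that is itself an iterative solver for the implicit-differentiation linear system. A numpy sketch (shown in forward mode for a single hyperparameter; the setup is our illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner problem: x*(theta) = argmin_x 0.5*||A x - b||^2 + 0.5*theta*||x||^2,
# so the optimality condition is H x* = A^T b with H = A^T A + theta * I.
n, d, theta, eta, T = 30, 10, 0.5, 0.01, 3000
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
H = A.T @ A + theta * np.eye(d)
g = A.T @ b

# Unrolled gradient descent, propagating the derivative J_t = dx_t/dtheta
# alongside the iterate (dH/dtheta = I, hence the extra "+ x" term).
x, J = np.zeros(d), np.zeros(d)
for _ in range(T):
    J = J - eta * (H @ J + x)   # derivative of the update: a Richardson-type
    x = x - eta * (H @ x - g)   # iteration for the linear system H J = -x*

# Implicit differentiation at the solution: dx*/dtheta solves H v = -x*.
x_star = np.linalg.solve(H, g)
v = np.linalg.solve(H, -x_star)
print(np.linalg.norm(J - v))    # ~0: unrolling converges to the implicit derivative
```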
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Stochastic optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
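What makes AUPRC optimization hard is that, unlike a per-sample loss, every term of the empirical objective depends on the entire ranked list, which is what motivates the list-stability analysis. A short numpy sketch of the empirical AUPRC (average precision) makes this coupling visible:

```python
import numpy as np

def average_precision(scores, labels):
    # Empirical AUPRC: average, over the positives, of the precision at each
    # positive's position in the score-sorted list. Every term depends on the
    # whole ranking, so the objective does not decompose over samples.
    order = np.argsort(-scores)
    labels = np.asarray(labels)[order]
    precision_at_k = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return precision_at_k[labels == 1].mean()

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.3])
labels = np.array([1, 0, 1, 1, 0])
print(average_precision(scores, labels))  # (1 + 2/3 + 3/4) / 3 ~= 0.806
```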
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- A Novel Plug-and-Play Approach for Adversarially Robust Generalization [38.72514422694518]
We propose a robust framework that employs adversarially robust training to safeguard the ML models against perturbed testing data.
Our contributions can be seen from both computational and statistical perspectives.
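A minimal numpy sketch of the generic min-max adversarial-training loop the blurb refers to (logistic regression with closed-form worst-case l_inf perturbations; the data and constants are our assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adversarial training: logistic regression on worst-case l_inf
# perturbations of the inputs (illustrative setup only).
n, d, eps, lr = 200, 20, 0.1, 0.5
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) > 0).astype(float)

w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    # Inner maximisation: for a linear model the worst l_inf attack on the
    # logistic loss has the closed form x + eps * sign(p - y) * sign(w).
    X_adv = X + eps * np.sign(p - y)[:, None] * np.sign(w)[None, :]
    p_adv = 1.0 / (1.0 + np.exp(-X_adv @ w))
    # Outer minimisation: gradient step on the loss at the perturbed inputs.
    w -= lr * X_adv.T @ (p_adv - y) / n

print("training accuracy:", ((X @ w > 0).astype(float) == y).mean())
```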
arXiv Detail & Related papers (2022-08-19T17:02:55Z)
- A deep learning method for solving stochastic optimal control problems driven by fully-coupled FBSDEs [1.0703175070560689]
We first transform the problem into a Stackelberg differential game problem (leader-follower problem).
We compute two examples of the investment-consumption problem solved through utility models.
The results of both examples demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-04-12T13:31:19Z)
- Instance-Dependent Confidence and Early Stopping for Reinforcement Learning [99.57168572237421]
Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure.
This research provides guarantees that explain, ex post, the performance differences observed.
A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice.
arXiv Detail & Related papers (2022-01-21T04:25:35Z)
- Consistency analysis of bilevel data-driven learning in inverse problems [1.0705399532413618]
We consider the adaptive learning of the regularization parameter from data by means of bilevel optimization.
We demonstrate how to implement our framework on linear inverse problems.
Online numerical schemes are derived using the gradient descent method.
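For a linear inverse problem with Tikhonov regularization, the inner problem has a closed-form solution and the hypergradient follows by implicit differentiation, so the bilevel scheme reduces to gradient descent on the regularization weight. A minimal numpy sketch under these assumptions (toy data; not the paper's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Bilevel toy problem: learn the Tikhonov weight lam from supervised pairs
# (b_i, x_i). Inner problem (closed form):
#   x(lam) = argmin_x ||A x - b||^2 + lam * ||x||^2 = (A^T A + lam I)^{-1} A^T b.
n, d, m = 40, 20, 30
A = rng.standard_normal((n, d))
X_true = rng.standard_normal((m, d))
B = X_true @ A.T + 0.5 * rng.standard_normal((m, n))

lam = 1.0
for _ in range(500):
    H = A.T @ A + lam * np.eye(d)
    X = np.linalg.solve(H, A.T @ B.T).T   # reconstructions at the current lam
    dX = -np.linalg.solve(H, X.T).T       # implicit diff.: dx/dlam = -H^{-1} x
    # Hypergradient of the outer loss mean_i ||x_i(lam) - x_i||^2.
    grad = 2.0 * np.mean(np.sum((X - X_true) * dX, axis=1))
    lam = max(lam - 0.1 * grad, 1e-8)     # projected gradient descent on lam

print("learned regularization weight:", lam)
```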
arXiv Detail & Related papers (2020-07-06T12:23:29Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
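Total deep variation itself uses a trained multi-scale CNN; as a heavily simplified caricature of the same variational structure, the numpy sketch below swaps the network for a single fixed finite-difference filter with a smooth penalty and reconstructs by gradient descent on ||Ax-b||^2 + alpha*R(x):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-filter caricature of a learned variational regularizer:
# R(x) = sum_i phi((k * x)_i) with smooth phi; TDV replaces the single fixed
# filter below by a trained multi-scale CNN.
d, alpha, step = 100, 0.1, 0.05
A = np.eye(d)[rng.permutation(d)[:60]]       # subsampling forward operator
x_true = np.cumsum(rng.standard_normal(d)) / 5.0
b = A @ x_true + 0.05 * rng.standard_normal(60)

k = np.array([-1.0, 1.0])                    # fixed finite-difference "filter"
phi = lambda t: np.sqrt(t**2 + 1e-3)         # smooth absolute value
dphi = lambda t: t / np.sqrt(t**2 + 1e-3)

x = A.T @ b                                  # zero-filled initialisation
for _ in range(2000):
    r = np.convolve(x, k, mode="valid")      # filter responses k * x
    grad_R = np.convolve(dphi(r), k[::-1], mode="full")  # adjoint of the filter
    x -= step * (2.0 * A.T @ (A @ x - b) + alpha * grad_R)

print("reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```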
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.