Learning convex regularizers satisfying the variational source condition for inverse problems
- URL: http://arxiv.org/abs/2110.12520v1
- Date: Sun, 24 Oct 2021 20:05:59 GMT
- Title: Learning convex regularizers satisfying the variational source condition for inverse problems
- Authors: Subhadip Mukherjee, Carola-Bibiane Schönlieb, and Martin Burger
- Abstract summary: We propose adversarial convex regularization (ACR) to learn data-driven convex regularizers via adversarial training.
We leverage the variational source condition (SC) during training to enforce that the ground-truth images minimize the variational loss corresponding to the learned convex regularizer.
The resulting regularizer (ACR-SC) performs on par with the ACR, but unlike ACR, comes with a quantitative convergence rate estimate.
- Score: 4.2917182054051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational regularization has remained one of the most successful approaches
for reconstruction in imaging inverse problems for several decades. With the
emergence and astonishing success of deep learning in recent years, a
considerable amount of research has gone into data-driven modeling of the
regularizer in the variational setting. Our work extends a recently proposed
method, referred to as adversarial convex regularization (ACR), that seeks to
learn data-driven convex regularizers via adversarial training in an attempt to
combine the power of data with the classical convex regularization theory.
Specifically, we leverage the variational source condition (SC) during training
to enforce that the ground-truth images minimize the variational loss
corresponding to the learned convex regularizer. This is achieved by adding an
appropriate penalty term to the ACR training objective. The resulting
regularizer (abbreviated as ACR-SC) performs on par with the ACR, but unlike
ACR, comes with a quantitative convergence rate estimate.
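To make the SC-penalized training concrete, below is a minimal sketch of one possible ACR-SC training loss in PyTorch. This is not the authors' implementation: the critic-style adversarial term and the Lipschitz gradient penalty follow the general adversarial-regularizer recipe, and the source-condition term is approximated here by penalizing the gradient of the variational loss J(x) = 0.5*||Ax - y||^2 + lam*R(x) at the ground truth, so that the ground-truth image approximately minimizes J for the learned R. The names R, A, x_true, x_noisy, y and the weights lam, mu, gp_weight are all assumptions.

```python
import torch

def acr_sc_loss(R, A, x_true, x_noisy, y, lam=1.0, mu=1.0, gp_weight=10.0):
    # Adversarial (critic-style) term: the convex regularizer R should take
    # small values on ground-truth images and large values on artifact-laden
    # reconstructions x_noisy.
    adv = R(x_true).mean() - R(x_noisy).mean()

    # Gradient penalty encouraging R to be 1-Lipschitz, as in the
    # adversarial-regularizer literature (hypothetical weighting).
    eps = torch.rand(x_true.size(0), 1, 1, 1, device=x_true.device)
    x_mix = (eps * x_true + (1.0 - eps) * x_noisy).detach().requires_grad_(True)
    grad = torch.autograd.grad(R(x_mix).sum(), x_mix, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(dim=1) - 1.0) ** 2).mean()

    # Source-condition surrogate: penalize the gradient of the variational
    # loss J(x) = 0.5 * ||A x - y||^2 + lam * R(x) at the ground truth, so
    # that x_true is (approximately) a minimizer of J.
    x_gt = x_true.detach().requires_grad_(True)
    data_fit = 0.5 * (A(x_gt) - y).flatten(1).pow(2).sum(dim=1).mean()
    J = data_fit + lam * R(x_gt).mean()
    gJ = torch.autograd.grad(J, x_gt, create_graph=True)[0]
    sc_penalty = gJ.flatten(1).pow(2).sum(dim=1).mean()

    return adv + gp_weight * gp + mu * sc_penalty
```

The exact form of the penalty term added to the ACR objective in the paper may differ from this first-order-optimality surrogate; the sketch only illustrates the structure of the combined training loss.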
Related papers
- Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We characterize the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high dimensional data.
arXiv Detail & Related papers (2024-08-08T17:27:29Z) - Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z) - Dynamic Residual Classifier for Class Incremental Learning [4.02487511510606]
With imbalanced sample numbers between old and new classes, learning can be biased.
Existing CIL methods exploit long-tailed (LT) recognition techniques, e.g., adjusted losses and data re-sampling methods.
A novel Dynamic Residual Classifier (DRC) is proposed to handle this challenging scenario.
arXiv Detail & Related papers (2023-08-25T11:07:11Z) - Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z) - Generalized Real-World Super-Resolution through Adversarial Robustness [107.02188934602802]
We present Robust Super-Resolution, a method that leverages the generalization capability of adversarial attacks to tackle real-world SR.
Our novel framework poses a paradigm shift in the development of real-world SR methods.
By using a single robust model, we outperform state-of-the-art specialized methods on real-world benchmarks.
arXiv Detail & Related papers (2021-08-25T22:43:20Z) - Parallelized Reverse Curriculum Generation [62.25453821794469]
In reinforcement learning, sparse rewards make it challenging for an agent to master a task that requires a specific series of actions.
Reverse curriculum generation (RCG) provides a reverse expansion approach that automatically generates a curriculum for the agent to learn.
We propose a parallelized approach that simultaneously trains multiple AC pairs and periodically exchanges their critics.
arXiv Detail & Related papers (2021-08-04T15:58:35Z) - End-to-end reconstruction meets data-driven regularization for inverse problems [2.800608984818919]
We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-06-07T12:05:06Z) - Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems (see the ICNN sketch after this list).
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
arXiv Detail & Related papers (2020-08-06T18:58:35Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Supervised Learning of Sparsity-Promoting Regularizers for Denoising [13.203765985718205]
We present a method for supervised learning of sparsity-promoting regularizers for image denoising.
Our experiments show that the proposed method can learn an operator that outperforms well-known regularizers.
arXiv Detail & Related papers (2020-06-09T21:38:05Z) - Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
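For the "Learned convex regularizers for inverse problems" entry above, here is a minimal sketch of an input-convex neural network (ICNN) of the kind such regularizers build on. This is a generic construction, not the paper's architecture: convexity in the input x follows from using convex, non-decreasing activations and keeping the weights on the hidden path non-negative (here via a projection step after each optimizer update). All layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleICNN(nn.Module):
    """Generic input-convex network: R(x) is convex in x because the
    activations are convex and non-decreasing, and the weights acting on
    hidden states (Wz, out) are kept non-negative via project()."""

    def __init__(self, channels=1, hidden=32, layers=3):
        super().__init__()
        # Skip connections from the input x (weights unconstrained).
        self.Wx = nn.ModuleList(
            nn.Conv2d(channels, hidden, 3, padding=1) for _ in range(layers)
        )
        # Hidden-to-hidden weights (constrained non-negative).
        self.Wz = nn.ModuleList(
            nn.Conv2d(hidden, hidden, 3, padding=1, bias=False)
            for _ in range(layers - 1)
        )
        self.out = nn.Conv2d(hidden, 1, 1, bias=False)

    def forward(self, x):
        # Leaky ReLU is convex and non-decreasing, preserving convexity.
        z = F.leaky_relu(self.Wx[0](x), 0.2)
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = F.leaky_relu(Wx(x) + Wz(z), 0.2)
        return self.out(z).flatten(1).sum(dim=1)  # one scalar per image

    @torch.no_grad()
    def project(self):
        # Call after each optimizer step to keep the network convex in x.
        for Wz in self.Wz:
            Wz.weight.clamp_(min=0.0)
        self.out.weight.clamp_(min=0.0)
```

At reconstruction time, a regularizer of this form would typically be minimized inside the variational loss by (sub)gradient descent, consistent with the sub-gradient algorithm mentioned in that entry; during training, `project()` must be called after every optimizer step.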