Stochastic Primal-Dual Three Operator Splitting Algorithm with Extension to Equivariant Regularization-by-Denoising
- URL: http://arxiv.org/abs/2208.01631v2
- Date: Fri, 06 Dec 2024 15:41:07 GMT
- Title: Stochastic Primal-Dual Three Operator Splitting Algorithm with Extension to Equivariant Regularization-by-Denoising
- Authors: Junqi Tang, Matthias Ehrhardt, Carola-Bibiane Schönlieb
- Abstract summary: We propose a stochastic primal-dual three-operator splitting algorithm (TOS-SPDHG) for solving a class of convex three-composite optimization problems.
We provide theoretical convergence analysis showing an ergodic $O(1/K)$ convergence rate, and demonstrate the effectiveness of our approach in imaging inverse problems.
We also propose TOS-SPDHG-RED and TOS-SPDHG-eRED which utilize the regularization-by-denoising framework to leverage pretrained deep denoising networks as priors.
- Abstract: In this work we propose a stochastic primal-dual three-operator splitting algorithm (TOS-SPDHG) for solving a class of convex three-composite optimization problems. Our proposed scheme is a direct three-operator splitting extension of the SPDHG algorithm [Chambolle et al. 2018]. We provide theoretical convergence analysis showing an ergodic $O(1/K)$ convergence rate, and demonstrate the effectiveness of our approach in imaging inverse problems. Moreover, we further propose TOS-SPDHG-RED and TOS-SPDHG-eRED, which utilize the regularization-by-denoising (RED) framework to leverage pretrained deep denoising networks as priors.
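To make the three-composite structure concrete, the sketch below runs a generic SPDHG-style stochastic primal-dual iteration on a toy problem of the form $\min_x \sum_i h_i(A_i x) + g(x) + f(x)$, with least-squares blocks $h_i$ (dualized and sampled one at a time), an $\ell_1$ term $g$ (handled by its prox), and a smooth ridge term $f$ (handled by its gradient). All names, step sizes, and the treatment of the third term are our own illustrative choices under these assumptions; the exact TOS-SPDHG updates and step-size conditions are given in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy three-composite problem:
#   min_x  sum_i (1/2)||A_i x - b_i||^2  +  lam*||x||_1  +  (mu/2)||x||^2
n, m, n_blocks = 50, 100, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)
A_blocks = np.array_split(A, n_blocks)          # row blocks A_i
b_blocks = np.array_split(b, n_blocks)
lam, mu = 0.01, 0.1

def prox_l1(v, t):
    """Prox of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = max(np.linalg.norm(Ai, 2) for Ai in A_blocks)  # max block operator norm
tau = 0.9 / (n_blocks * L)                         # heuristic primal step
sigma = 0.9 / L                                    # heuristic dual step

x = np.zeros(n)
y = [np.zeros(bi.size) for bi in b_blocks]         # dual variables y_i
z = np.zeros(n)                                    # running sum A^T y
zbar = z.copy()                                    # extrapolated A^T y

for k in range(5000):
    # primal step: gradient of the smooth ridge term, prox of the l1 term
    x = prox_l1(x - tau * (zbar + mu * x), tau * lam)
    # dual step on one randomly sampled data-fit block
    i = rng.integers(n_blocks)
    Ai, bi = A_blocks[i], b_blocks[i]
    y_old = y[i]
    v = y_old + sigma * (Ai @ x)
    y[i] = (v - sigma * bi) / (1.0 + sigma)        # prox of sigma*h_i^*
    d = Ai.T @ (y[i] - y_old)
    z = z + d
    zbar = z + n_blocks * d                        # extrapolation with p_i = 1/n_blocks

print("rel. error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the RED variants, the gradient of a smooth regularizer would instead be taken as the standard RED residual $x - D(x)$ for a pretrained denoiser $D$ (with an equivariant counterpart for TOS-SPDHG-eRED); the sketch above uses a plain ridge gradient purely for self-containedness.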
Related papers
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape, and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
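For context, the deep equilibrium (DEQ) formulation treats the converged reconstruction as a fixed point of the learned update map, with gradients obtained by implicit differentiation; schematically, in our own notation (not necessarily the paper's):

$$z^{\star} = f_{\theta}(z^{\star}, y), \qquad \frac{\partial z^{\star}}{\partial \theta} = \left(I - \left.\frac{\partial f_{\theta}}{\partial z}\right|_{z^{\star}}\right)^{-1} \left.\frac{\partial f_{\theta}}{\partial \theta}\right|_{z^{\star}},$$

where $y$ denotes the measurements and $f_{\theta}$ one step of the derived iterative solver.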
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems [97.64313409741614]
We prove fast mixing and characterize the stationary distribution of the Langevin Algorithm for inverting random weighted DNN generators.
We propose to do posterior sampling in the latent space of a pre-trained generative model.
arXiv Detail & Related papers (2022-06-18T03:47:37Z) - Learned Image Compression with Separate Hyperprior Decoders [19.14246055282486]
We propose to use three hyperprior decoders to separate the decoding process of the mixed parameters in discrete Gaussian mixture likelihoods.
The proposed method achieves on average 3.36% BD-rate reduction compared with state-of-the-art approach.
arXiv Detail & Related papers (2021-10-31T13:01:56Z) - On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms [33.96864594479152]
We analyze the convergence of prior-guided ZO algorithms under a greedy descent framework with various gradient estimators.
We also present a new accelerated random search (ARS) algorithm that incorporates prior information, together with a convergence analysis.
arXiv Detail & Related papers (2021-07-21T14:39:40Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same $O(1/\epsilon^4)$ complexity, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - NOMA in UAV-aided cellular offloading: A machine learning approach [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T17:38:48Z) - Linear Convergent Decentralized Optimization with Compression [50.44269451541387]
Existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms.
Motivated by primal-dual algorithms, this paper proposes LEAD, the first LinEAr convergent Decentralized algorithm with compression.
arXiv Detail & Related papers (2020-07-01T04:35:00Z) - SOAR: Second-Order Adversarial Regularization [29.83835336491924]
Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples.
In this work, we propose a novel regularization approach as an alternative.
Our proposed second-order adversarial regularizer (SOAR) is an upper bound based on the Taylor approximation of the inner-max in the robust optimization objective.
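As a rough illustration (a generic second-order Taylor expansion, not SOAR's exact bound), the inner maximization over perturbations $\delta$ is approximated as

$$\max_{\|\delta\|\le\epsilon} \ell(x+\delta) \;\approx\; \max_{\|\delta\|\le\epsilon} \left[\ell(x) + \delta^{\top}\nabla_x \ell(x) + \tfrac{1}{2}\,\delta^{\top}\nabla_x^{2}\ell(x)\,\delta\right],$$

and the regularizer penalizes an upper bound on this quadratic form.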
arXiv Detail & Related papers (2020-04-04T01:35:07Z) - Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms [21.904012114713428]
We consider the sum of three convex functions, where the first one, F, is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator.
This template problem has many applications, for instance, in image processing and machine learning.
We propose a new primal-dual algorithm, which we call PDDY, for this problem.
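In our notation (the paper may use different symbols), the template problem reads

$$\min_{x}\; F(x) + g(x) + h(Lx),$$

with $F$ smooth, $g$ proximable, and $h$ proximable composed with a linear operator $L$; this is the same three-composite mold targeted by TOS-SPDHG above.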
arXiv Detail & Related papers (2020-04-03T10:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.