Stochastic Primal-Dual Three Operator Splitting Algorithm with Extension to Equivariant Regularization-by-Denoising
- URL: http://arxiv.org/abs/2208.01631v3
- Date: Sat, 15 Mar 2025 12:42:16 GMT
- Title: Stochastic Primal-Dual Three Operator Splitting Algorithm with Extension to Equivariant Regularization-by-Denoising
- Authors: Junqi Tang, Matthias Ehrhardt, Carola-Bibiane Schönlieb
- Abstract summary: We propose a stochastic primal-dual three-operator splitting algorithm (TOS-SPDHG) for solving a class of convex three-composite optimization problems. We provide theoretical convergence analysis showing an ergodic $O(1/K)$ convergence rate, and demonstrate the effectiveness of our approach on imaging inverse problems. We also propose TOS-SPDHG-RED and TOS-SPDHG-eRED, which utilize the regularization-by-denoising framework to leverage pretrained deep denoising networks as priors.
- Score: 12.187438033643797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we propose a stochastic primal-dual three-operator splitting algorithm (TOS-SPDHG) for solving a class of convex three-composite optimization problems. Our proposed scheme is a direct three-operator splitting extension of the SPDHG algorithm [Chambolle et al. 2018]. We provide theoretical convergence analysis showing an ergodic $O(1/K)$ convergence rate, and demonstrate the effectiveness of our approach on imaging inverse problems. Moreover, we propose TOS-SPDHG-RED and TOS-SPDHG-eRED, which utilize the regularization-by-denoising (RED) framework to leverage pretrained deep denoising networks as priors.
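To make the setting concrete, below is a minimal Python sketch of an SPDHG-style loop extended with a third operator, for a three-composite problem of the form $\min_x \sum_i f_i(A_i x) + g(x) + h(x)$ with $g$ proximable and $h$ differentiable. The names (`prox_fconj`, `prox_g`, `grad_h`, step sizes `tau`, `sigmas`) and the update rule are illustrative assumptions following the generic SPDHG pattern, not the paper's exact TOS-SPDHG scheme.

```python
import numpy as np

def tos_spdhg_sketch(A_blocks, prox_fconj, prox_g, grad_h, x0,
                     tau, sigmas, n_iter, rng=None):
    """Illustrative stochastic primal-dual three-operator splitting loop
    for  min_x  sum_i f_i(A_i x) + g(x) + h(x)  (generic SPDHG pattern
    with an extra gradient step on h; not the paper's exact update)."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(A_blocks)
    x = x0.copy()
    y = [np.zeros(A.shape[0]) for A in A_blocks]      # dual variables y_i
    z = sum(A.T @ yi for A, yi in zip(A_blocks, y))   # z = sum_i A_i^T y_i
    zbar = z.copy()                                   # extrapolated aggregate
    for _ in range(n_iter):
        # Primal step: move against the dual aggregate and grad h,
        # then apply the prox of g.
        x = prox_g(x - tau * (zbar + grad_h(x)), tau)
        # Dual step on a single uniformly sampled block i.
        i = rng.integers(n)
        y_new = prox_fconj[i](y[i] + sigmas[i] * (A_blocks[i] @ x), sigmas[i])
        delta = A_blocks[i].T @ (y_new - y[i])
        y[i] = y_new
        z = z + delta
        zbar = z + n * delta  # SPDHG-style extrapolation for uniform sampling
    return x
```

For the RED variants, a natural (assumed) choice is `grad_h = lambda x: lam * (x - D(x))` with `D` a pretrained denoiser and `lam` a regularization weight, following the regularization-by-denoising construction; the equivariant variant (eRED) would additionally symmetrize the denoiser by applying it after a randomly sampled transformation and mapping the result back.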
Related papers
- Regularized Projection Matrix Approximation with Applications to Community Detection [1.3761665705201904]
This paper introduces a regularized projection matrix approximation framework designed to recover cluster information from the affinity matrix.
We investigate three distinct penalty functions, each specifically tailored to address bounded, positive, and sparse scenarios.
Numerical experiments conducted on both synthetic and real-world datasets reveal that our regularized projection matrix approximation approach significantly outperforms state-of-the-art methods in clustering performance.
arXiv Detail & Related papers (2024-05-26T15:18:22Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Stochastic Variable Metric Proximal Gradient with variance reduction for
non-convex composite optimization [0.0]
3P-SP-IDER is a novel algorithm designed to solve finite-sum non-convex composite optimization problems.
We show that 3P-SP-IDER extends some preconditioned algorithms and some incremental Expectation Maximization algorithms to the case where the forward operator cannot be computed in closed form.
We also discuss the role of some design parameters of 3P-SP-IDER in an application to inference in a regression model with random effects.
arXiv Detail & Related papers (2023-01-02T12:49:48Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - A Variational Approach for Joint Image Recovery and Feature Extraction
Based on Spatially-Varying Generalised Gaussian Models [13.952521992627847]
The joint problem of reconstruction / feature extraction optimisation is a challenging task in image processing.
It consists of performing, in a joint manner, the restoration of an image and the extraction of its features.
arXiv Detail & Related papers (2022-09-03T09:10:23Z) - Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for
Inverse Problems [97.64313409741614]
We prove fast mixing and characterize the stationary distribution of the Langevin Algorithm for inverting random weighted DNN generators.
We propose to do posterior sampling in the latent space of a pre-trained generative model.
arXiv Detail & Related papers (2022-06-18T03:47:37Z) - On the Complexity of a Practical Primal-Dual Coordinate Method [63.899427212054995]
We prove complexity bounds for the primal-dual algorithm with random extrapolation and coordinate descent (PURE-CD).
It has been shown to obtain good practical performance for solving bilinear min-max problems.
arXiv Detail & Related papers (2022-01-19T16:14:27Z) - Learned Image Compression with Separate Hyperprior Decoders [19.14246055282486]
We propose to use three hyperprior decoders to separate the decoding process of the mixed parameters in discrete Gaussian mixture likelihoods.
The proposed method achieves an average 3.36% BD-rate reduction compared with the state-of-the-art approach.
arXiv Detail & Related papers (2021-10-31T13:01:56Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms [33.96864594479152]
We analyze the convergence of prior-guided ZO algorithms under a greedy descent framework with various gradient estimators.
We also present a new accelerated random search (ARS) algorithm that incorporates prior information, together with a convergence analysis.
arXiv Detail & Related papers (2021-07-21T14:39:40Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - NOMA in UAV-aided cellular offloading: A machine learning approach [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T17:38:48Z) - On the implementation of a global optimization method for mixed-variable
problems [0.30458514384586394]
The algorithm is based on the radial basis function method of Gutmann and the metric stochastic response surface method of Regis and Shoemaker.
We propose several modifications aimed at generalizing and improving these two algorithms.
arXiv Detail & Related papers (2020-09-04T13:36:56Z) - Linear Convergent Decentralized Optimization with Compression [50.44269451541387]
Existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms.
Motivated by primal-dual algorithms, this paper proposes the first LinEAr convergent Decentralized algorithm with compression, LEAD.
arXiv Detail & Related papers (2020-07-01T04:35:00Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - SOAR: Second-Order Adversarial Regularization [29.83835336491924]
Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples.
In this work, we propose a novel regularization approach as an alternative.
Our proposed second-order adversarial regularizer (SOAR) is an upper bound based on the Taylor approximation of the inner-max in the robust optimization objective.
arXiv Detail & Related papers (2020-04-04T01:35:07Z) - Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms [21.904012114713428]
We consider the sum of three convex functions, where the first one, F, is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator; the template is written out after this list.
This template problem has many applications, for instance, in image processing and machine learning.
We propose a new primal-dual algorithm, which we call PDDY, for this problem.
arXiv Detail & Related papers (2020-04-03T10:48:01Z)
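For orientation, the three-term template targeted by PDDY, and addressed by TOS-SPDHG above in stochastic primal-dual form, can be written as follows; the symbols $F$, $G$, $H$, $K$ are the generic primal-dual splitting notation, assumed here rather than copied from either paper.

```latex
% Three-composite template (generic notation, assumed):
%   F convex with Lipschitz gradient, G and H convex and proximable,
%   K a bounded linear operator.
\min_{x} \; F(x) + G(x) + H(Kx)
```

Splitting methods of this family touch each term through a different elementary operation: a gradient step on $F$, a prox step on $G$, and a prox step on the convex conjugate $H^*$ in the dual, coupled through $K$.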
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.