External Division of Two Bregman Proximity Operators for Poisson Inverse Problems
- URL: http://arxiv.org/abs/2602.11482v1
- Date: Thu, 12 Feb 2026 01:50:05 GMT
- Title: External Division of Two Bregman Proximity Operators for Poisson Inverse Problems
- Authors: Kazuki Haishima, Kyohei Suzuki, Konstantinos Slavakis,
- Abstract summary: An operator defined via the external division of two Bregman proximity operators is introduced to promote sparse solutions. This operator is embedded into the already established NoLips algorithm, replacing the standard Bregman proximity operator in a plug-and-play manner. Numerical tests show that the proposed method exhibits more stable convergence behavior than conventional Kullback-Leibler (KL)-based approaches.
- Score: 6.591467426386097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel method for recovering sparse vectors from linear models corrupted by Poisson noise. The contribution is twofold. First, an operator defined via the external division of two Bregman proximity operators is introduced to promote sparse solutions while mitigating the estimation bias induced by classical $\ell_1$-norm regularization. This operator is then embedded into the already established NoLips algorithm, replacing the standard Bregman proximity operator in a plug-and-play manner. Second, the geometric structure of the proposed external-division operator is elucidated through two complementary reformulations, which provide clear interpretations in terms of the primal and dual spaces of the Poisson inverse problem. Numerical tests show that the proposed method exhibits more stable convergence behavior than conventional Kullback-Leibler (KL)-based approaches and achieves significantly superior performance on synthetic data and an image restoration problem.
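The abstract does not reproduce the operator's closed form, but the debiasing idea can be illustrated with a Euclidean analogue. The sketch below is hypothetical: the paper's operators are Bregman proximity operators induced by the Burg entropy, whereas this example externally divides two ordinary soft-thresholding operators. Large entries then pass through unshrunk, removing the constant $\ell_1$ bias, while small entries are still zeroed out.

```python
import numpy as np

def soft(x, lam):
    """Euclidean proximity operator of lam * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ext_div_prox(x, lam, theta=1.0):
    """External division of two soft-thresholding operators (illustrative only).

    Returns (1 + theta) * soft(x, lam) - theta * soft(x, mu) with
    mu = (1 + theta) * lam / theta: the point dividing the segment between
    the two prox outputs externally. Entries with |x| >= mu pass through
    unchanged, so the shrinkage bias of plain l1 regularization disappears.
    """
    mu = (1.0 + theta) * lam / theta
    return (1.0 + theta) * soft(x, lam) - theta * soft(x, mu)

x = np.array([-3.0, -0.4, 0.1, 0.8, 2.5])
print(soft(x, 0.5))          # [-2.5  0.   0.   0.3  2. ]  -- large entries biased
print(ext_div_prox(x, 0.5))  # [-3.   0.   0.   0.6  2.5]  -- bias removed
```

With `theta = 1` this Euclidean version coincides with classical firm thresholding, the textbook example of a debiased sparsity operator.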
Related papers
- Regularized Online RLHF with Generalized Bilinear Preferences [68.44113000390544]
We consider the problem of contextual online RLHF with general preferences. We adopt the Generalized Bilinear Preference Model to capture preferences via low-rank, skew-symmetric matrices. We prove that the dual gap of the greedy policy is bounded by the square of the estimation error.
arXiv Detail & Related papers (2026-02-26T15:27:53Z) - Leveraging Operator Learning to Accelerate Convergence of the Preconditioned Conjugate Gradient Method [1.8434042562191815]
We propose a new deflation strategy to accelerate the convergence of the preconditioned conjugate gradient (PCG) method. We generate the deflation subspaces using operator learning, specifically the Deep Operator Network (DeepONet).
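The summary gives no pseudocode; the following is a minimal sketch of deflated PCG under one standard construction (Saad-style projected search directions), where the deflation basis `W` is simply an input: in the paper it would come from a trained DeepONet, which is omitted here, and the Jacobi preconditioner is an arbitrary stand-in.

```python
import numpy as np

def deflated_pcg(A, b, W, tol=1e-8, max_iter=200):
    """Deflated PCG for SPD A: search directions are kept A-orthogonal to the
    columns of the deflation basis W (here arbitrary; the paper learns it)."""
    AW = A @ W
    WtAW = W.T @ AW
    proj = lambda v: v - W @ np.linalg.solve(WtAW, AW.T @ v)
    Minv = 1.0 / np.diag(A)                    # Jacobi preconditioner

    x = W @ np.linalg.solve(WtAW, W.T @ b)     # coarse solve so W^T r0 = 0
    r = b - A @ x
    z = Minv * r
    p = proj(z)
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = proj(z) + (rz_new / rz) * p        # project each new direction
        rz = rz_new
    return x

# Toy run: 1-D Laplacian with a random (untrained) deflation basis.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(0)
x = deflated_pcg(A, rng.standard_normal(n), rng.standard_normal((n, 4)))
```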
arXiv Detail & Related papers (2025-07-31T18:53:23Z) - Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
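As a toy illustration of that nonexpansive-operator argument (not the paper's training scheme; the map and the averaging weight are assumptions), the linearly interpolated iteration below converges on a rotation map for which the plain fixed-point iteration circles forever.

```python
import numpy as np

def km_iteration(T, x0, alpha=0.5, iters=200):
    """Krasnosel'skii-Mann averaging: x <- (1 - alpha) * x + alpha * T(x).

    For a nonexpansive T with a fixed point, interpolating with alpha in
    (0, 1) converges even when the plain iteration x <- T(x) oscillates.
    """
    x = x0
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# A 90-degree rotation is nonexpansive with fixed point 0; plain iteration
# circles forever, while the averaged iteration spirals in to [0, 0].
R = np.array([[0.0, -1.0], [1.0, 0.0]])
print(km_iteration(lambda x: R @ x, np.array([1.0, 1.0])))
```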
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Robust Low-Rank Matrix Completion via a New Sparsity-Inducing
Regularizer [30.920908325825668]
This paper presents a novel loss function referred to as hybrid ordinary-Welsch (HOW) and a new sparsity-inducing regularizer for robust low-rank matrix completion.
arXiv Detail & Related papers (2023-10-07T09:47:55Z) - Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse
Problems [8.673558396669806]
Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems.
We propose two PnP algorithms based on the Bregman Score Denoiser for Poisson inverse problems.
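For context, the Bregman (NoLips-style) gradient step underlying this line of work has a closed form under the Burg entropy kernel; the sketch below is a generic version of that step, with the plug-and-play denoiser left as a placeholder rather than the paper's specific Bregman Score Denoiser.

```python
import numpy as np

def bregman_gradient_step(x, A, b, t):
    """One Bregman (mirror) gradient step for the Poisson data term
    f(x) = sum((A x) - b * log(A x)), relative to the Burg entropy
    h(x) = -sum(log(x)). Since grad h(x) = -1/x, solving
    grad h(x+) = grad h(x) - t * grad f(x) gives the closed form below.
    """
    g = A.T @ (1.0 - b / (A @ x))     # grad f(x)
    return x / (1.0 + t * x * g)      # elementwise; stays positive for small t

# A PnP iteration would then apply a denoiser D in place of a proximity step:
# x = D(bregman_gradient_step(x, A, b, t))
```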
arXiv Detail & Related papers (2023-06-06T07:36:47Z) - Extragradient-Type Methods with $\mathcal{O} (1/k)$ Last-Iterate
Convergence Rates for Co-Hypomonotone Inclusions [8.0153031008486]
We develop two "Nesterov's accelerated" variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion.
Our results can be viewed as alternatives of a recent class of Halpern-type methods for root-finding problems.
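The accelerated variants are not spelled out in the summary, but the base extragradient method they modify is short enough to sketch; the bilinear test problem, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def extragradient(F, x0, step=0.1, iters=2000):
    """Extragradient for finding x with F(x) = 0.

    Half step: y = x - step * F(x); then a full step using the operator at
    the extrapolated point y. Evaluating F at y is what handles rotational
    vector fields on which the plain forward step diverges.
    """
    x = x0
    for _ in range(iters):
        y = x - step * F(x)
        x = x - step * F(y)
    return x

# Bilinear saddle problem min_u max_v u*v, i.e. F(u, v) = (v, -u);
# the unique solution is (0, 0) and the iterates spiral toward it.
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, np.array([1.0, 1.0])))
```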
arXiv Detail & Related papers (2023-02-08T14:47:34Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Learning Dynamical Systems via Koopman Operator Regression in
Reproducing Kernel Hilbert Spaces [52.35063796758121]
We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system.
We link the risk with the estimation of the spectral decomposition of the Koopman operator.
Our results suggest RRR might be beneficial over other widely used estimators.
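As a sketch of the reduced rank regression (RRR) estimator the summary refers to, in the simplest linear-feature setting (the snapshot data and rank are illustrative assumptions): fit the least-squares Koopman matrix and project it onto the leading response directions.

```python
import numpy as np

def rrr_koopman(X, Y, rank):
    """Reduced rank regression estimate of K in Y ~ X K (rows are snapshots).

    Classical RRR solution: compute the ordinary least-squares fit, then
    project it onto the top right-singular directions of the fitted values.
    """
    B = np.linalg.lstsq(X, Y, rcond=None)[0]   # OLS estimate
    _, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
    V = Vt[:rank].T
    return B @ V @ V.T                         # rank-constrained estimate

# Snapshot pairs from a noisy linear system x_{t+1} = x_t K_true.
rng = np.random.default_rng(0)
K_true = np.diag([0.9, 0.5, 0.1])
X = rng.standard_normal((200, 3))
Y = X @ K_true + 0.01 * rng.standard_normal((200, 3))
K_hat = rrr_koopman(X, Y, rank=2)
```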
arXiv Detail & Related papers (2022-05-27T14:57:48Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
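A minimal sketch of the object being studied, not the paper's estimator: stochastic approximation of the fixed point of a contractive map, with Polyak-Ruppert averaging as the simplest variance-reduction device (the step-size schedule and toy operator are assumptions).

```python
import numpy as np

def averaged_stochastic_fp(T_noisy, x0, iters=5000):
    """Estimate x* = T(x*) from noisy evaluations of a contractive T.

    Runs x <- (1 - eta_k) * x + eta_k * T_noisy(x) and returns the running
    average of the iterates (Polyak-Ruppert), which damps the noise.
    """
    x = x0
    avg = np.zeros_like(x0)
    for k in range(1, iters + 1):
        eta = 1.0 / np.sqrt(k)
        x = (1 - eta) * x + eta * T_noisy(x)
        avg += (x - avg) / k
    return avg

# Contractive affine map T(x) = 0.5 * x + 1 with fixed point x* = 2,
# observed through additive Gaussian noise.
rng = np.random.default_rng(0)
T_noisy = lambda x: 0.5 * x + 1.0 + 0.1 * rng.standard_normal(x.shape)
print(averaged_stochastic_fp(T_noisy, np.zeros(1)))  # roughly [2.0]
```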
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Barycentric-alignment and reconstruction loss minimization for domain
generalization [30.459247038765568]
This paper advances the theory and practice of Domain Generalization (DG) in machine learning.
We propose a novel DG algorithm named Wasserstein Barycenter Auto-Encoder (WBAE) that simultaneously minimizes the classification loss, the barycenter loss, and the reconstruction loss.
Numerical results demonstrate that the proposed method outperforms current state-of-the-art DG algorithms on several datasets.
arXiv Detail & Related papers (2021-09-04T16:57:36Z) - Optimal transport with $f$-divergence regularization and generalized
Sinkhorn algorithm [0.0]
Entropic regularization provides a generalization of the original optimal transport problem.
Replacing the Kullback-Leibler divergence with a general $f$-divergence leads to a natural generalization.
We propose a practical algorithm for computing the regularized optimal transport cost and its gradient.
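The $f$-divergence updates are not reproduced in the summary; the sketch below shows only the classical Kullback-Leibler special case that the generalized algorithm reduces to (marginals, cost matrix, and regularization strength are illustrative).

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropy (KL)-regularized optimal transport via Sinkhorn scaling.

    Alternately rescales the Gibbs kernel K = exp(-C / eps) so the coupling
    P = diag(u) K diag(v) matches the marginals a and b.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)             # match column marginals
        u = a / (K @ v)               # match row marginals
    P = u[:, None] * K * v[None, :]   # transport plan
    return (P * C).sum(), P           # cost of the plan, and the plan

a = np.full(3, 1.0 / 3)
b = np.full(3, 1.0 / 3)
C = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))
cost, P = sinkhorn(a, b, C)
```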
arXiv Detail & Related papers (2021-05-29T16:37:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.