You KAN Do It in a Single Shot: Plug-and-Play Methods with Single-Instance Priors
- URL: http://arxiv.org/abs/2412.06204v1
- Date: Mon, 09 Dec 2024 04:55:18 GMT
- Title: You KAN Do It in a Single Shot: Plug-and-Play Methods with Single-Instance Priors
- Authors: Yanqi Cheng, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero
- Abstract summary: We introduce KAN-PnP, an optimisation framework that incorporates Kolmogorov-Arnold Networks (KANs) as denoisers.
KAN-PnP is specifically designed to solve inverse problems with single-instance priors, where only a single noisy observation is available.
- Score: 10.726369475010818
- License:
- Abstract: The use of Plug-and-Play (PnP) methods has become a central approach for solving inverse problems, with denoisers serving as regularising priors that guide optimisation towards a clean solution. In this work, we introduce KAN-PnP, an optimisation framework that incorporates Kolmogorov-Arnold Networks (KANs) as denoisers within the Plug-and-Play (PnP) paradigm. KAN-PnP is specifically designed to solve inverse problems with single-instance priors, where only a single noisy observation is available, eliminating the need for large datasets typically required by traditional denoising methods. We show that KANs, based on the Kolmogorov-Arnold representation theorem, serve effectively as priors in such settings, providing a robust approach to denoising. We prove that the KAN denoiser is Lipschitz continuous, ensuring stability and convergence in optimisation algorithms like PnP-ADMM, even in the context of single-shot learning. Additionally, we provide theoretical guarantees for KAN-PnP, demonstrating its convergence under key conditions: the convexity of the data fidelity term, Lipschitz continuity of the denoiser, and boundedness of the regularisation functional. These conditions are crucial for stable and reliable optimisation. Our experimental results show, on super-resolution and joint optimisation, that KAN-PnP outperforms existing methods, delivering superior performance in single-shot learning with minimal data. The method exhibits strong convergence properties, achieving high accuracy with fewer iterations.
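Below is a minimal, illustrative sketch of the PnP-ADMM iteration the abstract refers to, with the denoiser kept as an abstract callable. The forward operator, step sizes, and the crude inner solver are placeholder assumptions; the actual KAN denoiser and its single-shot training are described in the paper and not reproduced here.

```python
import numpy as np

def pnp_admm(y, forward_op, forward_op_T, denoiser, rho=1.0, n_iters=50):
    """Generic PnP-ADMM sketch: data-fidelity proximal step, denoising step,
    dual update.  `denoiser` stands in for the (single-shot trained) KAN
    denoiser; any Lipschitz-continuous denoiser can be plugged in."""
    x = forward_op_T(y)            # crude initialisation from the observation
    z = x.copy()
    u = np.zeros_like(x)

    for _ in range(n_iters):
        # x-step: proximal map of the data-fidelity term 0.5*||A x - y||^2,
        # approximated here by a few gradient-descent steps for simplicity
        for _ in range(10):
            grad = forward_op_T(forward_op(x) - y) + rho * (x - z + u)
            x = x - 0.1 * grad
        # z-step: the learned denoiser replaces the proximal map of the prior
        z = denoiser(x + u)
        # dual update
        u = u + x - z
    return x
```

In the paper's setting the `denoiser` would be a Kolmogorov-Arnold Network fitted to the single noisy observation; the same z-step accepts any Lipschitz-continuous denoiser.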
Related papers
- An Efficient Difference-of-Convex Solver for Privacy Funnel [3.069335774032178]
We propose an efficient solver for the privacy funnel (PF) method.
The proposed DC separation results in a closed-form update equation.
We evaluate the proposed solver on the MNIST and Fashion-MNIST datasets.
arXiv Detail & Related papers (2024-03-02T01:05:25Z) - Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while requiring no exemplar buffer and only 1.02x the base model size.
arXiv Detail & Related papers (2024-01-17T09:01:29Z) - Single-Shot Plug-and-Play Methods for Inverse Problems [24.48841512811108]
Plug-and-Play priors in inverse problems have become increasingly prominent in recent years.
Existing models predominantly rely on pre-trained denoisers using large datasets.
In this work, we introduce Single-Shot PnP methods, shifting the focus to solving inverse problems with minimal data.
arXiv Detail & Related papers (2023-11-22T20:31:33Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
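As a rough sketch of the idea (not the paper's exact scheme), a lookahead-style linear interpolation step runs a few fast inner updates and then averages the result with the starting point; the inner optimiser, `k`, and `alpha` below are illustrative assumptions.

```python
import numpy as np

def lookahead_step(params, inner_update, k=5, alpha=0.5):
    """One outer step of linear interpolation: run k fast inner updates,
    then interpolate back towards the starting point.  When the inner map
    is nonexpansive, this averaged (Krasnoselskii-Mann style) iteration is
    more stable than the raw inner updates."""
    fast = params.copy()
    for _ in range(k):
        fast = inner_update(fast)   # e.g. a gradient or extragradient step
    return (1 - alpha) * params + alpha * fast
```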
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - A Corrected Expected Improvement Acquisition Function Under Noisy Observations [22.63212972670109]
Sequential maximization of expected improvement (EI) is one of the most widely used policies in Bayesian optimization.
The uncertainty associated with the incumbent solution is often neglected in many analytic EI-type methods.
We propose a modification of EI that corrects its closed-form expression by incorporating the covariance information provided by the Gaussian Process (GP) model.
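For reference, a sketch of the standard closed-form EI under a Gaussian process posterior, which the cited work corrects to account for the uncertainty of the noisy incumbent; the corrected expression itself is given in the paper and is not reproduced here, and `mu`, `sigma`, `f_best` are assumed posterior quantities.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """Standard analytic EI for minimisation under a Gaussian posterior with
    mean `mu` and standard deviation `sigma`; `f_best` is the (assumed
    noiseless) incumbent value that the cited paper argues should instead be
    treated as uncertain."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)
```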
arXiv Detail & Related papers (2023-10-08T13:50:39Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
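A schematic of the fixed-point computation underlying Deep Equilibrium models, with the derived iterative solver abstracted as a generic `solver_step`; the tolerance, iteration cap, and update operator below are illustrative assumptions rather than the paper's exact solver.

```python
import numpy as np

def deep_equilibrium_solve(x0, solver_step, tol=1e-5, max_iters=200):
    """Run the iterative map to an (approximate) fixed point x* = solver_step(x*),
    as in the Deep Equilibrium framework.  During training, gradients would be
    obtained by implicit differentiation at x*; only the forward pass is
    sketched here."""
    x = x0
    for _ in range(max_iters):
        x_next = solver_step(x)
        if np.linalg.norm(x_next - x) <= tol * (np.linalg.norm(x) + 1e-12):
            return x_next
        x = x_next
    return x
```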
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Provably Convergent Plug-and-Play Quasi-Newton Methods [5.9974035827998655]
We propose an efficient method to combine fidelity terms and deep denoisers.
We show that the fixed points of the proposed quasi-Newton algorithm are critical points of a weakly convex function.
Experiments on image deblurring and super-resolution demonstrate faster convergence compared to other provably convergent PnP methods.
arXiv Detail & Related papers (2023-03-09T20:09:15Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, and describe variations of the method that yield favorable convergence.
We prove that when augmented with iteration averaging, SEG provably converges to the Nash equilibrium, and that this convergence is provably accelerated by incorporating a scheduled restarting procedure.
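A toy sketch, under illustrative assumptions about the noise model, step size, and restart schedule, of stochastic ExtraGradient with iterate averaging and scheduled restarts on the bilinear game min_x max_y x^T A y:

```python
import numpy as np

def seg_restarted_averaging(A, x, y, eta=0.05, noise=0.01,
                            n_iters=2000, restart_every=200, seed=0):
    """Stochastic ExtraGradient on the bilinear game min_x max_y x^T A y,
    with running iterate averaging; at each scheduled restart the iterate is
    reset to the current average and the average is cleared."""
    rng = np.random.default_rng(seed)
    x_avg, y_avg, count = np.zeros_like(x), np.zeros_like(y), 0

    for t in range(1, n_iters + 1):
        gx = A @ y + noise * rng.standard_normal(x.shape)      # noisy grad wrt x
        gy = -A.T @ x + noise * rng.standard_normal(y.shape)   # noisy grad wrt y
        x_half, y_half = x - eta * gx, y - eta * gy             # extrapolation step
        gx = A @ y_half + noise * rng.standard_normal(x.shape)
        gy = -A.T @ x_half + noise * rng.standard_normal(y.shape)
        x, y = x - eta * gx, y - eta * gy                       # update step

        x_avg, y_avg, count = x_avg + x, y_avg + y, count + 1   # running sums
        if t % restart_every == 0:                              # scheduled restart
            x, y = x_avg / count, y_avg / count
            x_avg, y_avg, count = np.zeros_like(x), np.zeros_like(y), 0
    return x, y
```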
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
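A minimal sketch of the gradient-clipping step such methods build on; the specific stepsize rules that give the high-probability guarantees are stated in the paper and are not reproduced here, and `stoch_grad`, `step_size`, `clip_level` are illustrative placeholders.

```python
import numpy as np

def clipped_sgd_step(x, stoch_grad, step_size, clip_level):
    """One SGD step with gradient clipping: the stochastic (sub)gradient is
    rescaled so its norm never exceeds `clip_level`, which tames heavy-tailed
    noise before the usual update x <- x - step_size * g."""
    g = stoch_grad(x)
    g_norm = np.linalg.norm(g)
    if g_norm > clip_level:
        g = g * (clip_level / g_norm)
    return x - step_size * g
```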
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Fixed-Point and Objective Convergence of Plug-and-Play Algorithms [25.65350839936094]
A standard model for image reconstruction involves the minimization of a data-fidelity term along with a regularizer.
In this paper, we establish both forms of convergence for a special class of linear denoisers.
We work with a special inner product (and norm) derived from the linear denoiser.
arXiv Detail & Related papers (2021-04-21T04:25:17Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)