Frequency-Driven Inverse Kernel Prediction for Single Image Defocus Deblurring
- URL: http://arxiv.org/abs/2508.12736v1
- Date: Mon, 18 Aug 2025 09:01:13 GMT
- Title: Frequency-Driven Inverse Kernel Prediction for Single Image Defocus Deblurring
- Authors: Ying Zhang, Xiongxin Tang, Chongyi Li, Qiao Chen, Yuquan Wu,
- Abstract summary: Single image defocus deblurring aims to recover an all-in-focus image from a defocused counterpart. Most existing methods rely on spatial features for kernel estimation, but their performance degrades in severely blurry regions. We propose a Frequency-Driven Inverse Kernel Prediction network (FDIKP) that incorporates frequency-domain representations to enhance structural identifiability in kernel modeling.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image defocus deblurring aims to recover an all-in-focus image from a defocused counterpart, where accurately modeling spatially varying blur kernels remains a key challenge. Most existing methods rely on spatial features for kernel estimation, but their performance degrades in severely blurry regions where local high-frequency details are missing. To address this, we propose a Frequency-Driven Inverse Kernel Prediction network (FDIKP) that incorporates frequency-domain representations to enhance structural identifiability in kernel modeling. Given the superior discriminative capability of the frequency domain for blur modeling, we design a Dual-Branch Inverse Kernel Prediction (DIKP) strategy that improves the accuracy of kernel estimation while maintaining stability. Moreover, considering the limited number of predicted inverse kernels, we introduce a Position Adaptive Convolution (PAC) to enhance the adaptability of the deconvolution process. Finally, we propose a Dual-Domain Scale Recurrent Module (DSRM) to fuse deconvolution results and progressively improve deblurring quality from coarse to fine. Extensive experiments demonstrate that our method outperforms existing approaches. Code will be made publicly available.
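The abstract's core premise, that convolution is easier to invert in the frequency domain, can be illustrated with classical Wiener deconvolution: after an FFT, spatial convolution becomes per-frequency multiplication, so a regularized inverse kernel is a per-frequency division. The NumPy sketch below demonstrates that principle only; it is not the proposed FDIKP network, and the helper names and the `nsr` regularizer are our own.

```python
import numpy as np

def kernel_to_otf(kernel, shape):
    """Pad a small blur kernel to the image size and centre it at the
    origin, so its FFT (the optical transfer function) lines up with
    the image FFT under circular-convolution assumptions."""
    kh, kw = kernel.shape
    k = np.zeros(shape)
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(k)

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Frequency-domain inverse filtering (Wiener deconvolution).

    The noise-to-signal ratio `nsr` keeps frequencies where the kernel
    response is near zero from amplifying noise."""
    K = kernel_to_otf(kernel, blurred.shape)
    B = np.fft.fft2(blurred)
    X = B * np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# Toy check: circularly blur a random image with a smoothing kernel, then invert.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
w = np.array([1.0, 3.0, 5.0, 3.0, 1.0])
kernel = np.outer(w, w)
kernel /= kernel.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_to_otf(kernel, img.shape)))
restored = wiener_deconvolve(blurred, kernel, nsr=1e-9)
print(np.abs(restored - img).mean())  # near zero on this noiseless toy
```

In practice `nsr` must be tuned to the actual noise level, and a single global kernel cannot model the spatially varying defocus blur that FDIKP targets; that limitation is precisely what motivates predicting multiple inverse kernels and adapting them per position.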
Related papers
- FAST: Topology-Aware Frequency-Domain Distribution Matching for Coreset Selection [19.148841575715746]
Coreset selection compresses datasets into compact, representative subsets, reducing the energy and computational burden of training deep neural networks. We propose FAST, the first DNN-free distribution-matching coreset selection framework. FAST significantly outperforms state-of-the-art coreset selection methods across all evaluated benchmarks, achieving an average accuracy gain of 9.12%.
arXiv Detail & Related papers (2025-11-22T09:24:57Z)
- Efficient Dual-domain Image Dehazing with Haze Prior Perception [17.18810808188725]
Transformer-based models exhibit strong global modeling capabilities in single-image dehazing, but their high computational cost limits real-time applicability. We propose the Dark Channel Guided Frequency-aware Dehazing Network (DGFDNet), a novel dual-domain framework that performs physically guided degradation alignment. Experiments on four benchmark haze datasets demonstrate that DGFDNet achieves state-of-the-art performance with superior robustness and real-time efficiency.
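The dark channel prior that gives DGFDNet its name is not defined in this summary; for readers unfamiliar with it, a minimal NumPy version is sketched below. This is a plain implementation of the classic He et al. prior, not DGFDNet's guidance module, and the `patch` size is an assumption.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over colour channels,
    followed by a minimum filter over a local patch. Haze-free outdoor
    patches tend to contain some near-zero intensity, so large
    dark-channel values are a cue for haze density."""
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)               # channel-wise minimum per pixel
    r = patch // 2
    padded = np.pad(min_rgb, r, mode="edge")  # replicate borders
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A single near-zero channel value drags its whole neighbourhood down.
hazy = np.full((8, 8, 3), 0.9)
clear = hazy.copy()
clear[4, 4, 2] = 0.0
print(dark_channel(hazy, patch=3)[0, 0], dark_channel(clear, patch=3)[4, 4])  # → 0.9 0.0
```

The double loop is the readable-but-slow formulation; a production version would use a vectorized minimum filter (e.g. `scipy.ndimage.minimum_filter`) over `min_rgb`.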
arXiv Detail & Related papers (2025-07-15T06:56:56Z)
- Kernel Space Diffusion Model for Efficient Remote Sensing Pansharpening [8.756657890124766]
The Kernel Space Diffusion Model (KSDiff) is a novel approach that leverages diffusion processes in a latent space to generate convolutional kernels enriched with global contextual information. Experiments on three widely used datasets, including WorldView-3, GaoFen-2, and QuickBird, demonstrate the superior performance of KSDiff both qualitatively and quantitatively.
arXiv Detail & Related papers (2025-05-25T06:25:31Z)
- FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via Frequency-Decoupled Alignment and Degradation-Robust Fusion [63.87313550399871]
Image-event joint depth estimation methods leverage complementary modalities for robust perception, yet face challenges in generalizability. We propose Self-supervised Transfer (PST) and a Frequency-Decoupled Fusion module (FreDF). PST establishes cross-modal knowledge transfer through latent space alignment with image foundation models. FreDF explicitly decouples high-frequency edge features from low-frequency structural components, resolving modality-specific frequency mismatches.
arXiv Detail & Related papers (2025-03-25T15:04:53Z)
- Reward-Guided Iterative Refinement in Diffusion Models at Test-Time with Applications to Protein and DNA Design [87.58981407469977]
We propose a novel framework for inference-time reward optimization with diffusion models, inspired by evolutionary algorithms. Our approach employs an iterative refinement process consisting of two steps in each iteration: noising and reward-guided denoising.
arXiv Detail & Related papers (2025-02-20T17:48:45Z)
- Uncertainty-Aware Unsupervised Image Deblurring with Deep Residual Prior [23.417096880297702]
Non-blind deblurring methods achieve decent performance under the accurate-blur-kernel assumption.
Hand-crafted priors, which incorporate domain knowledge, generally perform well but may yield poor results when the kernel (or induced) error is complex.
Data-driven priors, which depend heavily on the diversity and abundance of training data, are vulnerable to out-of-distribution blurs and images.
We propose an unsupervised semi-blind deblurring model that recovers the latent image from a blurry image and an inaccurate blur kernel.
arXiv Detail & Related papers (2022-10-09T11:10:59Z)
- Deep Constrained Least Squares for Blind Image Super-Resolution [36.71106982590893]
We tackle the problem of blind image super-resolution (SR) with a reformulated degradation model and two novel modules.
To be more specific, we first reformulate the degradation model such that the deblurring kernel estimation can be transferred into the low resolution space.
Our experiments demonstrate that the proposed method achieves better accuracy and visual improvements against state-of-the-art methods.
arXiv Detail & Related papers (2022-02-15T15:32:11Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution [130.32026819172256]
Existing blind image super-resolution (SR) methods mostly assume blur kernels are spatially invariant across the whole image.
This paper proposes a mutual affine network (MANet) for spatially variant kernel estimation.
arXiv Detail & Related papers (2021-08-11T16:11:17Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.