Sharing the Learned Knowledge-base to Estimate Convolutional Filter Parameters for Continual Image Restoration
- URL: http://arxiv.org/abs/2511.05421v1
- Date: Fri, 07 Nov 2025 16:52:42 GMT
- Title: Sharing the Learned Knowledge-base to Estimate Convolutional Filter Parameters for Continual Image Restoration
- Authors: Aupendu Kar, Krishnendu Ghosh, Prabir Kumar Biswas
- Abstract summary: We propose a simple modification of the convolution layer to adapt the knowledge from previous restoration tasks without touching the main backbone architecture. Unlike other approaches, we demonstrate that our model can increase the number of trainable parameters without significantly increasing computational overhead or inference time.
- Score: 7.116541784404478
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Continual learning is an emerging topic in the field of deep learning, where a model is expected to learn continuously for new upcoming tasks without forgetting previous experiences. This field has witnessed numerous advancements, but few works have been attempted in the direction of image restoration. Handling large image sizes and the divergent nature of various degradations poses a unique challenge in the restoration domain. However, existing works require heavily engineered architectural modifications for new task adaptation, resulting in significant computational overhead. Regularization-based methods are unsuitable for restoration, as different restoration challenges require different kinds of feature processing. In this direction, we propose a simple modification of the convolution layer to adapt the knowledge from previous restoration tasks without touching the main backbone architecture. Therefore, it can be seamlessly applied to any deep architecture without any structural modifications. Unlike other approaches, we demonstrate that our model can increase the number of trainable parameters without significantly increasing computational overhead or inference time. Experimental validation demonstrates that new restoration tasks can be introduced without compromising the performance of existing tasks. We also show that performance on new restoration tasks improves by adapting the knowledge from the knowledge base created by previous restoration tasks. The code is available at https://github.com/aupendu/continual-restore.
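The abstract describes estimating a convolution layer's filter parameters from a shared, learned knowledge base so that new tasks can be added without modifying the backbone. The following is a minimal NumPy sketch of one way such a layer could work; it is an illustrative assumption, not the authors' implementation (see their repository for the actual code), and all names, shapes, and the basis-mixing scheme are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_kernel(basis, coeffs):
    """Mix K shared basis filters (K, C_out, C_in, k, k) with per-task
    coefficients (K,) into one task-specific kernel (C_out, C_in, k, k)."""
    return np.tensordot(coeffs, basis, axes=(0, 0))

def conv2d(x, kernel):
    """Naive 'valid' 2-D convolution: x is (C_in, H, W),
    kernel is (C_out, C_in, k, k)."""
    c_out, _, k, _ = kernel.shape
    _, h, w = x.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for o in range(c_out):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[o, i, j] = np.sum(kernel[o] * x[:, i:i + k, j:j + k])
    return out

# Shared knowledge base: K = 4 basis filters, frozen after earlier tasks.
basis = rng.standard_normal((4, 8, 3, 3, 3))
# Task-specific trainable part: only K coefficients per conv layer,
# so adding a task adds few parameters and leaves the backbone untouched.
task_coeffs = rng.standard_normal(4)

kernel = estimate_kernel(basis, task_coeffs)
y = conv2d(rng.standard_normal((3, 16, 16)), kernel)
print(kernel.shape, y.shape)  # (8, 3, 3, 3) (8, 14, 14)
```

At inference, the mixed kernel can be precomputed once per task, so the per-image cost is that of an ordinary convolution, which is consistent with the abstract's claim of adding trainable parameters without significant inference overhead.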
Related papers
- From Physical Degradation Models to Task-Aware All-in-One Image Restoration [44.45223512440674]
All-in-one image restoration aims to adaptively handle multiple restoration tasks with a single trained model. We adopt a physical degradation modeling perspective and predict a task-aware inverse degradation operator for efficient all-in-one image restoration.
arXiv Detail & Related papers (2026-01-15T08:47:10Z)
- Image Restoration via Multi-domain Learning [8.909636477353695]
We introduce a novel restoration framework, which integrates multi-domain learning into Transformer. Specifically, in Token Mixer, we propose a Spatial-Wavelet-Fourier multi-domain structure that facilitates local-region-global multi-receptive field modeling. In Feed-Forward Network, we incorporate multi-scale learning to fuse multi-domain features at different resolutions.
arXiv Detail & Related papers (2025-05-07T04:14:51Z)
- Cat-AIR: Content and Task-Aware All-in-One Image Restoration [50.46278224313221]
Cat-AIR is a novel Content- and Task-aware framework for all-in-one Image Restoration. Cat-AIR incorporates an alternating spatial-channel attention mechanism that adaptively balances the local and global information for different tasks. Experiments demonstrate that Cat-AIR achieves state-of-the-art results across a wide range of restoration tasks, requiring fewer FLOPs than previous methods.
arXiv Detail & Related papers (2025-03-23T03:25:52Z)
- UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity [79.90839080916913]
We present our UniRestorer with improved restoration performance. Specifically, we perform hierarchical clustering on degradation space, and train a multi-granularity mixture-of-experts (MoE) restoration model. In contrast to existing degradation-agnostic and -aware methods, UniRestorer can leverage degradation estimation to benefit degradation-specific restoration.
arXiv Detail & Related papers (2024-12-28T14:09:08Z)
- Restorer: Removing Multi-Degradation with All-Axis Attention and Prompt Guidance [12.066756224383827]
Restorer is a novel Transformer-based all-in-one image restoration model.
It can handle composite degradation in real-world scenarios without requiring additional training.
It is efficient during inference, suggesting the potential in real-world applications.
arXiv Detail & Related papers (2024-06-18T13:18:32Z)
- AdaIR: Exploiting Underlying Similarities of Image Restoration Tasks with Adapters [57.62742271140852]
AdaIR is a novel framework that enables low storage cost and efficient training without sacrificing performance.
AdaIR requires solely the training of lightweight, task-specific modules, ensuring a more efficient storage and training regimen.
arXiv Detail & Related papers (2024-04-17T15:31:06Z)
- Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration [50.81374327480445]
We introduce a novel concept positing that intricate image degradation can be represented in terms of elementary degradations.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS).
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
arXiv Detail & Related papers (2024-01-24T04:25:12Z)
- Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z)
- SPIRE: Semantic Prompt-Driven Image Restoration [66.26165625929747]
We develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework.
Our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength.
Our experiments demonstrate the superior restoration performance of SPIRE compared to the state of the art.
arXiv Detail & Related papers (2023-12-18T17:02:30Z)
- Heterogeneous Continual Learning [88.53038822561197]
We propose a novel framework to tackle the continual learning (CL) problem with changing network architectures.
We build on top of the distillation family of techniques and modify it to a new setting where a weaker model takes the role of a teacher.
We also propose Quick Deep Inversion (QDI) to recover prior task visual features to support knowledge transfer.
arXiv Detail & Related papers (2023-06-14T15:54:42Z)
- LIRA: Lifelong Image Restoration from Unknown Blended Distortions [33.91806781681914]
We propose a novel lifelong image restoration problem for blended distortions.
We first design a base fork-join model in which multiple pre-trained expert models, each specializing in an individual distortion removal task, work cooperatively.
We develop a neural growing strategy where the previously trained model can incorporate a new expert branch and continually accumulate new knowledge.
arXiv Detail & Related papers (2020-08-19T03:35:45Z)
- Blind Image Restoration without Prior Knowledge [0.22940141855172028]
We present the Self-Normalization Side-Chain (SCNC), a novel approach to blind universal restoration in which no prior knowledge of the degradation is needed.
The SCNC can be added to any existing CNN topology, and is trained along with the rest of the network in an end-to-end manner.
arXiv Detail & Related papers (2020-03-03T19:57:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.