Knowledge Distillation for Image Restoration: Simultaneous Learning from Degraded and Clean Images
- URL: http://arxiv.org/abs/2501.09268v1
- Date: Thu, 16 Jan 2025 03:35:23 GMT
- Title: Knowledge Distillation for Image Restoration: Simultaneous Learning from Degraded and Clean Images
- Authors: Yongheng Zhang, Danfeng Yan
- Abstract summary: We propose a Simultaneous Learning Knowledge Distillation (SLKD) framework tailored for model compression in image restoration tasks.
SLKD employs a dual-teacher, single-student architecture with two distinct learning strategies applied simultaneously: Degradation Removal Learning (DRL) and Image Reconstruction Learning (IRL).
Experimental results across five datasets and three tasks demonstrate that SLKD achieves substantial reductions in FLOPs and parameters, exceeding 80%, while maintaining strong image restoration performance.
- Abstract: Model compression through knowledge distillation has seen extensive application in classification and segmentation tasks. However, its potential in image-to-image translation, particularly in image restoration, remains underexplored. To address this gap, we propose a Simultaneous Learning Knowledge Distillation (SLKD) framework tailored for model compression in image restoration tasks. SLKD employs a dual-teacher, single-student architecture with two distinct learning strategies applied simultaneously: Degradation Removal Learning (DRL) and Image Reconstruction Learning (IRL). In DRL, the student encoder learns from Teacher A to focus on removing degradation factors, guided by a novel BRISQUE extractor. In IRL, the student decoder learns from Teacher B to reconstruct clean images, with the assistance of a proposed PIQE extractor. These strategies enable the student to learn from degraded and clean images simultaneously, ensuring high-quality compression of image restoration models. Experimental results across five datasets and three tasks demonstrate that SLKD achieves substantial reductions in FLOPs and parameters, exceeding 80%, while maintaining strong image restoration performance.
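The abstract describes the architecture only at a high level, and this listing contains no reference code. Below is a minimal PyTorch sketch of the dual-teacher, single-student training step under several assumptions: the networks are reduced to toy convolutional stubs, the BRISQUE- and PIQE-based extractors are abstracted away, and plain L1 feature/output matching stands in for the paper's distillation losses. It illustrates learning from a degraded-input teacher (DRL) and a clean-image teacher (IRL) in the same step, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class Student(nn.Module):
    """Compressed student: encoder for degradation removal, decoder for reconstruction."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = conv_block(3, ch)
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)
    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.decoder(feat)

# Frozen toy teachers; in the paper these are full restoration models.
teacher_a = conv_block(3, 32).eval()              # Teacher A: guides degradation removal (DRL)
teacher_b = nn.Conv2d(3, 3, 3, padding=1).eval()  # Teacher B: guides clean-image reconstruction (IRL)
student = Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def slkd_step(degraded, clean, w_drl=1.0, w_irl=1.0, w_rec=1.0):
    """One training step distilling from both teachers simultaneously (assumed losses)."""
    with torch.no_grad():
        t_a_feat = teacher_a(degraded)   # teacher features on the degraded input
        t_b_out = teacher_b(clean)       # teacher reconstruction of the clean image
    s_feat, s_out = student(degraded)
    loss = (w_drl * F.l1_loss(s_feat, t_a_feat)   # DRL: student encoder mimics Teacher A
            + w_irl * F.l1_loss(s_out, t_b_out)   # IRL: student decoder mimics Teacher B
            + w_rec * F.l1_loss(s_out, clean))    # standard supervised restoration loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(slkd_step(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)))
```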
Related papers
- Soft Knowledge Distillation with Multi-Dimensional Cross-Net Attention for Image Restoration Models Compression [0.0]
Transformer-based encoder-decoder models have achieved remarkable success in image-to-image transfer tasks.
However, their high computational complexity, reflected in elevated FLOPs and parameter counts, limits their application in real-world scenarios.
We propose a Soft Knowledge Distillation (SKD) strategy that incorporates a Multi-dimensional Cross-net Attention (MCA) mechanism for compressing image restoration models.
arXiv Detail & Related papers (2025-01-16T06:25:56Z)
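For the SKD entry above, the following is a hedged sketch of what a cross-net attention distillation term can look like: student features form the queries, teacher features form the keys and values, and the attended teacher representation supervises the student. The names are illustrative, and the paper's multi-dimensional (channel plus spatial) formulation is collapsed to a single spatial attention for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossNetAttention(nn.Module):
    """Toy cross-net attention: student features query teacher features."""
    def __init__(self, s_ch, t_ch, dim=64):
        super().__init__()
        self.q = nn.Conv2d(s_ch, dim, 1)   # queries from the student
        self.k = nn.Conv2d(t_ch, dim, 1)   # keys from the teacher
        self.v = nn.Conv2d(t_ch, s_ch, 1)  # values projected back to student width

    def forward(self, s_feat, t_feat):
        b, _, h, w = s_feat.shape
        q = self.q(s_feat).flatten(2).transpose(1, 2)   # (B, HW, dim)
        k = self.k(t_feat).flatten(2)                   # (B, dim, HW)
        v = self.v(t_feat).flatten(2).transpose(1, 2)   # (B, HW, s_ch)
        attn = torch.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)
        return (attn @ v).transpose(1, 2).reshape(b, -1, h, w)

# Distillation term: pull student features toward the attention-fused teacher
# representation; the projection layers would typically be trained jointly
# with the student.
mca = CrossNetAttention(s_ch=32, t_ch=64)
s_feat = torch.rand(2, 32, 16, 16)   # hypothetical student feature map
t_feat = torch.rand(2, 64, 16, 16)   # hypothetical teacher feature map
skd_loss = F.l1_loss(s_feat, mca(s_feat, t_feat))
```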
- Dynamic Contrastive Knowledge Distillation for Efficient Image Restoration [17.27061613884289]
We propose a novel dynamic contrastive knowledge distillation (DCKD) framework for image restoration.
Specifically, we introduce dynamic contrastive regularization to perceive the student's learning state.
We also propose a distribution mapping module to extract and align the pixel-level category distribution of the teacher and student models.
arXiv Detail & Related papers (2024-12-12T05:01:17Z)
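For the DCKD entry above, a minimal contrastive distillation term can be sketched as a ratio of distances: pull the student output toward the frozen teacher output (positive) and push it away from degraded negatives. The dynamic weighting of negatives and the distribution-mapping module described in the abstract are omitted; this is only the basic contrastive regularization form, with all names assumed.

```python
import torch
import torch.nn.functional as F

def contrastive_distill_loss(student_out, teacher_out, negatives, eps=1e-7):
    """Toy contrastive distillation: positive = teacher output, negatives = degraded images."""
    d_pos = F.l1_loss(student_out, teacher_out.detach())
    d_neg = torch.stack([F.l1_loss(student_out, n.detach()) for n in negatives]).mean()
    return d_pos / (d_neg + eps)

student_out = torch.rand(2, 3, 64, 64, requires_grad=True)  # e.g. student restoration result
teacher_out = torch.rand(2, 3, 64, 64)                      # frozen teacher restoration result
negatives = [torch.rand(2, 3, 64, 64) for _ in range(3)]    # e.g. degraded inputs as negatives
loss = contrastive_distill_loss(student_out, teacher_out, negatives)
loss.backward()
```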
- UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation [50.27688690379488]
Existing unified methods treat multi-degradation image restoration as a multi-task learning problem.
We propose a universal image restoration framework based on multiple low-rank adapters (LoRA) from multi-domain transfer learning.
Our framework leverages the pre-trained generative model as the shared component for multi-degradation restoration and transfers it to specific degradation image restoration tasks.
arXiv Detail & Related papers (2024-09-30T11:16:56Z)
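For the UIR-LoRA entry above, the core mechanism is low-rank adaptation of a shared pre-trained model, with one adapter per degradation domain. The sketch below shows the generic LoRA pattern on a single linear layer, with a frozen base weight and selectable low-rank factors; the adapter count, rank, and selection-by-id interface are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus several selectable low-rank adapters."""
    def __init__(self, in_dim, out_dim, rank=4, num_adapters=3, alpha=1.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)   # shared pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.down = nn.ParameterList(
            [nn.Parameter(torch.randn(in_dim, rank) * 0.01) for _ in range(num_adapters)])
        self.up = nn.ParameterList(
            [nn.Parameter(torch.zeros(rank, out_dim)) for _ in range(num_adapters)])
        self.alpha = alpha

    def forward(self, x, adapter_id):
        # y = W0 x + alpha * (x A_i) B_i  -- only the low-rank factors are trained
        return self.base(x) + self.alpha * (x @ self.down[adapter_id]) @ self.up[adapter_id]

layer = LoRALinear(64, 64)
x = torch.rand(8, 64)
y_derain = layer(x, adapter_id=0)   # e.g. adapter 0 tuned for deraining
y_dehaze = layer(x, adapter_id=1)   # e.g. adapter 1 tuned for dehazing
```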
- Perceive-IR: Learning to Perceive Degradation Better for All-in-One Image Restoration [33.163161549726446]
Perceive-IR is an all-in-one image restorer designed to achieve fine-grained quality control.
In the prompt learning stage, we leverage prompt learning to acquire a fine-grained quality perceiver capable of distinguishing three-tier quality levels.
For the restoration stage, a semantic guidance module and compact feature extraction are proposed to further promote the restoration process.
arXiv Detail & Related papers (2024-08-28T17:58:54Z)
- ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-traning Model [49.587821411012705]
We propose ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-traning Model.
It distills the knowledge from a large teacher CLIP model into a smaller student model, ensuring comparable performance with significantly reduced parameters.
EduAttention explores the cross-relationships between text features extracted by the teacher model and image features extracted by the student model.
arXiv Detail & Related papers (2024-08-08T01:12:21Z)
- Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models [14.25759541950917]
This work leverages a capable vision-language model and a synthetic degradation pipeline to learn image restoration in the wild (wild IR).
Our base diffusion model is the image restoration SDE (IR-SDE).
arXiv Detail & Related papers (2024-04-15T12:34:21Z)
- InstructIR: High-Quality Image Restoration Following Human Instructions [61.1546287323136]
We present the first approach that uses human-written instructions to guide the image restoration model.
Our method, InstructIR, achieves state-of-the-art results on several restoration tasks.
arXiv Detail & Related papers (2024-01-29T18:53:33Z)
- DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [70.46245698746874]
We present DiffBIR, a general restoration pipeline that can handle different blind image restoration tasks.
DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal, which removes image-independent content, and 2) information regeneration, which generates the lost image content.
In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results.
For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details.
arXiv Detail & Related papers (2023-08-29T07:11:52Z)
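For the DiffBIR entry above, the two-stage decoupling can be illustrated with two placeholder modules: a regression-style stage that removes degradations for fidelity, followed by a generative stage conditioned on the first result to add realistic detail. In the actual paper the second stage is a latent diffusion model steered by IRControlNet; the stub below only shows the data flow of the decoupling, under assumed module names.

```python
import torch
import torch.nn as nn

class TwoStageBlindRestoration(nn.Module):
    """Sketch of a decoupled restore-then-regenerate pipeline (placeholder modules)."""
    def __init__(self, ch=32):
        super().__init__()
        self.degradation_removal = nn.Sequential(        # stage 1: remove degradations (fidelity)
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(), nn.Conv2d(ch, 3, 3, padding=1))
        self.detail_generator = nn.Sequential(           # stage 2: regenerate lost detail,
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),   # conditioned on input + coarse result
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, lq):
        clean_coarse = self.degradation_removal(lq)
        cond = torch.cat([lq, clean_coarse], dim=1)
        return clean_coarse + self.detail_generator(cond)

model = TwoStageBlindRestoration()
out = model(torch.rand(1, 3, 64, 64))   # -> (1, 3, 64, 64)
```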
- Wide & deep learning for spatial & intensity adaptive image restoration [16.340992967330603]
We propose an ingenious and efficient multi-frame image restoration network (DparNet) with a wide & deep architecture.
The degradation prior is learned directly from degraded images in the form of a key degradation parameter matrix.
The wide & deep architecture in DparNet enables the learned parameters to directly modulate the final restoring results.
arXiv Detail & Related papers (2023-05-30T03:24:09Z)
- Restoring Vision in Hazy Weather with Hierarchical Contrastive Learning [53.85892601302974]
We propose an effective image dehazing method named Hierarchical Contrastive Dehazing (HCD).
HCD consists of a hierarchical dehazing network (HDN) and a novel hierarchical contrastive loss (HCL).
arXiv Detail & Related papers (2022-12-22T03:57:06Z)
- Knowledge Distillation based Degradation Estimation for Blind Super-Resolution [146.0988597062618]
Blind image super-resolution (Blind-SR) aims to recover a high-resolution (HR) image from its corresponding low-resolution (LR) input image with unknown degradations.
It is infeasible to provide concrete labels for every combination of degradations to supervise training of the degradation estimator.
We propose a knowledge distillation based implicit degradation estimator network (KD-IDE) and an efficient SR network.
arXiv Detail & Related papers (2022-11-30T11:59:07Z)
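For the KD-IDE entry above, the key idea is that the degradation estimator is trained implicitly through distillation rather than with explicit degradation labels. The sketch below assumes a teacher that sees both the LR input and a downscaled HR reference while the student sees only the LR input, and aligns their degradation embeddings with an L1 loss; the encoders, inputs, and loss are all illustrative choices, not the KD-IDE architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoders; in the real setting the teacher's features implicitly encode
# the degradation, so no explicit degradation labels are required.
def encoder(in_ch, dim=64):
    return nn.Sequential(
        nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())           # -> (B, dim) degradation embedding

teacher_est = encoder(in_ch=6).eval()    # hypothetical teacher: sees LR + downscaled HR
student_est = encoder(in_ch=3)           # student: sees only the LR input

lr = torch.rand(4, 3, 32, 32)
hr = torch.rand(4, 3, 128, 128)
hr_down = F.interpolate(hr, size=lr.shape[-2:], mode='bicubic', align_corners=False)

with torch.no_grad():
    t_embed = teacher_est(torch.cat([lr, hr_down], dim=1))
s_embed = student_est(lr)

# Distillation: the student's implicit degradation estimate mimics the teacher's,
# standing in for unavailable labels of the unknown degradation combination.
kd_loss = F.l1_loss(s_embed, t_embed)
```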