SDAKD: Student Discriminator Assisted Knowledge Distillation for Super-Resolution Generative Adversarial Networks
- URL: http://arxiv.org/abs/2510.03870v1
- Date: Sat, 04 Oct 2025 16:40:18 GMT
- Title: SDAKD: Student Discriminator Assisted Knowledge Distillation for Super-Resolution Generative Adversarial Networks
- Authors: Nikolaos Kaparinos, Vasileios Mezaris
- Abstract summary: Student Discriminator Assisted Knowledge Distillation (SDAKD) is a novel GAN distillation methodology that introduces a student discriminator to mitigate this capacity mismatch. Our experiments demonstrate consistent improvements over the baselines and SOTA GAN knowledge distillation methods.
- Score: 5.972927416266618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) achieve excellent performance in generative tasks, such as image super-resolution, but their computational requirements make their deployment on resource-constrained devices difficult. While knowledge distillation is a promising research direction for GAN compression, effectively training a smaller student generator is challenging due to the capacity mismatch between the student generator and the teacher discriminator. In this work, we propose Student Discriminator Assisted Knowledge Distillation (SDAKD), a novel GAN distillation methodology that introduces a student discriminator to mitigate this capacity mismatch. SDAKD follows a three-stage training strategy and integrates an adapted feature map distillation approach in its last two training stages. We evaluated SDAKD on two well-performing super-resolution GANs, GCFSR and Real-ESRGAN. Our experiments demonstrate consistent improvements over the baselines and SOTA GAN knowledge distillation methods. The SDAKD source code will be made openly available upon acceptance of the paper.
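To make the described setup concrete, the following is a minimal PyTorch-style sketch of what one later-stage SDAKD-like training step could look like. Everything here is an assumption for illustration: the `return_feats` interface, the L1 reconstruction and feature losses, and the weights `w_feat` and `w_adv` are hypothetical, and the paper's exact three-stage procedure is not reproduced.

```python
# Hypothetical sketch of a student-discriminator-assisted distillation step.
# Assumes: teacher_G is frozen; student_D is sized to match the student
# generator; generators expose intermediate features via return_feats=True;
# teacher/student feature shapes match (in practice a 1x1-conv adapter
# would be needed to align channel counts).
import torch
import torch.nn.functional as F

def sdakd_style_step(teacher_G, student_G, student_D, opt_G, opt_D,
                     lr_img, w_feat=1.0, w_adv=0.1):
    with torch.no_grad():
        t_sr, t_feats = teacher_G(lr_img, return_feats=True)
    s_sr, s_feats = student_G(lr_img, return_feats=True)

    # 1) Update the student discriminator: teacher outputs as "real",
    #    detached student outputs as "fake".
    d_real = student_D(t_sr)
    d_fake = student_D(s_sr.detach())
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Update the student generator: pixel loss to the teacher output,
    #    feature-map distillation, and an adversarial term from the
    #    capacity-matched student discriminator.
    loss_pix = F.l1_loss(s_sr, t_sr)
    loss_feat = sum(F.l1_loss(s, t) for s, t in zip(s_feats, t_feats))
    d_fake = student_D(s_sr)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_G = loss_pix + w_feat * loss_feat + w_adv * loss_adv
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item(), loss_D.item()
```

The point of the student discriminator in this sketch is that its capacity tracks the student generator's, so its gradients stay informative where a full teacher discriminator would overwhelm the small student.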
Related papers
- Relative Difficulty Distillation for Semantic Segmentation [54.76143187709987]
We propose a pixel-level KD paradigm for semantic segmentation named Relative Difficulty Distillation (RDD).
RDD allows the teacher network to provide effective guidance on learning focus without additional optimization goals.
Our research showcases that RDD can integrate with existing KD methods to improve their upper performance bound (a code sketch follows this entry).
arXiv Detail & Related papers (2024-07-04T08:08:25Z)
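Below is one plausible, hypothetical reading of difficulty-aware pixel-level KD in the spirit of RDD; the paper's actual difficulty measure and weighting scheme may differ. Here per-pixel teacher entropy serves as a stand-in difficulty signal.

```python
# Hedged sketch: pixel-level KD with a relative-difficulty-style weight.
# The entropy-based difficulty proxy is an assumption, not RDD's definition.
import torch
import torch.nn.functional as F

def difficulty_weighted_kd(student_logits, teacher_logits, T=4.0):
    # Logits: (B, C, H, W). Soften both distributions with temperature T.
    p_t = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)

    # Per-pixel KL divergence, reduced over the class dimension -> (B, H, W).
    kl = (p_t * (p_t.clamp_min(1e-8).log() - log_p_s)).sum(dim=1)

    # Difficulty proxy: teacher entropy per pixel (higher = harder pixel),
    # normalized so the weights average to roughly one.
    entropy = -(p_t * p_t.clamp_min(1e-8).log()).sum(dim=1)
    weight = (entropy / entropy.mean().clamp_min(1e-8)).detach()

    return (weight * kl).mean() * (T * T)  # T^2 keeps gradient scale stable
```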
- Grouped Knowledge Distillation for Deep Face Recognition [53.57402723008569]
The lightweight student network has difficulty fitting the target logits due to its low model capacity.
We propose a Grouped Knowledge Distillation (GKD) that retains the Primary-KD and Binary-KD but omits Secondary-KD in the ultimate KD loss calculation.
arXiv Detail & Related papers (2023-04-10T09:04:38Z)
- Discriminator-Cooperated Feature Map Distillation for GAN Compression [69.86835014810714]
We present an inventive discriminator-cooperated distillation, abbreviated as DCD, to refine better feature maps from the generator.
Our DCD shows superior results compared with existing GAN compression methods (a code sketch follows this entry).
arXiv Detail & Related papers (2022-12-29T03:50:27Z)
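A hedged sketch of the discriminator-cooperated idea: instead of matching raw generator feature maps, teacher and student features are both pushed through a frozen discriminator prefix and matched in that space. The `disc_prefix` module and the L1 matching are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical discriminator-cooperated feature matching.
# Assumes disc_prefix is a frozen slice of the discriminator
# (parameters set with requires_grad_(False)) that accepts the
# generators' intermediate feature maps.
import torch
import torch.nn.functional as F

def discriminator_cooperated_loss(student_feats, teacher_feats, disc_prefix):
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        with torch.no_grad():
            t_emb = disc_prefix(t)   # teacher embedding, no gradient needed
        s_emb = disc_prefix(s)       # gradients flow back to the student only
        loss = loss + F.l1_loss(s_emb, t_emb)
    return loss
```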
- Exploring Content Relationships for Distilling Efficient GANs [69.86835014810714]
This paper proposes a content relationship distillation (CRD) to tackle over-parameterized generative adversarial networks (GANs).
In contrast to traditional instance-level distillation, we design a novel GAN-compression-oriented knowledge by slicing the contents of teacher outputs into multiple fine-grained granularities.
Built upon our proposed content-level distillation, we also deploy an online teacher discriminator, which keeps updating when co-trained with the teacher generator and stays frozen when co-trained with the student generator, for better adversarial training (a code sketch follows this entry).
arXiv Detail & Related papers (2022-12-21T15:38:12Z)
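One hypothetical way to read "slicing contents into multiple fine-grained granularities": cut teacher and student outputs into patches at several sizes and match the pairwise similarity structure among patches rather than the raw pixels. The patch sizes and the cosine-similarity relation below are assumptions, not CRD's exact construction.

```python
# Hedged sketch of content-relationship distillation over patch granularities.
# Assumes output spatial dims are divisible by each patch size.
import torch
import torch.nn.functional as F

def content_relationship_loss(s_out, t_out, patch_sizes=(8, 16)):
    loss = 0.0
    for p in patch_sizes:
        # Unfold to non-overlapping patches: (B, C*p*p, N) with N patches.
        s_patches = F.normalize(F.unfold(s_out, kernel_size=p, stride=p), dim=1)
        t_patches = F.normalize(F.unfold(t_out.detach(), kernel_size=p, stride=p), dim=1)
        # Pairwise cosine similarities between patches: (B, N, N).
        s_rel = s_patches.transpose(1, 2) @ s_patches
        t_rel = t_patches.transpose(1, 2) @ t_patches
        loss = loss + F.l1_loss(s_rel, t_rel)
    return loss
```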
- Feature-domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution [3.2453621806729234]
CNN-based SISR models require numerous parameters and high computational cost to achieve better performance.
Knowledge Distillation (KD) transfers the teacher's useful knowledge to the student.
We propose a feature-domain adaptive contrastive distillation (FACD) method for efficiently training lightweight student SISR networks (a code sketch follows this entry).
arXiv Detail & Related papers (2022-11-29T06:24:14Z)
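The contrastive part can be sketched as follows, with the usual caveat: this is a plain in-batch InfoNCE construction for feature distillation, and FACD's adaptive aspects are not reproduced here.

```python
# Hedged sketch of feature-domain contrastive distillation.
# Positive pair: student/teacher features of the same image;
# negatives: teacher features of other images in the batch.
import torch
import torch.nn.functional as F

def contrastive_feature_kd(s_feat, t_feat, tau=0.1):
    # s_feat, t_feat: (B, C, H, W) -> global-average-pooled, normalized (B, C).
    s = F.normalize(s_feat.mean(dim=(2, 3)), dim=1)
    t = F.normalize(t_feat.mean(dim=(2, 3)), dim=1).detach()
    logits = s @ t.t() / tau                   # (B, B); diagonal = positives
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)
```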
- Mind the Gap in Distilling StyleGANs [100.58444291751015]
The StyleGAN family is one of the most popular Generative Adversarial Networks (GANs) for unconditional generation.
This paper provides a comprehensive study of distilling from the popular StyleGAN-like architecture.
arXiv Detail & Related papers (2022-08-18T14:18:29Z)
- Parameter-Efficient and Student-Friendly Knowledge Distillation [83.56365548607863]
We present a parameter-efficient and student-friendly knowledge distillation method, namely PESF-KD, to achieve efficient and sufficient knowledge transfer.
Experiments on a variety of benchmarks show that PESF-KD can significantly reduce the training cost while obtaining competitive results compared to advanced online distillation methods.
arXiv Detail & Related papers (2022-05-28T16:11:49Z)
- ErGAN: Generative Adversarial Networks for Entity Resolution [8.576633582363202]
A major challenge in learning-based entity resolution is how to reduce the label cost for training.
We propose a novel deep learning method, called ErGAN, to address the challenge.
We have conducted extensive experiments to empirically verify the labeling and learning efficiency of ErGAN.
arXiv Detail & Related papers (2020-12-18T01:33:58Z)
- P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection [24.46562699161406]
One-class novelty detection aims to identify anomalous instances that do not conform to the expected normal instances.
Deep neural networks are too over-parameterized to deploy on resource-limited devices.
Progressive Knowledge Distillation with GANs (P-KDGAN) is proposed to learn compact and fast novelty detection networks (a code sketch follows this entry).
arXiv Detail & Related papers (2020-07-14T10:44:57Z)
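As a rough, hypothetical illustration of "progressive" distillation, the sketch below shifts the distillation signal from intermediate features toward final outputs as training advances; P-KDGAN's actual two-step scheme and GAN losses are not reproduced.

```python
# Hedged sketch: progressively re-weighted distillation loss.
# The linear stage schedule (alpha) is an assumption for illustration.
import torch
import torch.nn.functional as F

def progressive_kd_loss(s_feats, t_feats, s_out, t_out, stage, num_stages=2):
    # Feature-level and output-level distillation against a frozen teacher.
    feat_loss = sum(F.l1_loss(s, t.detach()) for s, t in zip(s_feats, t_feats))
    out_loss = F.l1_loss(s_out, t_out.detach())
    alpha = min(stage / max(num_stages - 1, 1), 1.0)  # 0 -> features, 1 -> outputs
    return (1.0 - alpha) * feat_loss + alpha * out_loss
```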