Attention-guided Feature Distillation for Semantic Segmentation
- URL: http://arxiv.org/abs/2403.05451v1
- Date: Fri, 8 Mar 2024 16:57:47 GMT
- Title: Attention-guided Feature Distillation for Semantic Segmentation
- Authors: Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, Shohreh Kasaei
- Abstract summary: The proposed Attention-guided Feature Distillation (AttnFD) method employs the Convolutional Block Attention Module (CBAM).
AttnFD demonstrates outstanding performance in semantic segmentation, achieving state-of-the-art results in terms of mean Intersection over Union (mIoU) on the Pascal VOC 2012 and Cityscapes datasets.
- Score: 9.115508086522887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In contrast to existing complex methodologies commonly employed for
distilling knowledge from a teacher to a student, the proposed method
showcases the efficacy of a simple yet powerful approach for utilizing refined
feature maps to transfer attention. The proposed method has proven to be
effective in distilling rich information, outperforming existing methods in
semantic segmentation as a dense prediction task. The proposed Attention-guided
Feature Distillation (AttnFD) method employs the Convolutional Block
Attention Module (CBAM), which refines feature maps by taking into account both
channel-specific and spatial information content. By only using the Mean
Squared Error (MSE) loss function between the refined feature maps of the
teacher and the student, AttnFD demonstrates outstanding performance in semantic
segmentation, achieving state-of-the-art results in terms of mean Intersection
over Union (mIoU) on the Pascal VOC 2012 and Cityscapes datasets. The code is
available at https://github.com/AmirMansurian/AttnFD.
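The abstract describes the core recipe: refine teacher and student feature maps with CBAM-style channel-then-spatial attention, then penalize their MSE. The sketch below illustrates that loss in NumPy. It is a simplified stand-in, not the authors' implementation: the real CBAM learns a shared MLP for channel attention and a 7x7 convolution for spatial attention, whereas here both are replaced by parameter-free avg/max pooling so the example stays self-contained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_refine(feat):
    """Sequentially apply channel and spatial attention to a feature map
    of shape (C, H, W). Parameter-free stand-in for CBAM: the learned
    MLP / 7x7 conv of the real module are omitted for illustration."""
    # Channel attention: squeeze spatial dims with avg- and max-pooling.
    avg_c = feat.mean(axis=(1, 2))           # (C,)
    max_c = feat.max(axis=(1, 2))            # (C,)
    ch_att = sigmoid(avg_c + max_c)          # (C,)
    feat = feat * ch_att[:, None, None]
    # Spatial attention: squeeze channels with avg- and max-pooling.
    avg_s = feat.mean(axis=0)                # (H, W)
    max_s = feat.max(axis=0)                 # (H, W)
    sp_att = sigmoid(avg_s + max_s)          # (H, W)
    return feat * sp_att[None, :, :]

def attnfd_loss(teacher_feats, student_feats):
    """MSE between CBAM-refined teacher and student feature maps,
    averaged over the distilled layers."""
    losses = [np.mean((cbam_refine(t) - cbam_refine(s)) ** 2)
              for t, s in zip(teacher_feats, student_feats)]
    return float(np.mean(losses))

# Two distilled layers with toy (C, H, W) = (8, 4, 4) features.
rng = np.random.default_rng(0)
t_feats = [rng.normal(size=(8, 4, 4)) for _ in range(2)]
s_feats = [rng.normal(size=(8, 4, 4)) for _ in range(2)]
print(attnfd_loss(t_feats, s_feats))   # non-negative distillation loss
print(attnfd_loss(t_feats, t_feats))   # 0.0 for identical features
```

In training, this loss would be added to the student's usual segmentation loss; here the attention weights also shape the gradient, so mismatches in salient regions are penalized more than those in background areas.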
Related papers
- I2CKD : Intra- and Inter-Class Knowledge Distillation for Semantic Segmentation [1.433758865948252]
This paper proposes a new knowledge distillation method tailored for image semantic segmentation, termed Intra- and Inter-Class Knowledge Distillation (I2CKD).
The focus of this method is on capturing and transferring knowledge between the intermediate layers of the teacher (cumbersome model) and the student (compact model).
arXiv Detail & Related papers (2024-03-27T12:05:22Z)
- Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation [51.66997548477913]
We propose a novel feature-level consistency learning framework named Density-Descending Feature Perturbation (DDFP).
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore.
The proposed DDFP outperforms other feature-level perturbation designs and achieves state-of-the-art performance on both the Pascal VOC and Cityscapes datasets.
arXiv Detail & Related papers (2024-03-11T06:59:05Z)
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Knowledge Diffusion for Distillation [53.908314960324915]
The representation gap between teacher and student is an emerging topic in knowledge distillation (KD).
We argue that the essence of these methods is to discard noisy information and distill the valuable information in the features.
We propose a novel KD method, dubbed DiffKD, to explicitly denoise and match features using diffusion models.
arXiv Detail & Related papers (2023-05-25T04:49:34Z)
- Distilling Inter-Class Distance for Semantic Segmentation [17.76592932725305]
We propose an Inter-class Distance Distillation (IDD) method to transfer the inter-class distance in the feature space from the teacher network to the student network.
Our method helps improve the accuracy of semantic segmentation models and achieves state-of-the-art performance.
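The IDD blurb above transfers the *inter-class distances* of the teacher's feature space to the student. A minimal NumPy sketch of that idea: compute per-class mean feature vectors (prototypes), form the pairwise distance matrix for teacher and student, and penalize their MSE. The prototype-based formulation and Euclidean metric here are assumptions made for illustration; see the IDD paper for the exact definition.

```python
import numpy as np

def inter_class_distances(feats, labels, num_classes):
    """Pairwise Euclidean distances between per-class mean feature
    vectors. feats: (N, D) pixel features; labels: (N,) class ids."""
    protos = np.stack([feats[labels == c].mean(axis=0)
                       for c in range(num_classes)])     # (K, D)
    diff = protos[:, None, :] - protos[None, :, :]       # (K, K, D)
    return np.linalg.norm(diff, axis=-1)                 # (K, K)

def idd_loss(t_feats, s_feats, labels, num_classes):
    """MSE between teacher and student inter-class distance matrices."""
    d_t = inter_class_distances(t_feats, labels, num_classes)
    d_s = inter_class_distances(s_feats, labels, num_classes)
    return float(np.mean((d_t - d_s) ** 2))

# Toy example: 100 pixels, 16-dim features, 3 classes.
rng = np.random.default_rng(0)
labels = np.arange(100) % 3          # every class is represented
t = rng.normal(size=(100, 16))
s = rng.normal(size=(100, 16))
print(idd_loss(t, s, labels, 3))     # non-negative relational loss
print(idd_loss(t, t, labels, 3))     # 0.0 for identical feature spaces
```

Because only the *relations* between class prototypes are matched, the student is free to place its features anywhere, as long as classes keep the same relative geometry as in the teacher.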
arXiv Detail & Related papers (2022-05-07T13:13:55Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our method to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method significantly improves performance compared with a supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.