Attention to detail: inter-resolution knowledge distillation
- URL: http://arxiv.org/abs/2401.06010v1
- Date: Thu, 11 Jan 2024 16:16:20 GMT
- Title: Attention to detail: inter-resolution knowledge distillation
- Authors: Rocío del Amor, Julio Silva-Rodríguez, Adrián Colomer and Valery Naranjo
- Abstract summary: Development of computer vision solutions for gigapixel images in digital pathology is hampered by the large size of whole slide images.
Recent literature has proposed using knowledge distillation to enhance the model performance at reduced image resolutions.
In this work, we propose to distill this information by incorporating attention maps during training.
- Score: 1.927195358774599
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of computer vision solutions for gigapixel images in digital
pathology is hampered by significant computational limitations due to the large
size of whole slide images. In particular, digitizing biopsies at high
resolution is a time-consuming process, yet it is necessary because the loss of
image detail at lower resolutions degrades results. To alleviate this issue,
recent literature has proposed using knowledge distillation to enhance the
model performance at reduced image resolutions. In particular, soft labels and
features extracted at the highest magnification level are distilled into a
model that takes lower-magnification images as input. However, this approach
fails to transfer knowledge about the most discriminative image regions in the
classification process, which may be lost when the resolution is decreased. In
this work, we propose to distill this information by incorporating attention
maps during training. In particular, our formulation leverages saliency maps of
the target class via Grad-CAMs, which guide the lower-resolution Student model
to match the Teacher's distribution by minimizing the l2 distance between them.
Comprehensive experiments on prostate histology image grading demonstrate that
the proposed approach substantially improves the model performance across
different image resolutions compared to previous literature.
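The attention-distillation idea above can be sketched in a few lines: compute a Grad-CAM map for the target class in both Teacher and Student, then penalize the l2 distance between the two maps. The PyTorch sketch below is illustrative only — the function names, the normalization, and the toy features standing in for real conv activations are assumptions, not the authors' implementation.

```python
# Minimal sketch of Grad-CAM-based attention distillation, assuming access to
# the last conv feature maps and class logits of each model.
import torch
import torch.nn.functional as F


def grad_cam(features, logits, target_class):
    """Grad-CAM: weight feature maps by the pooled gradients of the class score."""
    score = logits[:, target_class].sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = F.relu((weights * features).sum(dim=1))    # (B, H, W)
    # Normalize to [0, 1] so Teacher and Student maps are comparable
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)


def attention_distillation_loss(cam_teacher, cam_student):
    """l2 (MSE) distance between Student and detached Teacher attention maps."""
    # Resize the Student map to the Teacher's spatial size before comparing
    cam_student = F.interpolate(cam_student.unsqueeze(1),
                                size=cam_teacher.shape[-2:],
                                mode="bilinear", align_corners=False).squeeze(1)
    return F.mse_loss(cam_student, cam_teacher.detach())


# Toy check: random tensors stand in for conv features and a classifier head
torch.manual_seed(0)
w = torch.randn(4, 3)                                   # fake linear head
feat_t = torch.randn(2, 4, 16, 16, requires_grad=True)  # Teacher conv features
feat_s = torch.randn(2, 4, 8, 8, requires_grad=True)    # Student conv features
cam_t = grad_cam(feat_t, feat_t.mean(dim=(2, 3)) @ w, target_class=1)
cam_s = grad_cam(feat_s, feat_s.mean(dim=(2, 3)) @ w, target_class=1)
loss = attention_distillation_loss(cam_t, cam_s)
```

In training, this term would be added to the usual soft-label and feature distillation losses; `create_graph=True` keeps the Student CAM differentiable so the penalty can be backpropagated.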
Related papers
- Super resolution of histopathological frozen sections via deep learning preserving tissue structure [0.0]
We present a new approach to super resolution for histopathology frozen sections.
Our deep-learning architecture focuses on learning the error between interpolated images and real images.
In comparison to existing methods, we obtained significant improvements in terms of Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR).
Our approach has a great potential in providing more-rapid frozen-section imaging, with less scanning, while preserving the high resolution in the imaged sample.
arXiv Detail & Related papers (2023-10-17T09:52:54Z)
- ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models [126.35334860896373]
We investigate the capability of generating images from pre-trained diffusion models at much higher resolutions than the training image sizes.
Existing works for higher-resolution generation, such as attention-based and joint-diffusion approaches, cannot well address these issues.
We propose a simple yet effective re-dilation that can dynamically adjust the convolutional perception field during inference.
arXiv Detail & Related papers (2023-10-11T17:52:39Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis [45.19847146506007]
Diffusion models (DMs) have recently gained attention with state-of-the-art performance in text-to-image synthesis.
This paper focuses on adapting text-to-image diffusion models to handle variety while maintaining visual fidelity.
arXiv Detail & Related papers (2023-06-14T17:23:07Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
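The "mean-teacher model as the supervisor" refers to the standard exponential-moving-average (EMA) teacher; a minimal sketch of that EMA update follows. This is the generic mean-teacher mechanism only, not the CRCKD relation-preserving losses themselves.

```python
# Generic mean-teacher EMA update: the teacher's parameters track an
# exponential moving average of the student's parameters.
import copy

import torch
import torch.nn as nn


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99):
    """teacher <- decay * teacher + (1 - decay) * student, in place."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)


student = nn.Linear(4, 2)
teacher = copy.deepcopy(student)  # the teacher starts as a copy of the student
# ... then, after each optimizer step on the student:
ema_update(teacher, student)
```

The teacher receives no gradients of its own; it changes only through the EMA, which is what makes its predictions a smoothed, more stable supervision signal for distillation.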
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- Enhancing Fine-Grained Classification for Low Resolution Images [97.82441158440527]
Low resolution images suffer from the inherent challenge of limited information content and the absence of fine details useful for sub-category classification.
This research proposes a novel attribute-assisted loss, which utilizes ancillary information to learn discriminative features for classification.
The proposed loss function enables a model to learn class-specific discriminative features, while incorporating attribute-level separability.
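An attribute-assisted loss of this kind can be sketched as a simple two-term objective: class cross-entropy plus a weighted attribute term. The function below is a hypothetical composite for illustration, not the paper's exact formulation.

```python
# Hypothetical attribute-assisted loss: class cross-entropy plus a
# binary-attribute term, combined with an assumed weight `lam`.
import torch
import torch.nn.functional as F


def attribute_assisted_loss(class_logits, attr_logits,
                            class_targets, attr_targets, lam: float = 0.5):
    """Composite loss: CE over classes + lam * BCE over binary attributes."""
    ce = F.cross_entropy(class_logits, class_targets)
    attr = F.binary_cross_entropy_with_logits(attr_logits, attr_targets)
    return ce + lam * attr


# Toy usage: 4 samples, 3 classes, 5 binary attributes
torch.manual_seed(0)
class_logits = torch.randn(4, 3)
attr_logits = torch.randn(4, 5)
class_targets = torch.randint(0, 3, (4,))
attr_targets = torch.randint(0, 2, (4, 5)).float()
loss = attribute_assisted_loss(class_logits, attr_logits,
                               class_targets, attr_targets)
```

The attribute term pushes the shared features to also separate attribute values, which is the "attribute-level separability" the summary describes.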
arXiv Detail & Related papers (2021-05-01T13:19:02Z)
- Resolution-Based Distillation for Efficient Histology Image Classification [29.603903713682275]
This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification.
The proposed approach is robust when used with images that have reduced input resolution and can be trained effectively with limited labeled data.
We evaluate our approach on two histology image datasets associated with celiac disease (CD) and lung adenocarcinoma (LUAD).
arXiv Detail & Related papers (2021-01-11T20:00:35Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.