HiCD: Change Detection in Quality-Varied Images via Hierarchical
Correlation Distillation
- URL: http://arxiv.org/abs/2401.10752v1
- Date: Fri, 19 Jan 2024 15:21:51 GMT
- Authors: Chao Pang, Xingxing Weng, Jiang Wu, Qiang Wang, and Gui-Song Xia
- Abstract summary: We introduce an innovative training strategy grounded in knowledge distillation.
The core idea revolves around leveraging task knowledge acquired from high-quality image pairs to guide the model's learning.
We develop a hierarchical correlation distillation approach (involving self-correlation, cross-correlation, and global correlation).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advanced change detection techniques primarily target image pairs of equal
and high quality. However, variations in imaging conditions and platforms
frequently lead to image pairs of distinct quality: one image is high-quality
while the other is low-quality. These disparities in image
quality present significant challenges for understanding image pairs
semantically and extracting change features, ultimately resulting in a notable
decline in performance. To tackle this challenge, we introduce an innovative
training strategy grounded in knowledge distillation. The core idea revolves
around leveraging task knowledge acquired from high-quality image pairs to
guide the model's learning process when dealing with image pairs that exhibit
differences in quality. Additionally, we develop a hierarchical correlation
distillation approach (involving self-correlation, cross-correlation, and
global correlation). This approach compels the student model to replicate the
correlations inherent in the teacher model, rather than focusing solely on
individual features. This ensures effective knowledge transfer while
maintaining the student model's training flexibility.
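The abstract does not give the exact correlation formulations, so the following is only a minimal NumPy sketch of the general idea: the student is trained to reproduce the teacher's correlation structure (self-correlation within one image's features, cross-correlation between the two temporal images, and a batch-level global correlation) rather than its raw feature values. All function names, shapes, and pooling choices here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def _norm(f, axis=-1):
    # L2-normalize feature vectors along the channel axis
    return f / (np.linalg.norm(f, axis=axis, keepdims=True) + 1e-8)

def self_correlation(feat):
    # (N, C) -> (N, N): pairwise similarity among the spatial positions
    # of ONE image's feature map
    f = _norm(feat)
    return f @ f.T

def cross_correlation(feat_t1, feat_t2):
    # (N, C) x (N, C) -> (N, N): similarity between positions of the
    # pre-change and post-change images
    return _norm(feat_t1) @ _norm(feat_t2).T

def global_correlation(batch_feats):
    # (B, N, C) -> (B, B): similarity among globally pooled image
    # descriptors across the whole batch
    g = _norm(batch_feats.mean(axis=1))
    return g @ g.T

def correlation_distillation_loss(teacher, student):
    # teacher/student: tuples of (B, N, C) features for times t1 and t2.
    # The student is pushed to match the teacher's correlation structure
    # rather than its raw feature values, leaving its own features free.
    (t1, t2), (s1, s2) = teacher, student
    loss = 0.0
    for b in range(t1.shape[0]):
        loss += np.mean((self_correlation(t1[b]) - self_correlation(s1[b])) ** 2)
        loss += np.mean((self_correlation(t2[b]) - self_correlation(s2[b])) ** 2)
        loss += np.mean((cross_correlation(t1[b], t2[b])
                         - cross_correlation(s1[b], s2[b])) ** 2)
    loss += np.mean((global_correlation(t1) - global_correlation(s1)) ** 2)
    return loss / t1.shape[0]
```

Because the loss compares normalized similarity matrices, a student that matches the teacher's correlations exactly incurs zero loss even if its feature values differ by a rotation or scale, which is the flexibility the abstract refers to.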
Related papers
- Scale Contrastive Learning with Selective Attentions for Blind Image Quality Assessment [15.235786583920062]
Blind image quality assessment (BIQA) serves as a fundamental task in computer vision, yet it often fails to consistently align with human subjective perception.
Recent advances show that multi-scale evaluation strategies are promising due to their ability to replicate the hierarchical structure of human vision.
This paper addresses two primary challenges: the significant redundancy of information across different scales, and the confusion caused by combining features from these scales.
arXiv Detail & Related papers (2024-11-13T20:17:30Z)
- Local Manifold Learning for No-Reference Image Quality Assessment [68.9577503732292]
We propose an innovative framework that integrates local manifold learning with contrastive learning for No-Reference Image Quality Assessment (NR-IQA).
Our approach demonstrates better performance than state-of-the-art methods on 7 standard datasets.
arXiv Detail & Related papers (2024-06-27T15:14:23Z)
- Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination [13.238373528922194]
We propose a novel technique for post-processing Consistency-based generated images, enhancing their perceptual quality.
Our approach utilizes a joint classifier-discriminator model, in which both portions are trained adversarially.
By employing example-specific projected gradient steps under the guidance of this joint machine, we refine synthesized images and achieve improved FID scores on the ImageNet 64x64 dataset.
arXiv Detail & Related papers (2024-05-25T14:53:52Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Counterfactual Image Editing [54.21104691749547]
Counterfactual image editing is an important task in generative AI, which asks how an image would look if certain features were different.
We formalize the counterfactual image editing task using formal language, modeling the causal relationships between latent generative factors and images.
We develop an efficient algorithm to generate counterfactual images by leveraging neural causal models.
arXiv Detail & Related papers (2024-02-07T20:55:39Z)
- QGFace: Quality-Guided Joint Training For Mixed-Quality Face Recognition [2.8519768339207356]
We propose a novel quality-guided joint training approach for mixed-quality face recognition.
Based on the quality partition, a classification-based method is employed for learning from HQ data.
LQ images, which lack reliable identity information, are learned with self-supervised image-image contrastive learning.
arXiv Detail & Related papers (2023-12-29T06:56:22Z)
- ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
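The self-supervised image-image contrastive learning mentioned for QGFace above is typically an InfoNCE-style objective: embeddings of two augmented views of the same LQ image are pulled together while other images in the batch act as negatives. Below is a minimal NumPy sketch under that assumption; the `info_nce` name, signature, and temperature value are illustrative, not the paper's actual implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # anchors[i] and positives[i]: embeddings of two augmented views of
    # the same LQ image; all other rows of `positives` serve as negatives
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy against the diagonal: the i-th anchor should match
    # the i-th positive
    return -np.mean(np.diag(log_prob))
```

The loss is low when each anchor is most similar to its own positive and high when matched pairs are no more similar than random ones, so minimizing it yields identity-discriminative embeddings without labels.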
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.