SemiCD-VL: Visual-Language Model Guidance Makes Better Semi-supervised Change Detector
- URL: http://arxiv.org/abs/2405.04788v4
- Date: Sun, 20 Oct 2024 15:08:01 GMT
- Title: SemiCD-VL: Visual-Language Model Guidance Makes Better Semi-supervised Change Detector
- Authors: Kaiyu Li, Xiangyong Cao, Yupeng Deng, Jiayi Song, Junmin Liu, Deyu Meng, Zhi Wang
- Abstract summary: Change Detection (CD) aims to identify pixels with semantic changes between images.
We propose a VLM guidance-based semi-supervised CD method, namely SemiCD-VL.
In this paper, we propose a VLM-based mixed change event generation (CEG) strategy to yield pseudo labels for unlabeled CD data.
- Score: 43.199838967666714
- License:
- Abstract: Change Detection (CD) aims to identify pixels with semantic changes between images. However, annotating massive numbers of pixel-level images is labor-intensive and costly, especially for multi-temporal images, which require pixel-wise comparisons by human experts. Considering the excellent zero-shot and open-vocabulary performance of visual language models (VLMs) with prompt-based reasoning, it is promising to utilize VLMs to make better CD under limited labeled data. In this paper, we propose a VLM guidance-based semi-supervised CD method, namely SemiCD-VL. The insight of SemiCD-VL is to synthesize free change labels using VLMs to provide additional supervision signals for unlabeled data. However, almost all current VLMs are designed for single-temporal images and cannot be directly applied to bi- or multi-temporal images. Motivated by this, we first propose a VLM-based mixed change event generation (CEG) strategy to yield pseudo labels for unlabeled CD data. Since the additional supervision signals provided by these VLM-driven pseudo labels may conflict with the pseudo labels from the consistency regularization paradigm (e.g., FixMatch), we propose a dual projection head for de-entangling the different signal sources. Further, we explicitly decouple the semantic representations of the bi-temporal images through two auxiliary segmentation decoders, which are also guided by the VLM. Finally, to make the model capture change representations more adequately, we introduce metric-aware supervision via a feature-level contrastive loss in the auxiliary branches. Extensive experiments show the advantage of SemiCD-VL. For instance, SemiCD-VL improves the FixMatch baseline by +5.3 IoU on WHU-CD and by +2.4 IoU on LEVIR-CD with 5% labels. In addition, our CEG strategy, in an unsupervised manner, achieves performance far superior to state-of-the-art unsupervised CD methods.
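The abstract outlines three mechanisms: VLM-based change event generation (CEG), a dual projection head, and feature-level contrastive supervision. The sketches below are minimal PyTorch illustrations of these ideas as described in the abstract, not the authors' implementation; all function, class, and argument names (`ceg_pseudo_label`, `DualProjectionHead`, `change_contrastive_loss`, `ignore_index`, `margin`) are assumptions introduced here for illustration.

A minimal sketch of the CEG idea, assuming a single-temporal VLM segmenter is run on each image independently and pixels whose predicted class differs are marked as change:

```python
import torch

def ceg_pseudo_label(class_map_t1: torch.Tensor,
                     class_map_t2: torch.Tensor,
                     ignore_index: int = 255) -> torch.Tensor:
    """Derive a binary change pseudo label from two single-temporal class maps.

    class_map_t1 / class_map_t2: (H, W) integer class-id maps produced
    independently by a VLM segmenter on each temporal image.
    Returns (H, W) with 1 = change, 0 = no change, and ignore_index wherever
    either single-temporal prediction is itself marked unreliable.
    """
    change = (class_map_t1 != class_map_t2).long()
    invalid = (class_map_t1 == ignore_index) | (class_map_t2 == ignore_index)
    change[invalid] = ignore_index
    return change
```

A sketch of the dual projection head idea: two parallel heads over a shared change feature, one supervised by the consistency-regularization (FixMatch-style) pseudo labels and one by the VLM-driven CEG pseudo labels, so the two signal sources do not collide in a single head. Channel sizes are placeholders.

```python
import torch.nn as nn

class DualProjectionHead(nn.Module):
    """Two prediction heads over one shared feature, one per pseudo-label source."""

    def __init__(self, in_channels: int = 256, num_classes: int = 2):
        super().__init__()
        self.consistency_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.vlm_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feat):
        # Each head is trained against its own supervision signal.
        return self.consistency_head(feat), self.vlm_head(feat)
```

A sketch of the metric-aware, feature-level contrastive supervision: bi-temporal features of unchanged pixels are pulled together while those of changed pixels are pushed apart up to a margin. The hinge/margin form is an assumption; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def change_contrastive_loss(feat_t1, feat_t2, change_mask, margin: float = 2.0):
    """feat_t1, feat_t2: (B, C, H, W) auxiliary-branch features.
    change_mask: (B, H, W) with 1 = changed pixel, 0 = unchanged."""
    dist = torch.norm(feat_t1 - feat_t2, dim=1)                # (B, H, W) distances
    pull = (1 - change_mask.float()) * dist.pow(2)             # unchanged: shrink distance
    push = change_mask.float() * F.relu(margin - dist).pow(2)  # changed: enforce margin
    return (pull + push).mean()
```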
Related papers
- AVG-LLaVA: A Large Multimodal Model with Adaptive Visual Granularity [85.44800864697464]
We introduce AVG-LLaVA, an LMM that can adaptively select the appropriate visual granularity based on the input image and instruction.
We show that AVG-LLaVA achieves superior performance across 11 benchmarks while significantly reducing the number of visual tokens and speeding up inference.
arXiv Detail & Related papers (2024-09-20T10:50:21Z)
- SSLChange: A Self-supervised Change Detection Framework Based on Domain Adaptation [13.186214312979912]
SSLChange is a self-supervised contrastive framework for change detection.
It accomplishes self-supervised learning using only single-temporal samples.
It can be flexibly transferred to mainstream CD baselines.
arXiv Detail & Related papers (2024-05-28T14:34:51Z)
- Pixel-Level Change Detection Pseudo-Label Learning for Remote Sensing Change Captioning [28.3763053922823]
Methods for Remote Sensing Image Change Captioning (RSICC) perform well in simple scenes but exhibit poorer performance in complex scenes.
We believe pixel-level CD is significant for describing the differences between images through language.
Our method achieves state-of-the-art performance and validates that learning pixel-level CD pseudo-labels significantly contributes to change captioning.
arXiv Detail & Related papers (2023-12-23T17:58:48Z)
- TransY-Net: Learning Fully Transformer Networks for Change Detection of Remote Sensing Images [64.63004710817239]
We propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD.
It improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner.
Our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks.
arXiv Detail & Related papers (2023-10-22T07:42:19Z)
- UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation [20.131754484570454]
Change detection (CD) by comparing two bi-temporal images is a crucial task in remote sensing.
We formulate a change detection setting with domain shift for remote sensing images.
We present a novel unsupervised CD method using a light-weight transformer, called UCDFormer.
arXiv Detail & Related papers (2023-08-02T13:39:08Z)
- Exploring Effective Priors and Efficient Models for Weakly-Supervised Change Detection [9.229278131265124]
Weakly-supervised change detection (WSCD) aims to detect pixel-level changes with only image-level annotations.
We propose two components: a Dilated Prior (DP) decoder and a Label Gated (LG) constraint.
Our proposed TransWCD and TransWCD-DL achieve significant +6.33% and +9.55% F1 score improvements over the state-of-the-art methods on the WHU-CD dataset.
arXiv Detail & Related papers (2023-07-20T13:16:10Z)
- Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images [60.89777029184023]
We propose a semi-supervised CD model in which we formulate an unsupervised CD loss in addition to the supervised Cross-Entropy (CE) loss.
Experiments conducted on two publicly available CD datasets show that the proposed semi-supervised CD method can approach the performance of supervised CD.
arXiv Detail & Related papers (2022-04-18T17:59:01Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Semantics-Guided Clustering with Deep Progressive Learning for Semi-Supervised Person Re-identification [58.01834972099855]
Person re-identification (re-ID) requires one to match images of the same person across camera views.
We propose a novel framework of Semantics-Guided Clustering with Deep Progressive Learning (SGC-DPL) to jointly exploit the above data.
Our approach is able to augment the labeled training data in the semi-supervised setting.
arXiv Detail & Related papers (2020-10-02T18:02:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.