AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on Pseudo-Label Evaluation
        - URL: http://arxiv.org/abs/2411.07758v1
 - Date: Tue, 12 Nov 2024 12:35:34 GMT
 - Title: AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on Pseudo-Label Evaluation
 - Authors: Ran Lingyan, Wen Dongcheng, Zhuo Tao, Zhang Shizhou, Zhang Xiuwei, Zhang Yanning
 - Abstract summary: We present an adaptive dynamic semi-supervised learning method, AdaSemiCD, to improve the use of pseudo-labels and optimize the training process.
 Experimental results on the LEVIR-CD, WHU-CD, and CDD datasets validate the efficacy and universality of our proposed adaptive training framework.
 - Score: 0.0
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Change Detection (CD) is an essential field in remote sensing, with a primary focus on identifying areas of change in bi-temporal image pairs captured by a satellite over the same region at different times. The data annotation process for the CD task is both time-consuming and labor-intensive. To make better use of the scarce labeled data and abundant unlabeled data, we present an adaptive dynamic semi-supervised learning method, AdaSemiCD, to improve the use of pseudo-labels and optimize the training process. First, owing to the extreme class imbalance inherent in CD, the model tends to focus on the background class and easily confuses the boundaries of target objects. Considering these two points, we develop a measurable evaluation metric for pseudo-labels that enhances the information-entropy representation through class rebalancing and amplification of confusing areas, giving larger weight to foreground change objects. Subsequently, to enhance the reliability of sample-wise pseudo-labels, we introduce the AdaFusion module, which is capable of dynamically identifying the most uncertain region and substituting it with more trustworthy content. Lastly, to ensure better training stability, we introduce the AdaEMA module, which updates the teacher model using only batches of trusted samples. Experimental results on the LEVIR-CD, WHU-CD, and CDD datasets validate the efficacy and universality of our proposed adaptive training framework.
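 The abstract gives no formulas, so the following is a minimal, hypothetical PyTorch sketch of two of the mechanisms it describes: a class-rebalanced, confusion-amplified entropy score for ranking pseudo-labels, and a trust-gated EMA update of the teacher (the AdaEMA idea). All function names, weights, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the ideas described in the abstract; the exact
# formulation, weights, and thresholds used by AdaSemiCD are assumptions here.
import torch


def pseudo_label_quality(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Score teacher pseudo-labels per sample (lower = more trustworthy).

    logits: (B, 2, H, W) change/no-change predictions from the teacher model.
    """
    probs = torch.softmax(logits, dim=1)                      # (B, 2, H, W)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)    # (B, H, W)

    # Class rebalancing: up-weight the rare foreground (change) class so that
    # confident background pixels do not dominate the score (weight assumed).
    fg_mask = (probs[:, 1] > 0.5).float()
    class_weight = 1.0 + 9.0 * fg_mask

    # Amplify confusing areas: pixels whose prediction is close to uniform
    # (confidence cutoff assumed).
    confusing = (probs.max(dim=1).values < 0.75).float()
    amplification = 1.0 + confusing

    weighted = entropy * class_weight * amplification
    return weighted.flatten(1).mean(dim=1)                    # (B,)


@torch.no_grad()
def ada_ema_update(teacher, student, batch_score: torch.Tensor,
                   threshold: float = 0.5, momentum: float = 0.99) -> None:
    """Update the teacher by EMA only when the batch's pseudo-labels are trusted."""
    if batch_score.mean() > threshold:      # untrusted batch: skip this update
        return
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)
```

 In a training loop, such a score could also drive an AdaFusion-style step, e.g. replacing the highest-scoring (most uncertain) region of a pseudo-label with more trustworthy content before computing the unsupervised loss, in the spirit of the module described above.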
 
       
      
        Related papers
        - TrajSSL: Trajectory-Enhanced Semi-Supervised 3D Object Detection [59.498894868956306]
Pseudo-labeling approaches to semi-supervised learning adopt a teacher-student framework.
We leverage pre-trained motion-forecasting models to generate object trajectories on pseudo-labeled data.
Our approach improves pseudo-label quality in two distinct manners.
arXiv  Detail & Related papers  (2024-09-17T05:35:00Z)
- Exploring Test-Time Adaptation for Object Detection in Continually Changing Environments [20.307151769610087]
Continual Test-Time Adaptation (CTTA) has emerged as a promising technique to gradually adapt a source-trained model to continually changing target domains.
We present AMROD, featuring three core components, to tackle these challenges for detection models in CTTA scenarios.
We demonstrate the effectiveness of AMROD on four CTTA object detection tasks, where AMROD outperforms existing methods.
arXiv  Detail & Related papers  (2024-06-24T08:30:03Z)
- Adaptive Bidirectional Displacement for Semi-Supervised Medical Image Segmentation [11.195959019678314]
Consistency learning is a central strategy to tackle unlabeled data in semi-supervised medical image segmentation.
In this paper, we propose an Adaptive Bidirectional Displacement approach to solve the above challenge.
arXiv  Detail & Related papers  (2024-05-01T08:17:43Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranks first.
arXiv  Detail & Related papers  (2023-08-31T05:05:53Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv  Detail & Related papers  (2023-08-03T07:45:53Z)
- Self Correspondence Distillation for End-to-End Weakly-Supervised Semantic Segmentation [13.623713806739271]
We propose a novel Self Correspondence Distillation (SCD) method to refine pseudo-labels without introducing external supervision.
In addition, we design a Variation-aware Refine Module to enhance the local consistency of pseudo-labels.
Our method significantly outperforms other state-of-the-art methods.
arXiv  Detail & Related papers  (2023-02-27T13:46:40Z)
- Predicting Class Distribution Shift for Reliable Domain Adaptive Object Detection [2.5193191501662144]
Unsupervised Domain Adaptive Object Detection (UDA-OD) uses unlabelled data to improve the reliability of robotic vision systems in open-world environments.
Previous approaches to UDA-OD based on self-training have been effective in overcoming changes in the general appearance of images.
We propose a framework for explicitly addressing class distribution shift to improve pseudo-label reliability in self-training.
arXiv  Detail & Related papers  (2023-02-13T00:46:34Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv  Detail & Related papers  (2021-04-19T08:46:08Z)
- Adaptive Consistency Regularization for Semi-Supervised Transfer Learning [31.66745229673066]
We consider semi-supervised learning and transfer learning jointly, leading to a more practical and competitive paradigm.
To better exploit the value of both pre-trained weights and unlabeled target examples, we introduce adaptive consistency regularization.
Our proposed adaptive consistency regularization outperforms state-of-the-art semi-supervised learning techniques such as Pseudo Label, Mean Teacher, and MixMatch.
arXiv  Detail & Related papers  (2021-03-03T05:46:39Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv  Detail & Related papers  (2020-12-07T03:37:38Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method significantly improves performance compared with the supervised baseline trained on labeled data only.
arXiv  Detail & Related papers  (2020-07-21T13:27:09Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     