Unsupervised Self-training Algorithm Based on Deep Learning for Optical
Aerial Images Change Detection
- URL: http://arxiv.org/abs/2010.07469v2
- Date: Thu, 22 Oct 2020 07:28:12 GMT
- Authors: Yuan Zhou, Xiangrui Li
- Abstract summary: We present a novel unsupervised self-training algorithm (USTA) for optical aerial images change detection.
The whole algorithm is unsupervised and requires no manually annotated labels.
Experimental results on real datasets demonstrate the competitive performance of the proposed method.
- Score: 17.232244800511523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical aerial images change detection is an important task in earth
observation and has been extensively investigated in the past few decades.
Generally, supervised change detection methods with superior performance
require a large amount of labeled training data, which is obtained by costly
manual annotation. In this paper, we present a novel unsupervised
self-training algorithm (USTA) for optical aerial images change detection. A
traditional method, change vector analysis, is used to generate the initial
pseudo labels. We use these pseudo labels to train a well-designed
convolutional neural network. The network is then used as a teacher to
classify the original multitemporal images, generating another set of pseudo
labels. The two sets of pseudo labels are used to jointly train a student
network with the same structure as the teacher. The final change detection
result is obtained from the trained student network. In addition, we design
an image filter to control the usage of change information from the pseudo
labels during network training. The whole algorithm is unsupervised and
requires no manually annotated labels. Experimental results on real datasets
demonstrate the competitive performance of the proposed method.
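The initial pseudo labels come from change vector analysis (CVA). As a rough illustration only (not the paper's exact implementation; the percentile thresholding and all names here are our assumptions), changed pixels can be flagged by the magnitude of the per-pixel difference vector between the two co-registered images:

```python
import numpy as np

def cva_pseudo_labels(img_t1, img_t2, percentile=90):
    """Generate change/no-change pseudo labels via change vector analysis.

    img_t1, img_t2: (H, W, C) arrays of co-registered multitemporal images.
    Pixels whose change-vector magnitude exceeds a threshold (here a simple
    percentile cut, standing in for whatever thresholding the paper uses)
    are marked as changed (1); the rest are unchanged (0).
    """
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    magnitude = np.linalg.norm(diff, axis=-1)  # change-vector length per pixel
    threshold = np.percentile(magnitude, percentile)
    return (magnitude > threshold).astype(np.uint8)
```

The resulting binary map is what the teacher network is trained against in the first stage.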
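The teacher-student self-training loop described in the abstract can be sketched end to end. The toy below is our illustration, not the authors' code: the "network" is a plain logistic regression over random per-pixel feature vectors, and agreement between the two pseudo-label sets stands in for the paper's image filter. It only shows the order of operations: CVA-style pseudo labels train a teacher, the teacher relabels the data to produce a second pseudo-label set, and the student is trained on the filtered combination of both:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_classifier(features, labels, steps=200, lr=0.1):
    """Fit a logistic-regression 'network' by gradient descent (toy stand-in)."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        logits = features @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - labels
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    return ((features @ w + b) > 0).astype(np.uint8)

# Step 1: CVA-style pseudo labels from the difference magnitude (toy features).
features = rng.normal(size=(500, 4))
cva_labels = (np.linalg.norm(features, axis=1) > 2.0).astype(np.uint8)

# Step 2: train the teacher network on the CVA pseudo labels.
w_t, b_t = train_classifier(features, cva_labels)

# Step 3: the teacher relabels the data, giving a second pseudo-label set.
teacher_labels = predict(features, w_t, b_t)

# Step 4: train the student on both pseudo-label sets. A simple agreement
# mask stands in for the paper's image filter, limiting how much noisy
# change information from the pseudo labels reaches the student.
mask = cva_labels == teacher_labels
w_s, b_s = train_classifier(features[mask], cva_labels[mask])

change_map = predict(features, w_s, b_s)  # final change detection result
```

In the paper the teacher and student are convolutional networks with identical architecture operating on image patches; the linear model here only mirrors the data flow.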
Related papers
- Transductive Learning for Near-Duplicate Image Detection in Scanned Photo Collections [0.0]
This paper presents a comparative study of near-duplicate image detection techniques in a real-world use case scenario.
We propose a transductive learning approach that leverages state-of-the-art deep learning architectures such as convolutional neural networks (CNNs) and Vision Transformers (ViTs)
The results show that the proposed approach outperforms the baseline methods at near-duplicate image detection on the UKBench dataset and an in-house private dataset.
arXiv Detail & Related papers (2024-10-25T09:56:15Z) - Improving Model Training via Self-learned Label Representations [5.969349640156469]
We show that more sophisticated label representations are better for classification than the usual one-hot encoding.
We propose Learning with Adaptive Labels (LwAL) algorithm, which simultaneously learns the label representation while training for the classification task.
Our algorithm introduces negligible additional parameters and has a minimal computational overhead.
arXiv Detail & Related papers (2022-09-09T21:10:43Z) - Seamless Iterative Semi-Supervised Correction of Imperfect Labels in
Microscopy Images [57.42492501915773]
In-vitro tests are an alternative to animal testing for the toxicity of medical devices.
Human fatigue plays a role in error making, making the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI)
Our method successfully provides an adaptive early learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z) - Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z) - When Deep Learners Change Their Mind: Learning Dynamics for Active
Learning [32.792098711779424]
In this paper, we propose a new informativeness-based active learning method.
Our measure is derived from the learning dynamics of a neural network.
We show that label-dispersion is a promising predictor of the uncertainty of the network.
arXiv Detail & Related papers (2021-07-30T15:30:17Z) - Data Augmentation for Object Detection via Differentiable Neural
Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z) - Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z) - A Weakly Supervised Convolutional Network for Change Segmentation and
Classification [91.3755431537592]
We present W-CDNet, a novel weakly supervised change detection network that can be trained with image-level semantic labels.
W-CDNet can be trained with two different types of datasets, either containing changed image pairs only or a mixture of changed and unchanged image pairs.
arXiv Detail & Related papers (2020-11-06T20:20:45Z) - Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z) - Graph Neural Networks for Unsupervised Domain Adaptation of
Histopathological Image Analytics [22.04114134677181]
We present a novel method for the unsupervised domain adaptation for histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - What Do Neural Networks Learn When Trained With Random Labels? [20.54410239839646]
We study deep neural networks (DNNs) trained on natural image data with entirely random labels.
We show analytically for convolutional and fully connected networks that an alignment between the principal components of network parameters and data takes place when training with random labels.
We show how this alignment produces a positive transfer: networks pre-trained with random labels train faster downstream compared to training from scratch.
arXiv Detail & Related papers (2020-06-18T12:07:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.