Trusting small training dataset for supervised change detection
- URL: http://arxiv.org/abs/2104.05443v1
- Date: Fri, 9 Apr 2021 10:57:03 GMT
- Title: Trusting small training dataset for supervised change detection
- Authors: Sudipan Saha, Biplab Banerjee, Xiao Xiang Zhu
- Abstract summary: We show that a geographically diverse training dataset can yield significant improvements over less diverse training datasets of the same size.
We propose a simple confidence indicator for verifying the trustworthiness/confidence of supervised models trained with a small labeled dataset.
- Score: 28.135066372665957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) based supervised change detection (CD) models require
large labeled training data. Due to the difficulty of collecting labeled
multi-temporal data, unsupervised methods are preferred in the CD literature.
However, unsupervised methods cannot fully exploit the potential of
data-driven deep learning and thus are not an absolute alternative to
supervised methods. This motivates us to look deeper into supervised DL
methods and investigate how they can be adopted intelligently for CD while
minimizing the requirement for labeled training data. Towards this, in this work
we show that a geographically diverse training dataset can yield significant
improvements over less diverse training datasets of the same size. We propose a
simple confidence indicator for verifying the trustworthiness/confidence of
supervised models trained with a small labeled dataset. Moreover, we show that
for test cases where the supervised CD model is found to be less
confident/trustworthy, unsupervised methods often produce better results than
the supervised ones.
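The abstract does not spell out the confidence indicator itself. Purely as an illustration, the sketch below uses one generic stand-in: mean per-pixel predictive entropy over the output change map, assuming a supervised CD model that emits per-pixel change probabilities. The function names and the 0.5-bit threshold are hypothetical, not the paper's actual indicator.

```python
import numpy as np

def mean_predictive_entropy(prob_map, eps=1e-12):
    """Mean binary entropy (in bits) of per-pixel change probabilities.

    Low entropy means the model commits to 0/1 decisions; high entropy
    means predictions hover near 0.5 and should be trusted less.
    """
    p = np.clip(np.asarray(prob_map, dtype=float), eps, 1.0 - eps)
    entropy = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return float(entropy.mean())

def trust_supervised(prob_map, threshold=0.5):
    """Decide whether to keep the supervised output or fall back.

    The 0.5-bit threshold is an arbitrary illustrative choice.
    """
    return mean_predictive_entropy(prob_map) < threshold
```

A change map with probabilities near 0 or 1 yields low entropy, so the supervised output is kept; a map hovering near 0.5 yields high entropy, which, under the paper's finding, would argue for falling back to an unsupervised method.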
Related papers
- Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models [7.742594744641462]
Machine unlearning aims to remove information derived from forgotten data while preserving that of the remaining dataset in a well-trained model.
We propose a supervision-free unlearning approach that operates without the need for labels during the unlearning process.
arXiv Detail & Related papers (2024-03-31T00:29:00Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Unsupervised Clustering Active Learning for Person Re-identification [5.705895028045853]
Unsupervised re-id methods rely on unlabeled data to train models.
We present an Unsupervised Clustering Active Learning (UCAL) re-id deep learning approach.
It is capable of incrementally discovering the representative centroid-pairs.
arXiv Detail & Related papers (2021-12-26T02:54:35Z)
- Unsupervised Finetuning [80.58625921631506]
We propose two strategies for combining source and target data in unsupervised finetuning.
The motivation of the former strategy is to add a small portion of source data back to occupy their pretrained representation space.
The motivation of the latter strategy is to increase the data density and help learn more compact representation.
arXiv Detail & Related papers (2021-10-18T17:57:05Z)
- Self Training with Ensemble of Teacher Models [8.257085583227695]
In order to train robust deep learning models, large amounts of labelled data are required.
In the absence of such large repositories of labelled data, unlabelled data can be exploited instead.
Semi-Supervised learning aims to utilize such unlabeled data for training classification models.
arXiv Detail & Related papers (2021-07-17T09:44:09Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z)
- Unsupervised Noisy Tracklet Person Re-identification [100.85530419892333]
We present a novel selective tracklet learning (STL) approach that can train discriminative person re-id models from unlabelled tracklet data.
This avoids the tedious and costly process of exhaustively labelling person image/tracklet true matching pairs across camera views.
Our method is particularly robust against arbitrarily noisy raw tracklet data and is therefore scalable to learning discriminative models from unconstrained tracking data.
arXiv Detail & Related papers (2021-01-16T07:31:00Z)
- Robust Disentanglement of a Few Factors at a Time [5.156484100374058]
We introduce population-based training (PBT) for improving consistency in training variational autoencoders (VAEs).
We then use Unsupervised Disentanglement Ranking (UDR) as an unsupervised metric to score models in our PBT-VAE training, and show that models trained this way tend to consistently disentangle only a subset of the generative factors.
We show striking improvement in state-of-the-art unsupervised disentanglement performance and robustness across multiple datasets and metrics.
arXiv Detail & Related papers (2020-10-26T12:34:23Z)
- Semi-supervised and Unsupervised Methods for Heart Sounds Classification in Restricted Data Environments [4.712158833534046]
This study uses various supervised, semi-supervised and unsupervised approaches on the PhysioNet/CinC 2016 Challenge dataset.
A GAN based semi-supervised method is proposed, which allows the usage of unlabelled data samples to boost the learning of data distribution.
In particular, the unsupervised feature extraction using 1D CNN Autoencoder coupled with one-class SVM obtains good performance without any data labelling.
arXiv Detail & Related papers (2020-06-04T02:07:35Z)
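The heart-sounds entry above describes fitting an unsupervised feature extractor on unlabelled data and then applying a one-class decision rule. The sketch below is only a rough stand-in for that pipeline: it replaces the 1D CNN autoencoder with a linear PCA "autoencoder" and the one-class SVM with a percentile threshold on reconstruction error, since the core idea (learn a compact representation from unlabelled samples, then flag poorly reconstructed ones) carries over. All names, shapes, and thresholds here are illustrative assumptions.

```python
import numpy as np

def fit_pca(x, k):
    # Center the data and keep the top-k principal directions
    # (a linear stand-in for an autoencoder's bottleneck).
    mu = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(x, mu, components):
    # Project onto the learned subspace, reconstruct, measure the residual.
    z = (x - mu) @ components.T
    recon = z @ components + mu
    return np.linalg.norm(x - recon, axis=1)

# Fit on unlabelled "normal" recordings; flag samples whose residual
# exceeds a percentile threshold, mimicking a one-class decision rule.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 32))  # low-rank toy data
mu, comps = fit_pca(normal, k=8)
threshold = np.percentile(reconstruction_error(normal, mu, comps), 95)

outlier = rng.normal(size=(1, 32)) * 5.0  # does not lie in the subspace
is_anomaly = reconstruction_error(outlier, mu, comps)[0] > threshold
```

Because the toy "normal" data is rank 8 by construction, the 8-component PCA reconstructs it almost exactly, so anything off that subspace stands out; a real pipeline would train the 1D CNN autoencoder on heart-sound segments and fit the one-class SVM on its latent codes instead.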
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.