Deep Active Learning in Remote Sensing for data efficient Change
Detection
- URL: http://arxiv.org/abs/2008.11201v1
- Date: Tue, 25 Aug 2020 17:58:17 GMT
- Title: Deep Active Learning in Remote Sensing for data efficient Change
Detection
- Authors: V\'it R\r{u}\v{z}i\v{c}ka, Stefano D'Aronco, Jan Dirk Wegner, Konrad
Schindler
- Abstract summary: We investigate active learning in the context of deep neural network models for change detection and map updating.
In active learning, one starts from a minimal set of training examples and progressively chooses informative samples annotated by a user.
We show that active learning successfully finds highly informative samples and automatically balances the training distribution.
- Score: 26.136331738529243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate active learning in the context of deep neural network models
for change detection and map updating. Active learning is a natural choice for
a number of remote sensing tasks, including the detection of local surface
changes: changes are on the one hand rare and on the other hand their
appearance is varied and diffuse, making it hard to collect a representative
training set in advance. In the active learning setting, one starts from a
minimal set of training examples and progressively chooses informative samples
that are annotated by a user and added to the training set. Hence, a core
component of an active learning system is a mechanism to estimate model
uncertainty, which is then used to pick uncertain, informative samples. We
study different mechanisms to capture and quantify this uncertainty when
working with deep networks, based on the variance or entropy across explicit or
implicit model ensembles. We show that active learning successfully finds
highly informative samples and automatically balances the training
distribution, and reaches the same performance as a model supervised with a
large, pre-annotated training set, with $\approx$99% fewer annotated samples.
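To make the uncertainty-driven selection concrete, below is a minimal NumPy sketch (not the authors' code) of how entropy and variance scores over an explicit or implicit ensemble, e.g. several MC dropout forward passes, could be computed and used to rank unlabeled samples for annotation. The random probabilities only stand in for real model outputs, and all function names are illustrative.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution over ensemble members.
    probs: (T, N, C) -- T stochastic forward passes (e.g. MC dropout) or
    explicit ensemble members, N samples, C classes."""
    mean_p = probs.mean(axis=0)                              # (N, C)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)    # (N,)

def predictive_variance(probs):
    """Variance of the class probabilities across members, averaged over classes."""
    return probs.var(axis=0).mean(axis=1)                    # (N,)

def select_for_annotation(probs, budget):
    """Return the indices of the `budget` most uncertain unlabeled samples."""
    return np.argsort(-predictive_entropy(probs))[:budget]

rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 1000, 2))   # stand-in: 10 passes, 1000 patches, change / no change
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(select_for_annotation(probs, budget=20))
```

In an active learning loop, the selected indices would be sent to the annotator, added to the training set, and the model retrained before the next acquisition round.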
Related papers
- Gaussian Switch Sampling: A Second Order Approach to Active Learning [11.775252660867285]
In active learning, acquisition functions typically define informativeness directly in terms of a sample's position within the model's representation manifold.
We propose a grounded second-order definition of information content and sample importance within the context of active learning.
We show that our definition produces highly accurate importance scores even when the model representations are constrained by the lack of training data.
arXiv Detail & Related papers (2023-02-16T15:24:56Z) - Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
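As a hedged illustration of mixing diversity, representativity and uncertainty into one acquisition score, the sketch below combines the three criteria with a weight vector w. In the paper the weights are adapted by reinforcement learning at each iteration, which is omitted here; all function names and the normalisation are illustrative, not the authors' implementation.

```python
import numpy as np

def uncertainty(probs):
    """Per-sample entropy of class probabilities, probs: (N, C)."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def diversity(feats, selected):
    """Distance to the closest already-selected sample (larger = more diverse)."""
    if len(selected) == 0:
        return np.ones(len(feats))
    d = np.linalg.norm(feats[:, None, :] - selected[None, :, :], axis=-1)
    return d.min(axis=1)

def representativity(feats):
    """Closeness to the centre of the unlabeled pool (larger = more typical)."""
    return -np.linalg.norm(feats - feats.mean(axis=0), axis=1)

def mixed_score(probs, feats, selected, w):
    """Weighted mixture of the three criteria; the paper adapts w by reinforcement learning."""
    crits = np.stack([uncertainty(probs), diversity(feats, selected), representativity(feats)])
    crits = (crits - crits.min(axis=1, keepdims=True)) / (np.ptp(crits, axis=1, keepdims=True) + 1e-12)
    return w @ crits

rng = np.random.default_rng(0)
probs, feats = rng.dirichlet(np.ones(4), 200), rng.normal(size=(200, 16))
scores = mixed_score(probs, feats, selected=np.empty((0, 16)), w=np.array([0.5, 0.3, 0.2]))
```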
arXiv Detail & Related papers (2022-12-09T14:17:45Z) - Automatic Change-Point Detection in Time Series via Deep Learning [8.43086628139493]
We show how to automatically generate new offline detection methods based on training a neural network.
We present theory that quantifies the error rate for such an approach, and how it depends on the amount of training data.
Our method also shows strong results in detecting and localising changes in activity based on accelerometer data.
arXiv Detail & Related papers (2022-11-07T20:59:14Z) - ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, this ALBench framework is easy-to-use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z) - BatchFormer: Learning to Explore Sample Relationships for Robust
Representation Learning [93.38239238988719]
We propose to equip deep neural networks with the ability to learn sample relationships from each mini-batch.
BatchFormer is applied along the batch dimension of each mini-batch to implicitly explore sample relationships during training.
We perform extensive experiments on over ten datasets and the proposed method achieves significant improvements on different data scarcity applications.
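A hedged PyTorch sketch of the batch-dimension idea described above: a standard transformer encoder layer is applied across the samples of a mini-batch so that each feature vector can attend to the others. Layer sizes and the module name are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BatchRelationModule(nn.Module):
    """Self-attention across the samples of a mini-batch (sketch of the idea)."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)

    def forward(self, feats):              # feats: (batch, dim) from any backbone
        x = feats.unsqueeze(1)             # (batch, 1, dim): batch axis becomes the "sequence" axis
        return self.encoder(x).squeeze(1)  # attention across samples, back to (batch, dim)

feats = torch.randn(32, 256)               # stand-in backbone features
print(BatchRelationModule()(feats).shape)  # torch.Size([32, 256])
```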
arXiv Detail & Related papers (2022-03-03T05:31:33Z) - When Deep Learners Change Their Mind: Learning Dynamics for Active
Learning [32.792098711779424]
In this paper, we propose a new informativeness-based active learning method.
Our measure is derived from the learning dynamics of a neural network.
We show that label-dispersion is a promising predictor of the uncertainty of the network.
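A minimal sketch of a label-dispersion style score, assuming the predicted label of each unlabeled sample is recorded at several checkpoints during training; the exact definition in the paper may differ.

```python
import numpy as np

def label_dispersion(pred_history):
    """pred_history: (E, N) integer array -- the class predicted for each of N
    unlabeled samples at E checkpoints. Returns a score in [0, 1]: 0 if the
    prediction never changed, higher the more often the network changed its
    mind (used here as a proxy for uncertainty)."""
    E, N = pred_history.shape
    scores = np.empty(N)
    for i in range(N):
        counts = np.bincount(pred_history[:, i])
        scores[i] = 1.0 - counts.max() / E
    return scores

rng = np.random.default_rng(0)
history = rng.integers(0, 3, size=(20, 500))          # stand-in: 20 epochs, 500 samples, 3 classes
query = np.argsort(-label_dispersion(history))[:25]   # most "undecided" samples first
```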
arXiv Detail & Related papers (2021-07-30T15:30:17Z) - Active Learning for Sequence Tagging with Deep Pre-trained Models and
Bayesian Uncertainty Estimates [52.164757178369804]
Recent advances in transfer learning for natural language processing in conjunction with active learning open the possibility to significantly reduce the necessary annotation budget.
We conduct an empirical study of various Bayesian uncertainty estimation methods and Monte Carlo dropout options for deep pre-trained models in the active learning framework.
We also demonstrate that to acquire instances during active learning, a full-size Transformer can be substituted with a distilled version, which yields better computational performance.
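As one example of the Bayesian scores such a study can cover, the snippet below estimates the mutual information (BALD) between predictions and model parameters from MC dropout samples. For sequence tagging, the per-token scores would still need to be aggregated over each sequence, which is not shown; the data here is synthetic.

```python
import numpy as np

def bald_score(probs):
    """Mutual information between predictions and model parameters, estimated
    from T stochastic (MC dropout) forward passes. probs: (T, N, C)."""
    mean_p = probs.mean(axis=0)
    entropy_of_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    mean_entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)
    return entropy_of_mean - mean_entropy   # high where the stochastic passes disagree

probs = np.random.dirichlet(np.ones(5), size=(20, 100))   # 20 passes, 100 tokens, 5 tags
scores = bald_score(probs)
```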
arXiv Detail & Related papers (2021-01-20T13:59:25Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of unsupervisedly learned models towards another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant sources.
arXiv Detail & Related papers (2020-09-24T15:40:55Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead of storing past samples, the implicit memory of previously learned samples within the model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
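A hedged PyTorch sketch of one way such a gradient-based auxiliary objective could look: the input-gradient of the model's score at an example is encouraged to point towards its counterfactual. This is an interpretation of the summary above, not necessarily the authors' exact formulation, and the tiny model is only there to make the snippet runnable.

```python
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf):
    """Align the input-gradient at x with the direction towards its counterfactual
    x_cf (the minimally different example with a different label)."""
    x = x.clone().requires_grad_(True)
    score = model(x).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    direction = (x_cf - x).detach()
    cos = F.cosine_similarity(grad.flatten(1), direction.flatten(1), dim=1)
    return (1.0 - cos).mean()   # added as an auxiliary term to the usual task loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, x_cf = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
print(gradient_supervision_loss(model, x, x_cf))
```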
arXiv Detail & Related papers (2020-04-20T02:47:49Z) - Efficient Learning of Model Weights via Changing Features During
Training [0.0]
We propose a machine learning model, which dynamically changes the features during training.
Our main motivation is to update the model incrementally during training by replacing less descriptive features with new ones drawn from a large pool.
arXiv Detail & Related papers (2020-02-21T12:38:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.