A Bayesian-inspired, deep learning-based, semi-supervised domain
adaptation technique for land cover mapping
- URL: http://arxiv.org/abs/2005.11930v2
- Date: Wed, 10 Mar 2021 05:57:44 GMT
- Title: A Bayesian-inspired, deep learning-based, semi-supervised domain
adaptation technique for land cover mapping
- Authors: Benjamin Lucas, Charlotte Pelletier, Daniel Schmidt, Geoffrey I. Webb,
and François Petitjean
- Abstract summary: Sourcerer is a semi-supervised DA technique for producing land cover maps from SITS data.
It takes a convolutional neural network trained on a source domain and then trains further on the available target domain.
We show that Sourcerer outperforms all other methods for any quantity of labelled target data available.
- Score: 4.167265971166947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Land cover maps are a vital input variable to many types of environmental
research and management. While they can be produced automatically by machine
learning techniques, these techniques require substantial training data to
achieve high levels of accuracy, which are not always available. One technique
researchers use when labelled training data are scarce is domain adaptation
(DA) -- where data from an alternate region, known as the source domain, are
used to train a classifier and this model is adapted to map the study region,
or target domain. The scenario we address in this paper is known as
semi-supervised DA, where some labelled samples are available in the target
domain. In this paper we present Sourcerer, a Bayesian-inspired, deep
learning-based, semi-supervised DA technique for producing land cover maps from
SITS data. The technique takes a convolutional neural network trained on a
source domain and then trains further on the available target domain with a
novel regularizer applied to the model weights. The regularizer adjusts the
degree to which the model is modified to fit the target data, limiting the
degree of change when the target data are few in number and increasing it as
target data quantity increases. Our experiments on Sentinel-2 time series
images compare Sourcerer with two state-of-the-art semi-supervised domain
adaptation techniques and four baseline models. We show that on two different
source-target domain pairings Sourcerer outperforms all other methods for any
quantity of labelled target data available. In fact, the results on the more
difficult target domain show that the starting accuracy of Sourcerer (when no
labelled target data are available), 74.2%, is greater than the next-best
state-of-the-art method trained on 20,000 labelled target instances.
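The weight-anchored fine-tuning idea described in the abstract can be sketched as follows. This is a minimal illustration using a toy linear model rather than the paper's convolutional network, and the schedule lam_max / (1 + n) is a hypothetical choice standing in for the paper's own rule: the penalty pulling the weights toward the source solution weakens as the quantity of labelled target data grows.

```python
import numpy as np

def sourcerer_style_finetune(w_source, X_t, y_t, lam_max=100.0, lr=0.01, epochs=200):
    """Fine-tune a linear model on labelled target data, with an L2 penalty
    pulling the weights back toward the source-trained weights.

    The penalty coefficient shrinks as the number of target samples grows:
    with few samples the solution stays close to the source model, and with
    many samples it is free to fit the target data.
    """
    n = len(y_t)
    lam = lam_max / (1.0 + n)  # hypothetical schedule; the paper derives its own
    w = w_source.copy()
    for _ in range(epochs):
        grad_fit = 2.0 * X_t.T @ (X_t @ w - y_t) / n  # mean-squared-error gradient
        grad_reg = 2.0 * lam * (w - w_source)         # pull toward source weights
        w -= lr * (grad_fit + grad_reg)
    return w
```

With only a couple of target samples the returned weights stay near `w_source`; with hundreds of samples the regularizer is weak and the weights move to fit the target data, mirroring the behaviour the abstract describes.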
Related papers
- Robust Source-Free Domain Adaptation for Fundus Image Segmentation [3.585032903685044]
Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled data in the source domain to a target domain where only unlabelled data are available.
In this study, we propose a two-stage training strategy for robust domain adaptation.
We propose a novel robust pseudo-label and pseudo-boundary (PLPB) method, which effectively utilizes unlabeled target data to generate pseudo labels and pseudo boundaries.
arXiv Detail & Related papers (2023-10-25T14:25:18Z) - GaitSADA: Self-Aligned Domain Adaptation for mmWave Gait Recognition [14.750765172614836]
mmWave radar-based gait recognition is a novel user identification method that captures human gait biometrics from mmWave radar return signals.
To mitigate the domain shift problem, a novel self-aligned domain adaptation method called GaitSADA is proposed.
Experiments show that GaitSADA outperforms representative domain adaptation methods with an improvement ranging from 15.41% to 26.32% on average accuracy in low data regimes.
arXiv Detail & Related papers (2023-01-31T03:21:08Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require large amounts of labelled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47%, even matching the accuracy of models trained directly on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - ProCST: Boosting Semantic Segmentation using Progressive Cyclic
Style-Transfer [38.03127458140549]
We propose a novel two-stage framework for improving domain adaptation techniques.
In the first step, we progressively train a multi-scale neural network to perform an initial transfer from the source data to the target data.
This new data has a reduced domain gap from the desired target domain, and the applied UDA approach further closes the gap.
arXiv Detail & Related papers (2022-04-25T18:01:05Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z) - Distill and Fine-tune: Effective Adaptation from a Black-box Source
Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge in previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.