SemI2I: Semantically Consistent Image-to-Image Translation for Domain
Adaptation of Remote Sensing Data
- URL: http://arxiv.org/abs/2002.05925v2
- Date: Fri, 21 Feb 2020 09:21:35 GMT
- Title: SemI2I: Semantically Consistent Image-to-Image Translation for Domain
Adaptation of Remote Sensing Data
- Authors: Onur Tasar, S L Happy, Yuliya Tarabalka, Pierre Alliez
- Abstract summary: We propose a new data augmentation approach that transfers the style of test data to training data using generative adversarial networks.
Our semantic segmentation framework consists of first training a U-net on the real training data and then fine-tuning it on the test-stylized fake training data generated by the proposed approach.
- Score: 7.577893526158495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although convolutional neural networks have been proven to be an effective
tool to generate high quality maps from remote sensing images, their
performance significantly deteriorates when there exists a large domain shift
between training and test data. To address this issue, we propose a new data
augmentation approach that transfers the style of test data to training data
using generative adversarial networks. Our semantic segmentation framework
consists of first training a U-net on the real training data and then
fine-tuning it on the test-stylized fake training data generated by the
proposed approach. Our experimental results prove that our framework
outperforms the existing domain adaptation methods.
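To make the two-stage scheme in the abstract concrete, below is a minimal sketch that assumes the test-stylized fake images have already been produced by the style-transfer GAN; the UNet, RealAerialDataset, and StylizedAerialDataset names are hypothetical placeholders, not the authors' code.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_segmenter(model, loader, epochs, lr, device="cuda"):
    """Generic supervised training loop for per-pixel classification."""
    model.to(device)
    model.train()
    criterion = nn.CrossEntropyLoss()  # expects logits (B, C, H, W) and labels (B, H, W)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 1: train the segmentation network on the real source-domain images.
unet = UNet(in_channels=3, num_classes=2)                        # placeholder model
real_loader = DataLoader(RealAerialDataset(), batch_size=8, shuffle=True)
unet = train_segmenter(unet, real_loader, epochs=50, lr=1e-4)

# Stage 2: fine-tune with a smaller learning rate on the test-stylized fake
# images; the original ground-truth labels are reused because the style
# transfer is expected to preserve the semantics of the source images.
fake_loader = DataLoader(StylizedAerialDataset(), batch_size=8, shuffle=True)
unet = train_segmenter(unet, fake_loader, epochs=10, lr=1e-5)
```
Both stages reuse the original source-domain labels; only the image appearance changes, which is why the semantic consistency of the style transfer matters.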
Related papers
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Supervised Homography Learning with Realistic Dataset Generation [60.934401870005026]
We propose an iterative framework, which consists of two phases: a generation phase and a training phase.
In the generation phase, given an unlabeled image pair, we utilize the pre-estimated dominant plane masks and homography of the pair.
In the training phase, the generated data is used to train the supervised homography network.
arXiv Detail & Related papers (2023-07-28T07:03:18Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- RAIS: Robust and Accurate Interactive Segmentation via Continual Learning [16.382862088005087]
We propose RAIS, a robust and accurate architecture for interactive segmentation with continual learning.
For efficient learning on the test set, we propose a novel optimization strategy to update global and local parameters.
Our method also shows robustness on remote sensing and medical imaging datasets.
arXiv Detail & Related papers (2022-10-20T03:05:44Z)
- Exploring Data Aggregation and Transformations to Generalize across Visual Domains [0.0]
This thesis contributes to research on Domain Generalization (DG), Domain Adaptation (DA) and their variations.
We propose new frameworks for Domain Generalization and Domain Adaptation which make use of feature aggregation strategies and visual transformations.
We show how our proposed solutions outperform competitive state-of-the-art approaches in established DG and DA benchmarks.
arXiv Detail & Related papers (2021-08-20T14:58:14Z)
- Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks [58.0240970093372]
This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data.
The proposed approach achieves cutting-edge results without the need to train the models on real annotated data of human body parts.
arXiv Detail & Related papers (2021-02-02T12:26:50Z)
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images [3.786567767772753]
We propose a segmentation network based on a domain adaptive transfer attack scheme for building extraction from aerial images.
The proposed system combines the domain transfer and adversarial attack concepts.
Cross-dataset experiments and an ablation study are conducted on the three different datasets.
arXiv Detail & Related papers (2020-04-11T06:17:13Z)
- The Utility of Feature Reuse: Transfer Learning in Data-Starved Regimes [6.419457653976053]
We describe a transfer learning use case for a domain with a data-starved regime.
We evaluate the effectiveness of convolutional feature extraction and fine-tuning.
We conclude that transfer learning enhances the performance of CNN architectures in data-starved regimes.
arXiv Detail & Related papers (2020-02-29T18:48:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.