ChessMix: Spatial Context Data Augmentation for Remote Sensing Semantic
Segmentation
- URL: http://arxiv.org/abs/2108.11535v1
- Date: Thu, 26 Aug 2021 01:01:43 GMT
- Title: ChessMix: Spatial Context Data Augmentation for Remote Sensing Semantic
Segmentation
- Authors: Matheus Barros Pereira, Jefersson Alex dos Santos
- Abstract summary: ChessMix creates new synthetic images by mixing transformed mini-patches across the dataset in a chessboard-like grid.
Results on three diverse, well-known remote sensing datasets show that ChessMix is capable of improving the segmentation of objects with few labeled pixels.
- Score: 1.0152838128195467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Labeling semantic segmentation datasets is a costly and laborious
process compared with tasks like image classification and object detection.
This is especially true for remote sensing applications, which not only work
with extremely high spatial resolution data but also commonly require domain
experts to perform the manual labeling. Data augmentation techniques help to
improve deep learning models when labeled samples are few and imbalanced. In
this work, we propose a novel data augmentation method that exploits the
spatial context of remote sensing semantic segmentation. This method,
ChessMix, creates new synthetic images from the existing training set by
mixing transformed mini-patches across the dataset in a chessboard-like grid.
ChessMix prioritizes patches with more examples of the rarest classes to
alleviate the imbalance problem. Results on three diverse, well-known remote
sensing datasets show that this is a promising approach that helps to improve
network performance, working especially well on datasets with little
available data. The results also show that ChessMix improves the segmentation
of objects with few labeled pixels when compared to the most widely used data
augmentation methods.
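To make the idea concrete, below is a minimal numpy sketch of a ChessMix-style
augmentation as described in the abstract: synthetic tiles are assembled on a
chessboard-like grid from randomly transformed mini-patches drawn from the whole
training set, with sampling biased toward patches containing rare classes. The
function names, the grid/patch parameters, and the inverse-frequency weighting
below are illustrative assumptions, not the authors' reference implementation.

# Hypothetical sketch of chessboard-grid patch mixing with rare-class weighting.
import numpy as np

def class_rarity_weights(masks, num_classes):
    """Inverse-frequency weight per class, so rare classes get larger weights."""
    counts = np.zeros(num_classes, dtype=np.float64)
    for m in masks:
        counts += np.bincount(m.ravel(), minlength=num_classes)
    return counts.sum() / (counts + 1e-6)

def patch_score(mask_patch, class_weights):
    """Score a mini-patch by the rarity of the classes it contains."""
    return class_weights[mask_patch.ravel()].mean()

def random_transform(img, msk, rng):
    """Simple label-preserving transforms: flips and 90-degree rotations."""
    k = rng.integers(4)
    img, msk = np.rot90(img, k, axes=(0, 1)), np.rot90(msk, k)
    if rng.random() < 0.5:
        img, msk = img[:, ::-1], msk[:, ::-1]
    return img.copy(), msk.copy()

def chessmix(images, masks, num_classes, grid=8, patch=64, seed=0):
    """Build one synthetic (image, mask) pair on a grid x grid chessboard layout."""
    rng = np.random.default_rng(seed)
    weights = class_rarity_weights(masks, num_classes)

    # Pre-extract candidate mini-patches from every training tile.
    candidates, scores = [], []
    for img, msk in zip(images, masks):
        for y in range(0, img.shape[0] - patch + 1, patch):
            for x in range(0, img.shape[1] - patch + 1, patch):
                ip, mp = img[y:y+patch, x:x+patch], msk[y:y+patch, x:x+patch]
                candidates.append((ip, mp))
                scores.append(patch_score(mp, weights))
    probs = np.asarray(scores) / np.sum(scores)  # favor rare-class patches

    out_img = np.zeros((grid * patch, grid * patch, images[0].shape[2]), images[0].dtype)
    out_msk = np.zeros((grid * patch, grid * patch), masks[0].dtype)
    for gy in range(grid):
        for gx in range(grid):
            ip, mp = candidates[rng.choice(len(candidates), p=probs)]
            ip, mp = random_transform(ip, mp, rng)
            out_img[gy*patch:(gy+1)*patch, gx*patch:(gx+1)*patch] = ip
            out_msk[gy*patch:(gy+1)*patch, gx*patch:(gx+1)*patch] = mp
    return out_img, out_msk

In practice, synthetic pairs produced this way would simply be appended to the
training set alongside the original tiles before training the segmentation
network.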
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z) - Knowledge Combination to Learn Rotated Detection Without Rotated
Annotation [53.439096583978504]
Rotated bounding boxes drastically reduce output ambiguity of elongated objects.
Despite their effectiveness, rotated detectors are not widely employed.
We propose a framework that allows the model to predict precise rotated boxes.
arXiv Detail & Related papers (2023-04-05T03:07:36Z) - MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset [19.45520684918576]
We propose a contrastive learning method, called Masked Contrastive learning (MaskCon).
For each sample, our method generates soft labels with the aid of coarse labels, contrasting it against other samples and against another augmented view of the sample in question.
Our method achieves significant improvement over the current state-of-the-art in various datasets.
arXiv Detail & Related papers (2023-03-22T17:08:31Z) - Change Detection from Synthetic Aperture Radar Images via Graph-Based
Knowledge Supplement Network [36.41983596642354]
We propose a Graph-based Knowledge Supplement Network (GKSNet) for image change detection.
To be more specific, we extract discriminative information from the existing labeled dataset as additional knowledge.
To validate the proposed method, we conducted extensive experiments on four SAR datasets.
arXiv Detail & Related papers (2022-01-22T02:50:50Z) - Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z) - GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as
Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
arXiv Detail & Related papers (2021-06-29T02:48:45Z) - Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed as Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
To better model the relationship among images and classes from different datasets, we extend the pixel-level embeddings via cross-dataset mixing.
arXiv Detail & Related papers (2021-06-08T06:13:11Z) - On Training Sketch Recognizers for New Domains [3.8149289266694466]
We show that the ecological validity of the data collection protocol and the ability to accommodate small datasets are significant factors impacting recognizer accuracy in realistic scenarios.
We demonstrate that in realistic scenarios where data is scarce and expensive, standard measures for adapting deep learners to small datasets do not compare favorably with the alternatives.
arXiv Detail & Related papers (2021-04-18T13:24:49Z) - Mask-based Data Augmentation for Semi-supervised Semantic Segmentation [3.946367634483361]
We propose a new approach for data augmentation, termed ComplexMix, which incorporates aspects of CutMix and ClassMix with improved performance.
The proposed approach can control the complexity of the augmented data while attempting to remain semantically correct.
Experimental results show that our method yields improvement over state-of-the-art methods on standard datasets for semantic image segmentation.
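For reference, a minimal numpy sketch of the ClassMix-style operation that
ComplexMix incorporates is given below; this is not the ComplexMix method
itself, and the function name and parameters are illustrative assumptions.

# Hypothetical ClassMix-style mixing: paste pixels of half of image A's classes onto image B.
import numpy as np

def classmix(img_a, msk_a, img_b, msk_b, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(msk_a)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    copy_mask = np.isin(msk_a, chosen)                # pixels taken from image A
    out_img = np.where(copy_mask[..., None], img_a, img_b)
    out_msk = np.where(copy_mask, msk_a, msk_b)
    return out_img, out_msk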
arXiv Detail & Related papers (2021-01-25T15:09:34Z) - i-Mix: A Domain-Agnostic Strategy for Contrastive Representation
Learning [117.63815437385321]
We propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning.
In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains.
arXiv Detail & Related papers (2020-10-17T23:32:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.