Segmentation of VHR EO Images using Unsupervised Learning
- URL: http://arxiv.org/abs/2108.04222v2
- Date: Tue, 10 Aug 2021 08:55:26 GMT
- Title: Segmentation of VHR EO Images using Unsupervised Learning
- Authors: Sudipan Saha and Lichao Mou and Muhammad Shahzad and Xiao Xiang Zhu
- Abstract summary: We propose an unsupervised semantic segmentation method that can be trained using just a single unlabeled scene.
Remote sensing scenes are generally large; the proposed method exploits this property to sample smaller patches from the larger scene.
After unsupervised training on the target image/scene, the model automatically segregates the major classes present in the scene and produces the segmentation map.
- Score: 19.00071868539993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation is a crucial step in many Earth observation tasks.
A large quantity of pixel-level annotation is required to train deep networks for
semantic segmentation. Earth observation techniques are applied to a wide variety of
applications, and because classes vary widely from application to application,
domain knowledge is often required to label Earth observation
images, impeding the availability of labeled training data in many Earth
observation applications. To tackle these challenges, in this paper we propose
an unsupervised semantic segmentation method that can be trained using just a
single unlabeled scene. Remote sensing scenes are generally large. The proposed
method exploits this property to sample smaller patches from the larger scene
and uses deep clustering and contrastive learning to refine the weights of a
lightweight deep model composed of a series of convolution layers along
with an embedded channel attention. After unsupervised training on the target
image/scene, the model automatically segregates the major classes present in
the scene and produces the segmentation map. Experimental results on the
Vaihingen dataset demonstrate the efficacy of the proposed method.
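The core idea of training on a single scene is that a large image can supply many smaller training samples. A minimal sketch of the patch-sampling step is below (NumPy only; the patch size, patch count, and random cropping strategy are illustrative assumptions, not the authors' exact settings):

```python
import numpy as np

def sample_patches(scene, patch_size=64, n_patches=16, seed=0):
    """Randomly crop square patches from a single large scene (H, W, C)."""
    rng = np.random.default_rng(seed)
    h, w = scene.shape[:2]
    ys = rng.integers(0, h - patch_size + 1, size=n_patches)
    xs = rng.integers(0, w - patch_size + 1, size=n_patches)
    return np.stack([scene[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

# One 512x512 "scene" yields a batch of training patches.
scene = np.zeros((512, 512, 3), dtype=np.float32)
patches = sample_patches(scene)
print(patches.shape)  # (16, 64, 64, 3)
```

In the paper, batches of such patches drive the deep clustering and contrastive objectives that refine the lightweight model's weights.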
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Semi-Supervised Semantic Segmentation Based on Pseudo-Labels: A Survey [49.47197748663787]
This review aims to provide a first comprehensive and organized overview of the state-of-the-art research results on pseudo-label methods in the field of semi-supervised semantic segmentation.
In addition, we explore the application of pseudo-label technology in medical and remote-sensing image segmentation.
arXiv Detail & Related papers (2024-03-04T10:18:38Z)
- Task Specific Pretraining with Noisy Labels for Remote Sensing Image Segmentation [18.598405597933752]
Self-supervision provides remote sensing with a tool to reduce the amount of exact, human-crafted geospatial annotations.
In this work, we propose to exploit noisy semantic segmentation maps for model pretraining.
The results from two datasets indicate the effectiveness of task-specific supervised pretraining with noisy labels.
arXiv Detail & Related papers (2024-02-25T18:01:42Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Location-Aware Self-Supervised Transformers [74.76585889813207]
We propose to pretrain networks for semantic segmentation by predicting the relative location of image parts.
We control the difficulty of the task by masking a subset of the reference patch features visible to those of the query.
Our experiments show that this location-aware pretraining leads to representations that transfer competitively to several challenging semantic segmentation benchmarks.
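The relative-location pretext task resembles the classic context-prediction setup: given a reference patch and a query patch on a grid, the model predicts which of the eight neighboring positions the query occupies. A small sketch of the label generation (a generic illustration; the paper's own formulation with masked reference features differs in detail):

```python
def relative_position_class(ref, query):
    """Map the (row, col) offset from a reference patch to a query patch
    onto one of 8 neighbor classes. Offsets are enumerated row-major,
    skipping (0, 0); raises ValueError for non-adjacent patches."""
    dr, dc = query[0] - ref[0], query[1] - ref[1]
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    return offsets.index((dr, dc))

# The patch directly above the reference gets class 1.
print(relative_position_class((1, 1), (0, 1)))  # 1
```

A network trained to predict these classes from patch features must encode spatial layout, which is what transfers to segmentation.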
arXiv Detail & Related papers (2022-12-05T16:24:29Z)
- Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization [98.46318529630109]
We take inspiration from traditional spectral segmentation methods by reframing image decomposition as a graph partitioning problem.
We find that these eigenvectors already decompose an image into meaningful segments, and can be readily used to localize objects in a scene.
By clustering the features associated with these segments across a dataset, we can obtain well-delineated, nameable regions.
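The graph-partitioning idea can be illustrated with a tiny NumPy sketch: build a non-negative affinity matrix from per-pixel features, form the graph Laplacian, and threshold the Fiedler vector (the eigenvector of the second-smallest eigenvalue) to split the image into two segments. This is a simplified stand-in for the paper's method, which uses deep network features and finer decompositions:

```python
import numpy as np

def spectral_bipartition(features):
    """Split N items into two segments via the Fiedler vector.

    features: (N, D) array, one feature vector per pixel/patch.
    Returns a binary label per item.
    """
    # Cosine-similarity affinity, clipped to be non-negative.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    w = np.clip(f @ f.T, 0.0, None)
    lap = np.diag(w.sum(axis=1)) - w          # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)    # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                   # 2nd-smallest eigenvalue's vector
    return (fiedler > 0).astype(int)

# Two well-separated feature clusters land in different segments.
feats = np.array([[1.0, 0.2]] * 5 + [[0.2, 1.0]] * 5)
labels = spectral_bipartition(feats)
```

Clustering such segment features across a whole dataset, as the paper does, then groups recurring segments into nameable regions.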
arXiv Detail & Related papers (2022-05-16T17:47:44Z)
- Mars Terrain Segmentation with Less Labels [1.1745324895296465]
This research proposes a semi-supervised learning framework for Mars terrain segmentation.
It incorporates a backbone module which is trained using a contrastive loss function and an output atrous convolution module.
The proposed model is able to achieve a segmentation accuracy of 91.1% using only 161 training images.
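Contrastive backbone pretraining of this kind typically uses an InfoNCE-style loss: embeddings of two augmented views of the same patch are pulled together while other patches in the batch act as negatives. A generic NumPy sketch (not the paper's exact loss or temperature):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE contrastive loss between two batches of embeddings.

    z1, z2: (B, D) embeddings of two views; row i of z1 and z2 form a
    positive pair, all other rows in the batch serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal
```

Minimizing this loss makes matched views more similar than mismatched ones, which is the property the pretrained backbone carries into the segmentation head.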
arXiv Detail & Related papers (2022-02-01T22:25:15Z)
- Unsupervised Image Segmentation by Mutual Information Maximization and Adversarial Regularization [7.165364364478119]
We propose a novel fully unsupervised semantic segmentation method, the so-called Information Maximization and Adversarial Regularization (InMARS).
Inspired by human perception, which parses a scene into perceptual groups, our proposed approach first partitions an input image into meaningful regions (also known as superpixels).
Next, it utilizes mutual-information maximization followed by an adversarial training strategy to cluster these regions into semantically meaningful classes.
Our experiments demonstrate that our method achieves state-of-the-art performance on two commonly used unsupervised semantic segmentation datasets.
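Mutual-information maximization for clustering is often computed IIC-style: form a joint distribution over cluster assignments from the soft predictions of two views, then maximize its mutual information. A small sketch of that objective (a generic formulation; InMARS's exact loss and adversarial term differ):

```python
import numpy as np

def iic_mi(p1, p2):
    """Mutual information between soft cluster assignments of two views.

    p1, p2: (B, K) arrays whose rows are softmax probabilities over K
    clusters. Returns the MI of the induced (K, K) joint distribution.
    """
    joint = p1.T @ p2 / p1.shape[0]          # empirical joint over clusters
    joint = (joint + joint.T) / 2.0          # symmetrize
    px = joint.sum(axis=1, keepdims=True)    # marginal of view 1
    py = joint.sum(axis=0, keepdims=True)    # marginal of view 2
    mask = joint > 1e-12                     # skip zero cells in the log
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))
```

Perfectly consistent, balanced one-hot assignments over K clusters reach the maximum MI of log K, while uniform (uninformative) assignments give zero, so maximizing this quantity pushes the network toward confident, consistent cluster labels.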
arXiv Detail & Related papers (2021-07-01T18:36:27Z)
- Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.