Mapping Temporary Slums from Satellite Imagery using a Semi-Supervised
Approach
- URL: http://arxiv.org/abs/2204.04419v1
- Date: Sat, 9 Apr 2022 08:02:32 GMT
- Title: Mapping Temporary Slums from Satellite Imagery using a Semi-Supervised
Approach
- Authors: M. Fasi ur Rehman, Izza Ali, Waqas Sultani, Mohsen Ali
- Abstract summary: One billion people worldwide are estimated to be living in slums.
Small, scattered and temporary slums make data collection and labeling tedious and time-consuming.
We present a semi-supervised deep learning segmentation-based approach to detect temporary slums.
- Score: 5.830619388189557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One billion people worldwide are estimated to be living in slums, and
documenting and analyzing these regions is a challenging task. Compared to
regular slums, the small, scattered, and temporary nature of temporary slums
makes data collection and labeling tedious and time-consuming. To tackle the
challenging problem of temporary slum detection, we present a semi-supervised
deep learning segmentation-based approach, with a strategy to discover initial
seed images in a zero-labeled-data setting. A small set of seed samples (32 in
our case) is automatically discovered by analyzing temporal changes; these
samples are manually labeled and used to train a segmentation module and a
representation learning module. The segmentation module gathers high-dimensional image representations,
and the representation learning module transforms image representations into
embedding vectors. After that, a scoring module uses the embedding vectors to
sample images from a large pool of unlabeled images and generates pseudo-labels
for the sampled images. These sampled images with their pseudo-labels are added
to the training set to update the segmentation and representation learning
modules iteratively. To analyze the effectiveness of our technique, we
construct a large, geographically marked dataset of temporary slums. This
dataset comprises more than 200 potential temporary slum locations (2.28
square kilometers) identified by sieving through 68,000 images from 12
metropolitan cities of Pakistan, covering 8,000 square kilometers. Furthermore,
our proposed method outperforms several competitive semi-supervised semantic
segmentation baselines under a similar setting. The code and the dataset will be
made publicly available.
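
The abstract describes an iterative loop: seed discovery from temporal changes, training on the seeds, embedding-based scoring of the unlabeled pool, pseudo-labeling, and retraining. The following is a minimal sketch of that loop, assuming hypothetical stand-ins (the embed and pseudo_label functions, a cosine-similarity scoring rule, and the batch sizes) rather than the paper's actual segmentation and representation-learning networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(images):
    # Stand-in for the representation learning module: map each image to a
    # unit-norm embedding vector (a simple flatten-and-normalise here).
    flat = images.reshape(len(images), -1)
    return flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)

def pseudo_label(images):
    # Stand-in for the segmentation module: produce a binary mask per image.
    return (images.mean(axis=-1) > images.mean()).astype(np.uint8)

# 32 seed images, discovered by analyzing temporal changes, plus a large
# unlabeled pool (random arrays stand in for satellite tiles).
seed_images = rng.random((32, 64, 64, 3))
unlabeled   = rng.random((1000, 64, 64, 3))

train_images = seed_images.copy()
train_masks  = pseudo_label(seed_images)   # in the paper these 32 masks are manual

for iteration in range(5):
    # 1. Embed the current training set and the unlabeled pool.
    train_emb     = embed(train_images)
    unlabeled_emb = embed(unlabeled)

    # 2. Scoring module (assumed rule): cosine similarity to the training centroid.
    centroid = train_emb.mean(axis=0)
    scores = unlabeled_emb @ centroid

    # 3. Sample the highest-scoring images and generate pseudo-labels for them.
    top = np.argsort(scores)[-50:]
    new_images = unlabeled[top]
    new_masks  = pseudo_label(new_images)

    # 4. Grow the training set and shrink the pool; in the actual method the
    #    segmentation and representation learning modules are retrained here.
    train_images = np.concatenate([train_images, new_images])
    train_masks  = np.concatenate([train_masks, new_masks])
    unlabeled    = np.delete(unlabeled, top, axis=0)

print(train_images.shape, train_masks.shape)  # (282, 64, 64, 3) (282, 64, 64)
```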
Related papers
- Leveraging image captions for selective whole slide image annotation [0.37334049820361814]
This paper focuses on identifying and annotating specific image regions that optimize model training.
Our results show that prototype sampling is more effective than random and diversity sampling in identifying annotation regions with valuable training information.
arXiv Detail & Related papers (2024-07-08T20:05:21Z) - A new dataset for measuring the performance of blood vessel segmentation methods under distribution shifts [0.0]
VessMAP is a heterogeneous blood vessel segmentation dataset acquired by carefully sampling relevant images from a larger non-annotated dataset.
A methodology was developed to select both prototypical and atypical samples from the base dataset.
To demonstrate the potential of the new dataset, we show that the validation performance of a neural network changes significantly depending on the splits used for training the network.
arXiv Detail & Related papers (2023-01-11T15:31:15Z) - From colouring-in to pointillism: revisiting semantic segmentation
supervision [48.637031591058175]
We propose a pointillist approach for semantic segmentation annotation, where only point-wise yes/no questions are answered.
We collected and released 22.6M point labels over 4,171 classes on the Open Images dataset.
arXiv Detail & Related papers (2022-10-25T16:42:03Z) - Aerial Scene Parsing: From Tile-level Scene Classification to Pixel-wise
Semantic Labeling [48.30060717413166]
Given an aerial image, aerial scene parsing (ASP) aims to interpret the semantic structure of the image content by assigning a semantic label to every pixel.
We present a large-scale scene classification dataset that contains one million aerial images termed Million-AID.
We also report benchmarking experiments using classical convolutional neural networks (CNNs) to achieve pixel-wise semantic labeling.
arXiv Detail & Related papers (2022-01-06T07:40:47Z) - A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic
Segmentation [40.27705176115985]
Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest.
We propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels.
Our proposed learning model can be viewed as a pixel-level meta-learner.
arXiv Detail & Related papers (2021-11-02T08:28:11Z) - Active Learning for Improved Semi-Supervised Semantic Segmentation in
Satellite Images [1.0152838128195467]
Semi-supervised techniques generate pseudo-labels from a small set of labeled examples.
We propose to use an active learning-based sampling strategy to select a highly representative set of labeled training data.
We report a 27% improvement in mIoU with as little as 2% labeled data using active learning sampling strategies.
arXiv Detail & Related papers (2021-10-15T00:29:31Z) - Region-level Active Learning for Cluttered Scenes [60.93811392293329]
We introduce a new strategy that subsumes previous Image-level and Object-level approaches into a generalized, Region-level approach.
We show that this approach significantly decreases labeling effort and improves rare object search on realistic data with inherent class-imbalance and cluttered scenes.
arXiv Detail & Related papers (2021-08-20T14:02:38Z) - Segmentation of VHR EO Images using Unsupervised Learning [19.00071868539993]
We propose an unsupervised semantic segmentation method that can be trained using just a single unlabeled scene.
The proposed method exploits this property to sample smaller patches from the larger scene.
After unsupervised training on the target image/scene, the model automatically segregates the major classes present in the scene and produces the segmentation map.
arXiv Detail & Related papers (2021-07-09T11:42:48Z) - Non-Salient Region Object Mining for Weakly Supervised Semantic
Segmentation [64.2719590819468]
We propose a non-salient region object mining approach for weakly supervised semantic segmentation.
A potential object mining module is proposed to reduce the false-negative rate in pseudo labels.
Our non-salient region masking module helps further discover the objects in the non-salient region.
arXiv Detail & Related papers (2021-03-26T16:44:03Z) - Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data (a brief sketch of this idea follows the list below).
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
arXiv Detail & Related papers (2020-11-19T17:29:13Z) - Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences
for Urban Scene Segmentation [57.68890534164427]
In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences and extra images to improve the performance on urban scene segmentation.
We simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data.
Our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results on all three Cityscapes benchmarks.
arXiv Detail & Related papers (2020-05-20T18:00:05Z)
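
The Geography-Aware Self-Supervised Learning entry above exploits spatially aligned remote sensing imagery. Below is a minimal, illustrative sketch assuming the common formulation in which two acquisitions of the same location (taken at different dates) form a positive pair under an InfoNCE-style contrastive loss; the toy linear encoder, temperature, and array shapes are placeholders, not the cited paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(images, weights):
    # Toy linear encoder followed by L2 normalisation.
    feats = images.reshape(len(images), -1) @ weights
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)

def info_nce(anchors, positives, temperature=0.1):
    # Similarity of every anchor to every candidate; the matching index is the
    # positive (same location, different date), all other entries are negatives.
    logits = anchors @ positives.T / temperature
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(anchors)), np.arange(len(anchors))].mean()

# Two views of the same 128 locations, captured at different times.
view_t0 = rng.random((128, 32 * 32 * 3))
view_t1 = rng.random((128, 32 * 32 * 3))
weights = rng.normal(size=(32 * 32 * 3, 64)) * 0.01

loss = info_nce(encode(view_t0, weights), encode(view_t1, weights))
print(f"contrastive loss on temporally aligned pairs: {loss:.3f}")
```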