An Interpretable Deep Semantic Segmentation Method for Earth Observation
- URL: http://arxiv.org/abs/2210.12820v1
- Date: Sun, 23 Oct 2022 18:46:44 GMT
- Title: An Interpretable Deep Semantic Segmentation Method for Earth Observation
- Authors: Ziyang Zhang, Plamen Angelov, Eduardo Soares, Nicolas Longepe, Pierre
Philippe Mathieu
- Abstract summary: We introduce a prototype-based interpretable deep semantic segmentation (IDSS) method.
It uses orders of magnitude fewer parameters than deep networks such as U-Net, and its parameters are clearly interpretable by humans.
Results have demonstrated that IDSS can surpass other algorithms, including U-Net, in terms of total water IoU (Intersection over Union) and total water Recall.
- Score: 0.7499722271664145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Earth observation is fundamental for a range of human activities including
flood response as it offers vital information to decision makers. Semantic
segmentation plays a key role in mapping the raw hyper-spectral data coming
from the satellites into a human understandable form assigning class labels to
each pixel. In this paper, we introduce a prototype-based interpretable deep
semantic segmentation (IDSS) method, which is highly accurate as well as
interpretable. It uses orders of magnitude fewer parameters than deep networks
such as U-Net, and its parameters are clearly interpretable by humans. The IDSS
proposed here offers a transparent structure that allows users to inspect and
audit the algorithm's decisions. Results have demonstrated that IDSS can
surpass other algorithms, including U-Net, in terms of total water IoU
(Intersection over Union) and total water Recall. We used the WorldFloods
dataset for our experiments and plan to use the semantic segmentation results,
combined with masks for permanent water, to detect flood events.
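The abstract evaluates segmentation quality with per-class IoU and Recall for the water class. A minimal sketch of how those two metrics are computed from pixel labels (the function name and the toy labels are illustrative, not from the paper):

```python
def class_metrics(pred, truth, cls):
    """Per-class IoU and Recall from flat label sequences.

    pred, truth: equal-length sequences of integer class labels
    cls: the class of interest (e.g. the 'water' label)
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    fp = sum(1 for p, t in zip(pred, truth) if p == cls and t != cls)
    fn = sum(1 for p, t in zip(pred, truth) if p != cls and t == cls)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return iou, recall

# toy example: 1 = water, 0 = land
pred = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
iou, recall = class_metrics(pred, truth, cls=1)  # iou = 0.5, recall ~ 0.667
```

In practice the labels would be flattened prediction and ground-truth masks; "total water" aggregates the water-related classes before computing these counts.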
Related papers
- LAC-Net: Linear-Fusion Attention-Guided Convolutional Network for Accurate Robotic Grasping Under the Occlusion [79.22197702626542]
This paper introduces a framework that explores amodal segmentation for robotic grasping in cluttered scenes.
We propose a Linear-fusion Attention-guided Convolutional Network (LAC-Net).
The results on different datasets show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-08-06T14:50:48Z)
- Evaluating the Efficacy of Cut-and-Paste Data Augmentation in Semantic Segmentation for Satellite Imagery [4.499833362998487]
This study explores the effectiveness of a Cut-and-Paste augmentation technique for semantic segmentation in satellite images.
We adapt this augmentation, which usually requires labeled instances, to the case of semantic segmentation.
Using the DynamicEarthNet dataset and a U-Net model for evaluation, we found that this augmentation significantly enhances the mIoU score on the test set from 37.9 to 44.1.
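The cut-and-paste idea above can be sketched at the pixel level: copy all pixels of a chosen class from a source image into a destination image, and update the destination's semantic mask accordingly. This is a simplified stand-in for the paper's adapted procedure, using nested lists for images and masks:

```python
def cut_and_paste(src_img, src_mask, dst_img, dst_mask, cls):
    """Paste all pixels of class `cls` from a source image (guided by its
    semantic mask) onto a destination image, updating the mask too.

    Images and masks are same-shape 2-D lists of scalars; a sketch of the
    general augmentation technique, not the paper's exact procedure.
    """
    out_img = [row[:] for row in dst_img]    # copy so inputs are untouched
    out_mask = [row[:] for row in dst_mask]
    for i, row in enumerate(src_mask):
        for j, label in enumerate(row):
            if label == cls:                 # copy only pixels of the chosen class
                out_img[i][j] = src_img[i][j]
                out_mask[i][j] = cls
    return out_img, out_mask

# toy 2x2 example: paste class-1 pixels onto a blank destination
aug_img, aug_mask = cut_and_paste(
    [[5, 5], [5, 5]], [[1, 0], [0, 1]],
    [[0, 0], [0, 0]], [[0, 0], [0, 0]], cls=1)
```

For semantic (rather than instance) segmentation, the class mask itself serves as the paste region, which is what lets the augmentation work without labeled instances.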
arXiv Detail & Related papers (2024-04-08T17:18:30Z)
- Resolution-Aware Design of Atrous Rates for Semantic Segmentation Networks [7.58745191859815]
DeepLab is a widely used deep neural network for semantic segmentation, whose success is attributed to its parallel architecture called atrous spatial pyramid pooling (ASPP).
In practice, fixed atrous rates are used for the ASPP module, which restricts the size of its field of view.
This study proposes practical guidelines for obtaining an optimal atrous rate.
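The field-of-view argument rests on a standard formula: a k x k convolution with atrous rate r covers an effective window of k + (k - 1)(r - 1) pixels. A small sketch of that formula, plus an illustrative heuristic (not the paper's actual guideline) for the largest rate whose window still fits the feature map:

```python
def effective_kernel(k, rate):
    """Effective window size of a k x k atrous (dilated) convolution:
    k + (k - 1) * (rate - 1) pixels along each axis."""
    return k + (k - 1) * (rate - 1)

def max_useful_rate(k, feature_size):
    """Largest rate whose field of view still fits inside a square feature
    map of side `feature_size` (an illustrative heuristic only)."""
    r = 1
    while effective_kernel(k, r + 1) <= feature_size:
        r += 1
    return r

# a 3x3 kernel at rate 6 (a common ASPP rate) sees a 13x13 window
fov = effective_kernel(3, 6)  # 13
```

Beyond the map size, larger rates degenerate toward a 1 x 1 convolution because most kernel taps fall on padding, which is why rates should be chosen relative to the input resolution.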
arXiv Detail & Related papers (2023-07-26T13:11:48Z)
- Human Semantic Segmentation using Millimeter-Wave Radar Sparse Point Clouds [3.3888257250564364]
This paper presents a framework for semantic segmentation on sparse sequential point clouds of millimeter-wave radar.
Handling the sparsity of mmWave data and capturing its temporal-topological features remain open problems.
We introduce graph structure and topological features to the point cloud and propose a semantic segmentation framework.
Our model achieves a mean accuracy of $\mathbf{82.31\%}$ on a custom dataset and outperforms state-of-the-art algorithms.
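A common way to impose the graph structure mentioned above on a sparse point cloud is a k-nearest-neighbour graph; the paper's exact construction may differ, but a minimal sketch looks like this:

```python
import math

def knn_graph(points, k):
    """Build a k-nearest-neighbour adjacency list over 3-D points.

    points: list of (x, y, z) tuples
    returns: {point index: list of its k nearest neighbour indices}
    """
    graph = {}
    for i, p in enumerate(points):
        # distance to every other point, then keep the k closest
        dists = [(math.dist(p, q), j) for j, q in enumerate(points) if j != i]
        dists.sort()
        graph[i] = [j for _, j in dists[:k]]
    return graph

# three collinear points: each connects to its single nearest neighbour
edges = knn_graph([(0, 0, 0), (1, 0, 0), (10, 0, 0)], k=1)
```

The brute-force O(n^2) scan is fine for radar clouds of a few hundred points; denser clouds would call for a spatial index such as a k-d tree.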
arXiv Detail & Related papers (2023-04-27T12:28:06Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D (Navya3DSeg), with a diverse label space corresponding to a large scale production grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
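The occupancy pretext task can be sketched by its label generation step: query points near an observed surface point are labeled occupied, others free, and the network is trained to predict these labels from the sparse input. This is a simplified stand-in (threshold labeling), not the paper's actual reconstruction target:

```python
import math

def occupancy_labels(surface_pts, query_pts, tau):
    """Pretext-task labels for occupancy estimation: a query point counts as
    'occupied' (1) if it lies within distance tau of some observed surface
    point, else 'free' (0). A simplified sketch of the self-supervision
    signal, not the paper's exact formulation.
    """
    labels = []
    for q in query_pts:
        nearest = min(math.dist(q, p) for p in surface_pts)
        labels.append(1 if nearest <= tau else 0)
    return labels

# one observed surface point; one query near it, one far away
labels = occupancy_labels([(0, 0, 0)], [(0, 0, 0.1), (0, 0, 5.0)], tau=0.5)
```

Because the labels come for free from the lidar sweep itself, no human annotation is needed to pre-train the backbone this way.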
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Context-Preserving Instance-Level Augmentation and Deformable Convolution Networks for SAR Ship Detection [50.53262868498824]
Shape deformation of targets in SAR image due to random orientation and partial information loss is an essential challenge in SAR ship detection.
We propose a data augmentation method to train a deep network that is robust to partial information loss within the targets.
arXiv Detail & Related papers (2022-02-14T07:01:01Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- An Underwater Image Semantic Segmentation Method Focusing on Boundaries and a Real Underwater Scene Semantic Segmentation Dataset [41.842352295729555]
We label and establish the first underwater semantic segmentation dataset of real scenes (DUT-USEG: DUT Underwater dataset).
We propose a semi-supervised underwater semantic segmentation network focusing on the boundaries (US-Net: Underwater Network).
Experiments show that the proposed method improves by 6.7% in the three categories of holothurian, echinus, and starfish on the DUT-USEG dataset and achieves state-of-the-art results.
arXiv Detail & Related papers (2021-08-26T12:05:08Z)
- Semantic Attention and Scale Complementary Network for Instance Segmentation in Remote Sensing Images [54.08240004593062]
We propose an end-to-end multi-category instance segmentation model, which consists of a Semantic Attention (SEA) module and a Scale Complementary Mask Branch (SCMB).
The SEA module contains a simple fully convolutional semantic segmentation branch with extra supervision to strengthen the activation of the instances of interest on the feature map.
The SCMB extends the original single mask branch to trident mask branches and introduces complementary mask supervision at different scales.
arXiv Detail & Related papers (2021-07-25T08:53:59Z)
- S3Net: 3D LiDAR Sparse Semantic Segmentation Network [1.330528227599978]
S3Net is a novel convolutional neural network for LiDAR point cloud semantic segmentation.
It adopts an encoder-decoder backbone that consists of a Sparse Intra-channel Attention Module (SIntraAM) and a Sparse Inter-channel Attention Module (SInterAM).
arXiv Detail & Related papers (2021-03-15T22:15:24Z)