Cross Modal Distillation for Flood Extent Mapping
- URL: http://arxiv.org/abs/2302.08180v1
- Date: Thu, 16 Feb 2023 09:57:08 GMT
- Title: Cross Modal Distillation for Flood Extent Mapping
- Authors: Shubhika Garg, Ben Feinstein, Shahar Timnat, Vishal Batchu, Gideon
Dror, Adi Gerzi Rosenthal, Varun Gulshan
- Abstract summary: We explore ML techniques that improve the flood detection module of an operational early flood warning system.
Our method exploits an unlabelled dataset of paired multi-spectral and Synthetic Aperture Radar (SAR) imagery.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing intensity and frequency of floods are among the many
consequences of our changing climate. In this work, we explore ML techniques
that improve the flood detection module of an operational early flood warning
system. Our method exploits an unlabelled dataset of paired multi-spectral and
Synthetic Aperture Radar (SAR) imagery to reduce the labeling requirements of a
purely supervised learning method. Prior works have used unlabelled data by
deriving weak labels from it. However, our experiments showed that a model
trained on such weak labels still ends up learning their labeling mistakes.
Motivated by knowledge distillation and semi-supervised learning, we explore
the use of a teacher to train a student with the help of a small hand-labelled
dataset and a large unlabelled dataset. Unlike the conventional
self-distillation setup, we propose a cross modal distillation framework that
transfers supervision from a teacher trained on the richer modality
(multi-spectral images) to a student model trained on SAR imagery. The trained
models are then tested on the Sen1Floods11 dataset. Our model outperforms the
Sen1Floods11 baseline model trained on weakly labeled SAR imagery by an
absolute margin of 6.53% Intersection-over-Union (IoU) on the test split.
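To make the setup concrete, below is a minimal PyTorch sketch of one cross-modal distillation step. The tiny networks, band counts, random stand-in tensors, and equal loss weighting are illustrative assumptions, not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_segmenter(in_channels: int) -> nn.Module:
    # Stand-in for a real segmentation backbone (e.g. a U-Net).
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, 1),  # two classes: flood / background
    )

teacher = tiny_segmenter(13)  # multi-spectral input (e.g. 13 Sentinel-2 bands)
student = tiny_segmenter(2)   # SAR input (e.g. VV + VH polarizations)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# Paired unlabelled scenes plus a small hand-labelled SAR batch (random stand-ins).
ms_unlab = torch.randn(4, 13, 64, 64)
sar_unlab = torch.randn(4, 2, 64, 64)
sar_lab = torch.randn(2, 2, 64, 64)
mask_lab = torch.randint(0, 2, (2, 64, 64))

# 1) The teacher, trained on the richer modality, emits soft pseudo-labels.
with torch.no_grad():
    soft_targets = teacher(ms_unlab).softmax(dim=1)

# 2) The student sees only SAR: a distillation loss on the paired unlabelled
#    data plus an ordinary supervised loss on the small hand-labelled set.
distill = F.kl_div(student(sar_unlab).log_softmax(dim=1), soft_targets,
                   reduction="batchmean")
supervised = F.cross_entropy(student(sar_lab), mask_lab)
loss = distill + supervised  # equal weighting is an arbitrary choice here
opt.zero_grad()
loss.backward()
opt.step()
```

The essential property is that supervision crosses the modality gap only through the teacher's soft targets: the teacher never sees SAR, and the student never sees multi-spectral data.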
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Automated Labeling of German Chest X-Ray Radiology Reports using Deep Learning [50.591267188664666]
We propose a deep learning-based CheXpert label prediction model, pre-trained on reports labeled by a rule-based German CheXpert model.
Our results demonstrate the effectiveness of our approach, which significantly outperformed the rule-based model on all three tasks.
arXiv Detail & Related papers (2023-06-09T16:08:35Z)
- Augment and Criticize: Exploring Informative Samples for Semi-Supervised Monocular 3D Object Detection [64.65563422852568]
We address the challenging monocular 3D object detection problem with a general semi-supervised framework.
We introduce a novel, simple, yet effective 'Augment and Criticize' framework that explores abundant informative samples from unlabeled data.
The two new detectors, dubbed 3DSeMo_DLE and 3DSeMo_FLEX, achieve state-of-the-art results with remarkable improvements of over 3.5% AP_3D/BEV (Easy) on KITTI.
arXiv Detail & Related papers (2023-03-20T16:28:15Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it attractive to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we propose incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
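As background for the PU component, here is a sketch of the generic non-negative PU risk estimator (Kiryo et al., 2017) that binary PU classifiers commonly minimize; the class prior `pi`, the toy network, and the sigmoid loss are assumptions, and this is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
from torch import nn

net = nn.Sequential(nn.Flatten(), nn.Linear(16, 1))
pi = 0.4                                  # assumed positive-class prior

x_pos = torch.randn(8, 16)                # labelled positive (inlier) samples
x_unl = torch.randn(32, 16)               # unlabelled samples

def loss_fn(scores: torch.Tensor, target: float) -> torch.Tensor:
    # Sigmoid loss on +/-1 targets.
    return torch.sigmoid(-target * scores).mean()

s_pos = net(x_pos).squeeze(1)
s_unl = net(x_unl).squeeze(1)
risk_pos = pi * loss_fn(s_pos, +1.0)
# Negative risk estimated from unlabelled data, corrected by the positives.
risk_neg = loss_fn(s_unl, -1.0) - pi * loss_fn(s_pos, -1.0)
risk = risk_pos + torch.clamp(risk_neg, min=0.0)  # non-negative correction
risk.backward()
```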
- Semi-supervised Deep Learning for Image Classification with Distribution Mismatch: A Survey [1.5469452301122175]
Deep learning models rely on an abundance of labelled observations for training.
Gathering labelled observations is expensive, which can make the use of deep learning models impractical.
In many situations, different unlabelled data sources might be available.
This raises the risk of a significant distribution mismatch between the labelled and unlabelled datasets.
arXiv Detail & Related papers (2022-03-01T02:46:00Z)
- Anomaly Detection via Reverse Distillation from One-Class Embedding [2.715884199292287]
We propose a novel T-S model consisting of a teacher encoder and a student decoder.
Instead of receiving raw images directly, the student network takes the teacher model's one-class embedding as input.
In addition, we introduce a trainable one-class bottleneck embedding module in our T-S model.
arXiv Detail & Related papers (2022-01-26T01:48:37Z)
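A rough PyTorch illustration of that reverse T-S layout, with made-up layer shapes: the student decoder receives the teacher encoder's bottlenecked one-class embedding rather than the raw image, and anomalies are scored by teacher-student feature disagreement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen teacher encoder (pretrained in the real method); two feature stages.
t1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
t2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
bottleneck = nn.Conv2d(64, 64, 1)  # trainable one-class bottleneck embedding
# Student decoder mirrors the teacher and reconstructs its features.
s2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
opt = torch.optim.Adam([*bottleneck.parameters(), *s2.parameters()], lr=1e-3)

x = torch.randn(2, 3, 64, 64)           # normal training images
with torch.no_grad():                   # teacher features are detached
    f1 = t1(x)                          # teacher feature, 32 x 32 x 32
    f2 = t2(f1)                         # teacher feature, 64 x 16 x 16
rec1 = s2(bottleneck(f2))               # student reconstruction of f1

# Train the student to mimic teacher features on normal data; at test time a
# large cosine distance flags anomalous regions.
cos = F.cosine_similarity(rec1, f1, dim=1)  # per-pixel similarity map
loss = (1 - cos).mean()
opt.zero_grad()
loss.backward()
opt.step()
anomaly_map = 1 - cos                   # high where the student fails to mimic
```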
- Learning class prototypes from Synthetic InSAR with Vision Transformers [2.41710192205034]
Detection of early signs of volcanic unrest is critical for assessing volcanic hazard.
We propose a novel deep learning methodology that exploits a rich source of synthetically generated interferograms.
We report detection accuracy that surpasses the state of the art on volcanic unrest detection.
arXiv Detail & Related papers (2022-01-09T14:03:00Z)
- Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning [1.269104766024433]
We train an ensemble of multiple UNet architectures on the available high- and low-confidence labeled data.
The assimilated dataset is then used for the next round of ensemble training.
Our approach sets a high score on the public leaderboard for the ETCI competition with 0.7654 IoU.
arXiv Detail & Related papers (2021-07-18T05:42:10Z)
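A minimal sketch of such a round-based pseudo-labelling loop, with tiny stand-in networks, random tensors in place of Sentinel-1 tiles, and an assumed confidence cut-off:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_unet() -> nn.Module:
    # Stand-in for a real UNet over 2-channel SAR tiles.
    return nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 2, 1))

labelled_x = torch.randn(4, 2, 32, 32)
labelled_y = torch.randint(0, 2, (4, 32, 32))
unlabelled_x = torch.randn(8, 2, 32, 32)

ensemble = [tiny_unet() for _ in range(3)]
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in ensemble]

for round_idx in range(2):  # two assimilation rounds (illustrative)
    # 1) Fit each ensemble member on the current labelled pool (one step shown).
    for model, opt in zip(ensemble, optims):
        loss = F.cross_entropy(model(labelled_x), labelled_y)
        opt.zero_grad(); loss.backward(); opt.step()
    # 2) Average the ensemble's predictions over the unlabelled tiles.
    with torch.no_grad():
        probs = torch.stack([m(unlabelled_x).softmax(dim=1)
                             for m in ensemble]).mean(dim=0)
    conf, pseudo = probs.max(dim=1)
    keep = conf.mean(dim=(1, 2)) > 0.8  # assumed per-tile confidence cut-off
    # 3) Assimilate the confident pseudo-labelled tiles and train again.
    labelled_x = torch.cat([labelled_x, unlabelled_x[keep]])
    labelled_y = torch.cat([labelled_y, pseudo[keep]])
```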
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
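The mean-teacher machinery underneath such frameworks can be sketched as follows; the EMA rate, noise perturbation, and toy network are assumptions, and the paper's mask guidance and perturbation-sensitive sample mining are omitted.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 2, 1))
teacher = copy.deepcopy(student)  # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(4, 3, 32, 32)                 # unlabelled batch
noisy = x + 0.1 * torch.randn_like(x)         # input perturbation

# Consistency loss: the student on perturbed input should match the teacher.
with torch.no_grad():
    t_probs = teacher(x).softmax(dim=1)
consistency = F.kl_div(student(noisy).log_softmax(dim=1), t_probs,
                       reduction="batchmean")
opt.zero_grad()
consistency.backward()
opt.step()

# EMA update: the teacher slowly tracks the student's weights.
ema = 0.99
with torch.no_grad():
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(ema).add_(sp, alpha=1 - ema)
```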
- Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model [57.41841346459995]
We study how to train a student deep neural network for visual recognition by distilling knowledge from a blackbox teacher model in a data-efficient manner.
We propose an approach that blends mixup and active learning.
arXiv Detail & Related papers (2020-03-31T05:44:55Z)
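A minimal sketch of blending mixup with active learning against a black-box teacher, as summarized above; `blackbox_teacher`, the query budget, and the toy student are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F
from torch import nn

def blackbox_teacher(images: torch.Tensor) -> torch.Tensor:
    # Hypothetical API-only model: returns class probabilities, nothing else.
    with torch.no_grad():
        return torch.rand(images.size(0), 10).softmax(dim=1)

student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# Mixup synthesis: blend random pairs from a small pool of real images.
pool = torch.randn(16, 3, 32, 32)
lam = torch.rand(64, 1, 1, 1)                 # mixup coefficients
i = torch.randint(0, 16, (64,))
j = torch.randint(0, 16, (64,))
candidates = lam * pool[i] + (1 - lam) * pool[j]

# Active step: rank candidates by student confidence, keep the least certain.
with torch.no_grad():
    conf = student(candidates).softmax(dim=1).max(dim=1).values
queried = candidates[conf.argsort()[:8]]      # query budget of 8 (assumption)

# Distill: match the student's distribution to the teacher's answers.
targets = blackbox_teacher(queried)
loss = F.kl_div(student(queried).log_softmax(dim=1), targets,
                reduction="batchmean")
opt.zero_grad()
loss.backward()
opt.step()
```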