SEnSeI: A Deep Learning Module for Creating Sensor Independent Cloud
Masks
- URL: http://arxiv.org/abs/2111.08349v1
- Date: Tue, 16 Nov 2021 10:47:10 GMT
- Title: SEnSeI: A Deep Learning Module for Creating Sensor Independent Cloud
Masks
- Authors: Alistair Francis, John Mrziglod, Panagiotis Sidiropoulos, Jan-Peter
Muller
- Abstract summary: We introduce a novel neural network architecture -- Spectral ENcoder for SEnsor Independence (SEnSeI).
We focus on the problem of cloud masking, using several pre-existing datasets, and a new, freely available dataset for Sentinel-2.
Our model is shown to achieve state-of-the-art performance on the satellites it was trained on (Sentinel-2 and Landsat 8), and is able to extrapolate to sensors it has not seen during training, such as Landsat 7, PerúSat-1, and Sentinel-3 SLSTR.
- Score: 0.7340845393655052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel neural network architecture -- Spectral ENcoder for
SEnsor Independence (SEnSeI) -- by which several multispectral instruments,
each with different combinations of spectral bands, can be used to train a
generalised deep learning model. We focus on the problem of cloud masking,
using several pre-existing datasets, and a new, freely available dataset for
Sentinel-2. Our model is shown to achieve state-of-the-art performance on the
satellites it was trained on (Sentinel-2 and Landsat 8), and is able to
extrapolate to sensors it has not seen during training such as Landsat 7,
PerúSat-1, and Sentinel-3 SLSTR. Model performance is shown to improve when
multiple satellites are used in training, approaching or surpassing the
performance of specialised, single-sensor models. This work is motivated by the
fact that the remote sensing community has access to data taken with a huge
variety of sensors. This has inevitably led to labelling efforts being
undertaken separately for different sensors, which limits the performance of
deep learning models, given their need for huge training sets to perform
optimally. Sensor independence can enable deep learning models to utilise
multiple datasets for training simultaneously, boosting performance and making
them much more widely applicable. This may lead to deep learning approaches
being used more frequently for on-board applications and in ground segment data
processing, which generally require models to be ready at launch or soon
afterwards.
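The core mechanism lends itself to a compact illustration. Below is a minimal PyTorch sketch, based only on the abstract, of a sensor-independent spectral encoder: each band is paired with a descriptor (here reduced to a single central wavelength, an assumption made for illustration), a shared MLP embeds every (descriptor, value) pair, and permutation-invariant pooling yields a fixed-size feature map regardless of how many bands a sensor provides. Names, dimensions, and the choice of mean pooling are ours, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralEncoder(nn.Module):
    """Illustrative sensor-independent encoder (not the authors' code)."""

    def __init__(self, embed_dim: int = 32, out_channels: int = 13):
        super().__init__()
        self.band_mlp = nn.Sequential(
            nn.Linear(2, embed_dim),  # input: (wavelength, reflectance) per band
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        self.project = nn.Linear(embed_dim, out_channels)

    def forward(self, reflectance: torch.Tensor, wavelengths: torch.Tensor) -> torch.Tensor:
        # reflectance: (batch, bands, H, W); wavelengths: (bands,) in micrometres
        b, n, h, w = reflectance.shape
        x = reflectance.permute(0, 2, 3, 1).reshape(-1, n, 1)     # one band-stack per pixel
        d = wavelengths.view(1, n, 1).expand(x.shape[0], -1, -1)  # attach descriptors
        z = self.band_mlp(torch.cat([d, x], dim=-1))              # embed each (descriptor, value)
        z = z.mean(dim=1)                                         # permutation-invariant over bands
        return self.project(z).view(b, h, w, -1).permute(0, 3, 1, 2)

# The same weights accept sensors with different band counts:
encoder = SpectralEncoder()
s2 = torch.rand(2, 13, 64, 64)                        # 13-band Sentinel-2 patch
l8 = torch.rand(2, 11, 64, 64)                        # 11-band Landsat 8 patch
s2_out = encoder(s2, torch.linspace(0.44, 2.20, 13))  # rough wavelengths, illustrative
l8_out = encoder(l8, torch.linspace(0.44, 12.0, 11))  # both outputs: (2, 13, 64, 64)
```

Because the output shape is fixed, a single downstream cloud-masking network can be trained on batches drawn from any supported sensor, which is the property the abstract credits for the gains from multi-satellite training.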
Related papers
- Cross-sensor self-supervised training and alignment for remote sensing [2.1178416840822027]
We introduce cross-sensor self-supervised training and alignment for remote sensing (X-STARS).
X-STARS can be applied to train models from scratch, or to adapt large models pretrained on, e.g., low-resolution data to new high-resolution sensors.
We demonstrate that X-STARS outperforms the state-of-the-art by a significant margin with less data across various conditions of data availability and resolutions.
arXiv Detail & Related papers (2024-05-16T09:25:45Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain, due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- USat: A Unified Self-Supervised Encoder for Multi-Sensor Satellite Imagery [5.671254904219855]
We develop a new encoder architecture called USat that can take multi-spectral data from multiple sensors as input for self-supervised pre-training.
We integrate USat into a Masked Autoencoder (MAE) self-supervised pre-training procedure and find that a pre-trained USat outperforms state-of-the-art MAE models trained on remote sensing data.
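As a concrete, heavily simplified illustration of the masked-autoencoder recipe this entry builds on, here is a toy MAE pre-training step in PyTorch. The tiny transformer, 75% mask ratio, and reconstruction loss over all tokens are our simplifications, not USat's design.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Toy masked-autoencoder step (illustrative; not the USat architecture)."""

    def __init__(self, dim: int = 64, nhead: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Linear(dim, dim)  # stand-in for a light decoder

    def forward(self, tokens: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
        b, n, d = tokens.shape
        n_keep = max(1, int(n * (1 - mask_ratio)))
        order = torch.rand(b, n, device=tokens.device).argsort(dim=1)
        keep = order[:, :n_keep]                        # random visible subset
        idx = keep.unsqueeze(-1).expand(-1, -1, d)
        visible = torch.gather(tokens, 1, idx)
        encoded = self.encoder(visible)                 # encoder sees visible tokens only
        full = self.mask_token.expand(b, n, d).clone()  # masked slots get a learned token
        full.scatter_(1, idx, encoded)
        recon = self.decoder(full)
        return ((recon - tokens) ** 2).mean()           # reconstruction drives learning

loss = TinyMAE()(torch.rand(8, 16, 64))  # 8 samples, 16 patch tokens each
```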
arXiv Detail & Related papers (2023-12-02T19:17:04Z)
- FedOpenHAR: Federated Multi-Task Transfer Learning for Sensor-Based Human Activity Recognition [0.0]
This paper explores Federated Transfer Learning in a Multi-Task manner for both sensor-based human activity recognition and device position identification tasks.
The models are trained with the OpenHAR framework, which contains ten smaller datasets.
By utilizing transfer learning and training a task-specific, personalized federated model, we obtain accuracy similar to training each client individually, and higher accuracy than a fully centralized approach.
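The federated building block behind this entry fits in a few lines. Below is a generic FedAvg-style aggregation round in PyTorch; FedOpenHAR's multi-task and personalization details are not reproduced, and the function name is ours.

```python
import copy
import torch
import torch.nn as nn

def federated_average(client_models: list[nn.Module]) -> nn.Module:
    """One aggregation round: average locally trained weights into a global model."""
    global_model = copy.deepcopy(client_models[0])
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in client_models]
            )
            param.copy_(stacked.mean(dim=0))
    return global_model

# Each client trains on its own sensor data; only weights are shared:
clients = [nn.Linear(8, 4) for _ in range(3)]  # stand-ins for locally trained models
global_model = federated_average(clients)
```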
arXiv Detail & Related papers (2023-11-13T21:31:07Z)
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43× speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD show that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z)
- Learning Sentinel-2 reflectance dynamics for data-driven assimilation and forecasting [11.0735899248545]
We train a deep learning model inspired by Koopman operator theory to model long-term reflectance dynamics in an unsupervised way.
We show that this trained model, being differentiable, can be used as a prior for data assimilation in a straightforward way.
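To make the "differentiable model as a data-assimilation prior" point concrete, here is a toy sketch: a Koopman-style model that evolves linearly in a learned latent space, and a gradient-descent loop that fits an initial state to observations through the rollout. The dimensions, optimizer, and quadratic misfit are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LatentLinearDynamics(nn.Module):
    """Koopman-style toy: nonlinear encoder, linear evolution in latent space."""

    def __init__(self, obs_dim: int = 4, latent_dim: int = 8):
        super().__init__()
        self.encode = nn.Linear(obs_dim, latent_dim)
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # linear Koopman operator
        self.decode = nn.Linear(latent_dim, obs_dim)

    def rollout(self, x0: torch.Tensor, steps: int) -> torch.Tensor:
        z = self.encode(x0)
        states = []
        for _ in range(steps):
            z = self.K(z)                 # dynamics are linear in latent space
            states.append(self.decode(z))
        return torch.stack(states)

# Assimilation: with the (pre-trained) model frozen, optimise the initial
# state so the rollout matches the observations.
model = LatentLinearDynamics()
model.requires_grad_(False)               # freeze the learned prior
observations = torch.randn(10, 4)         # placeholder observation series
x0 = torch.zeros(4, requires_grad=True)
optimizer = torch.optim.Adam([x0], lr=0.1)
for _ in range(200):
    optimizer.zero_grad()
    misfit = ((model.rollout(x0, 10) - observations) ** 2).mean()
    misfit.backward()
    optimizer.step()
```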
arXiv Detail & Related papers (2023-05-05T10:04:03Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular, neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8× speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
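One common way to fold attribution maps into training is to add an input-gradient attribution penalty to the task loss; the sketch below shows that generic pattern in PyTorch. The exact form of the paper's attribution term is not given in this summary, so the penalty here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attribution_regularized_loss(model: nn.Module, x: torch.Tensor,
                                 y: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Task loss plus a penalty on the input-gradient attribution map."""
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # Saliency-style attribution: gradient of the loss w.r.t. the input
    attribution, = torch.autograd.grad(task_loss, x, create_graph=True)
    penalty = attribution.pow(2).mean()  # discourage noisy, diffuse attributions
    return task_loss + lam * penalty

# Usage inside an ordinary training step:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss = attribution_regularized_loss(model, torch.rand(16, 1, 28, 28),
                                    torch.randint(0, 10, (16,)))
loss.backward()
```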
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
This is done in a completely label-free manner by exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
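A standard label-free way to exploit such cross-modal correspondence is a symmetric InfoNCE objective over co-located image/audio embedding pairs, sketched below; whether the paper uses exactly this loss is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(img_emb: torch.Tensor, audio_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Pull co-located image/audio embeddings together, push others apart."""
    img = F.normalize(img_emb, dim=-1)
    aud = F.normalize(audio_emb, dim=-1)
    logits = img @ aud.t() / temperature   # all pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Row i and column i correspond to the same location (the positive pair)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = cross_modal_infonce(torch.randn(32, 128), torch.randn(32, 128))
```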
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Transformer-Based Behavioral Representation Learning Enables Transfer Learning for Mobile Sensing in Small Datasets [4.276883061502341]
We provide a neural architecture framework for mobile sensing data that can learn generalizable feature representations from time series.
This architecture combines benefits from CNN and Transformer architectures to enable better prediction performance.
arXiv Detail & Related papers (2021-07-09T22:26:50Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.