Graph Neural Networks Extract High-Resolution Cultivated Land Maps from
Sentinel-2 Image Series
- URL: http://arxiv.org/abs/2208.02349v1
- Date: Wed, 3 Aug 2022 21:19:06 GMT
- Title: Graph Neural Networks Extract High-Resolution Cultivated Land Maps from
Sentinel-2 Image Series
- Authors: Lukasz Tulczyjew, Michal Kawulok, Nicolas Longépé, Bertrand Le Saux,
Jakub Nalepa
- Abstract summary: We introduce an approach for extracting 2.5 m cultivated land maps from 10 m Sentinel-2 multispectral image series.
The experiments indicate that our models not only outperform classical and deep machine learning techniques by delivering higher-quality segmentation maps, but also dramatically reduce the memory footprint compared to U-Nets.
Such memory frugality is pivotal in missions that allow us to uplink a model to the AI-powered satellite once it is in orbit.
- Score: 33.10103896300028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maintaining farm sustainability through optimized agricultural
management practices helps build a more planet-friendly environment. Emerging
satellite missions can acquire multi- and hyperspectral imagery that captures
more detailed spectral information about the scanned area, and hence lets us
benefit from subtle spectral features during analysis in agricultural
applications. We introduce an approach for extracting 2.5 m cultivated land
maps from 10 m Sentinel-2 multispectral image series that benefits from a
compact graph convolutional neural network. The experiments indicate that our
models not only outperform classical and deep machine learning techniques by
delivering higher-quality segmentation maps, but also dramatically reduce the
memory footprint compared to U-Nets (almost 8k trainable parameters in our
models versus up to 31M parameters in U-Nets). Such memory frugality is pivotal
in missions that allow us to uplink a model to an AI-powered satellite once it
is in orbit, as sending large networks is impossible due to time constraints.
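The abstract does not spell out the network design, so the block below is a minimal, hypothetical PyTorch sketch of a compact graph convolutional segmenter in the spirit described above: each 10 m Sentinel-2 pixel becomes a graph node carrying stacked spectral bands from the image series, nodes are linked to their 4 spatial neighbours, and each node predicts a 4x4 grid of 2.5 m sub-pixel labels. The node features, graph construction, layer sizes, and the upsampling-by-prediction head are all illustrative assumptions, not the authors' architecture.
```python
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Single graph convolution: H' = ReLU(A_norm @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        # x: (num_nodes, in_dim); adj_norm: sparse normalised adjacency
        return torch.relu(torch.sparse.mm(adj_norm, self.lin(x)))


class CompactGCNSegmenter(nn.Module):
    """Tiny GCN predicting a 4x4 grid of 2.5 m sub-pixel class logits
    for every 10 m pixel (graph node)."""

    def __init__(self, in_dim, hidden=32, upscale=4, n_classes=2):
        super().__init__()
        self.gc1 = GraphConv(in_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, upscale * upscale * n_classes)
        self.upscale, self.n_classes = upscale, n_classes

    def forward(self, x, adj_norm):
        h = self.gc2(self.gc1(x, adj_norm), adj_norm)
        logits = self.head(h)  # (num_nodes, upscale*upscale*n_classes)
        return logits.view(-1, self.n_classes, self.upscale, self.upscale)


def grid_adjacency(h, w):
    """Symmetrically normalised 4-neighbour adjacency (with self-loops) of an h x w grid."""
    idx = torch.arange(h * w).view(h, w)
    pairs = [(idx, idx),                                              # self-loops
             (idx[:-1, :], idx[1:, :]), (idx[1:, :], idx[:-1, :]),    # vertical edges
             (idx[:, :-1], idx[:, 1:]), (idx[:, 1:], idx[:, :-1])]    # horizontal edges
    rows = torch.cat([p[0].reshape(-1) for p in pairs])
    cols = torch.cat([p[1].reshape(-1) for p in pairs])
    a = torch.sparse_coo_tensor(torch.stack([rows, cols]),
                                torch.ones(rows.numel()),
                                (h * w, h * w)).coalesce()
    deg = torch.sparse.sum(a, dim=1).to_dense().clamp(min=1.0)
    vals = a.values() / (deg[a.indices()[0]].sqrt() * deg[a.indices()[1]].sqrt())
    return torch.sparse_coo_tensor(a.indices(), vals, a.shape)


# Example: a 64x64 Sentinel-2 tile with 12 bands from 3 acquisition dates per pixel.
x = torch.randn(64 * 64, 36)
adj = grid_adjacency(64, 64)
out = CompactGCNSegmenter(in_dim=36)(x, adj)  # (4096, 2, 4, 4) sub-pixel logits
```
With the sizes chosen here the sketch has roughly 3k trainable parameters, consistent in spirit with the "almost 8k parameters" reported in the abstract for the actual models.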
Related papers
- Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data [0.08192907805418582]
This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation.
One branch integrates detailed textures from aerial imagery using a UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone.
The other branch captures complex temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE).
arXiv Detail & Related papers (2024-10-01T07:50:37Z) - Automated Linear Disturbance Mapping via Semantic Segmentation of Sentinel-2 Imagery [0.0]
Roads, seismic exploration lines, and pipelines pose a significant threat to the boreal woodland caribou population.
This research employs a deep convolutional neural network model based on the VGGNet16 architecture for semantic segmentation of lower-resolution (10 m) Sentinel-2 satellite imagery.
The model is trained using ground-truth label maps sourced from the freely available Alberta Biodiversity Monitoring Institute Human Footprint dataset.
arXiv Detail & Related papers (2024-09-19T14:42:12Z) - Semantic Segmentation in Satellite Hyperspectral Imagery by Deep Learning [54.094272065609815]
We propose a lightweight 1D-CNN model, 1D-Justo-LiuNet, which outperforms state-of-the-art models in the hyperspectral domain.
1D-Justo-LiuNet achieves the highest accuracy (0.93) with the smallest model size (4,563 parameters) among all tested models.
arXiv Detail & Related papers (2023-10-24T21:57:59Z) - Multi-tiling Neural Radiance Field (NeRF) -- Geometric Assessment on Large-scale Aerial Datasets [5.391764618878545]
In this paper, we aim to scale Neural Radiance Fields (NeRF) to large-scale aerial datasets.
Specifically, we introduce a location-specific sampling technique as well as a multi-camera tiling (MCT) strategy to reduce memory consumption.
We implement our method on a representative approach, Mip-NeRF, and compare its geometry performance with three photogrammetric MVS pipelines.
arXiv Detail & Related papers (2023-10-01T00:21:01Z) - SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing
imagery using HRNet with Separable Convolution [3.717258819781834]
We propose a novel deep learning approach that integrates HRNet with separable convolutional layers to capture spatial patterns and self-attention to capture temporal patterns in the data.
The proposed algorithm achieves a high classification accuracy of 97.5% and IoU of 55.2% in generating crop maps.
arXiv Detail & Related papers (2023-07-11T18:07:25Z) - Vision Transformers, a new approach for high-resolution and large-scale
mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural
Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z) - SatMAE: Pre-training Transformers for Temporal and Multi-Spectral
Satellite Imagery [74.82821342249039]
We present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoder (MAE).
To leverage temporal information, we include a temporal embedding and independently mask image patches across time (a brief illustrative sketch of this idea follows this list).
arXiv Detail & Related papers (2022-07-17T01:35:29Z) - Boundary Regularized Building Footprint Extraction From Satellite Images
Using Deep Neural Network [6.371173732947292]
We propose a novel deep neural network that jointly detects building instances and regularizes noisy building boundary shapes from a single satellite image.
Our model can simultaneously accomplish the tasks of object localization, recognition, semantic labelling, and geometric shape extraction.
arXiv Detail & Related papers (2020-06-23T17:24:09Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and temporal separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train (a brief illustrative sketch of such a separable 3D convolution follows this list).
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
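As referenced in the SatMAE entry above, here is a minimal, hypothetical sketch (not the official SatMAE code) of the two ingredients its summary mentions: a temporal embedding added to patch tokens and independent masking of patches at every timestep before masked-autoencoder encoding. All names and sizes are illustrative assumptions.
```python
import torch


def temporal_embedding(timestamps, dim):
    """Sinusoidal embedding of acquisition times (e.g. day-of-year); returns (T, dim)."""
    freqs = torch.exp(-torch.arange(0, dim, 2).float() / dim
                      * torch.log(torch.tensor(10000.0)))
    angles = timestamps.float().unsqueeze(1) * freqs.unsqueeze(0)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1)


def mask_patches_per_timestep(tokens, mask_ratio=0.75):
    """tokens: (T, N, D) patch tokens per timestep; keep a random subset per timestep."""
    T, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    kept = []
    for t in range(T):  # independent random mask at every timestep
        idx = torch.randperm(N)[:n_keep]
        kept.append(tokens[t, idx])
    return torch.stack(kept)  # (T, n_keep, D) visible tokens


# Example: 3 acquisition dates, 196 patches, 128-dim tokens.
tokens = torch.randn(3, 196, 128)
tokens = tokens + temporal_embedding(torch.tensor([10, 100, 250]), 128).unsqueeze(1)
visible = mask_patches_per_timestep(tokens)
print(visible.shape)  # torch.Size([3, 49, 128])
```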
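As referenced in the SSRNet entry above, the sketch below illustrates one common way to realise a separable 3D convolution: factorising a full 3D kernel into a spatial (1x3x3) and a spectral (3x1x1) convolution, which reduces parameters and memory. The layer names, sizes, and residual connection are assumptions, not taken from the paper.
```python
import torch
import torch.nn as nn


class SeparableConv3d(nn.Module):
    """Factorised 3D convolution block over a (bands, height, width) volume."""

    def __init__(self, channels):
        super().__init__()
        # Spatial part: convolve within each band (band-axis kernel size 1).
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        # Spectral part: convolve across bands only (spatial kernel size 1).
        self.spectral = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))

    def forward(self, x):
        # x: (batch, channels, bands, height, width)
        return torch.relu(self.spectral(torch.relu(self.spatial(x)))) + x


# Example: 31-band hyperspectral patch, 16 feature channels, 32x32 pixels.
x = torch.randn(1, 16, 31, 32, 32)
print(SeparableConv3d(16)(x).shape)  # torch.Size([1, 16, 31, 32, 32])
```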