Deep Learning for In-Orbit Cloud Segmentation and Classification in
Hyperspectral Satellite Data
- URL: http://arxiv.org/abs/2403.08695v1
- Date: Wed, 13 Mar 2024 16:58:37 GMT
- Title: Deep Learning for In-Orbit Cloud Segmentation and Classification in
Hyperspectral Satellite Data
- Authors: Daniel Kovac, Jan Mucha, Jon Alvarez Justo, Jiri Mekyska, Zoltan
Galaz, Krystof Novotny, Radoslav Pitonak, Jan Knezik, Jonas Herec, Tor Arne
Johansen
- Abstract summary: This article explores the latest Convolutional Neural Networks (CNNs) for cloud detection aboard hyperspectral satellites.
The performance of the latest 1D CNN (1D-Justo-LiuNet) and two recent 2D CNNs (nnU-net and 2D-Justo-UNet-Simple) for cloud segmentation and classification is assessed.
- Score: 0.7574855592708002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article explores the latest Convolutional Neural Networks (CNNs) for
cloud detection aboard hyperspectral satellites. The performance of the latest
1D CNN (1D-Justo-LiuNet) and two recent 2D CNNs (nnU-net and
2D-Justo-UNet-Simple) for cloud segmentation and classification is assessed.
Evaluation criteria include precision and computational efficiency for in-orbit
deployment. Experiments use NASA's EO-1 Hyperion data, with varying numbers of
spectral channels retained after Principal Component Analysis. Results indicate
that 1D-Justo-LiuNet achieves the highest accuracy, outperforming the 2D CNNs,
and remains compact even with larger spectral channel sets, albeit with
increased inference times. However, the performance of the 1D CNN degrades
under significant channel reduction. In this context, 2D-Justo-UNet-Simple
offers the best balance for in-orbit deployment, considering precision, memory,
and time costs. While nnU-net is suitable for on-ground processing, the
lightweight 1D-Justo-LiuNet is recommended for high-precision applications,
and the lightweight 2D-Justo-UNet-Simple for balanced timing and precision
costs in orbit.
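As a rough illustration of the pipeline the abstract describes, the sketch below reduces a hyperspectral cube's spectral channels with PCA and classifies each pixel's reduced spectrum with a small 1D CNN. The cube size, component count, and the toy network are illustrative assumptions, not the authors' exact data or the 1D-Justo-LiuNet architecture:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Hypothetical hyperspectral cube: height x width x spectral channels
H, W, C = 128, 128, 198                  # Hyperion-like band count (assumption)
cube = np.random.rand(H, W, C).astype(np.float32)

# PCA over the spectral axis: each pixel's spectrum is one sample
k = 16                                   # retained components (illustrative)
pixels = cube.reshape(-1, C)             # (H*W, C)
reduced = PCA(n_components=k).fit_transform(pixels)   # (H*W, k)

class TinySpectralCNN(nn.Module):
    """Toy per-pixel 1D CNN (not the 1D-Justo-LiuNet architecture)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):                # x: (batch, 1, k)
        return self.head(self.features(x).squeeze(-1))

model = TinySpectralCNN()
batch = torch.from_numpy(reduced[:1024]).unsqueeze(1).float()  # (1024, 1, k)
logits = model(batch)                    # per-pixel cloud / not-cloud scores
print(logits.shape)                      # torch.Size([1024, 2])
```

Fewer retained components shrink the 1D model's input and memory footprint, but, as the abstract notes, aggressive channel reduction is also where the 1D CNN's accuracy degrades.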
Related papers
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z)
- Semantic Segmentation in Satellite Hyperspectral Imagery by Deep Learning [54.094272065609815]
We propose a lightweight 1D-CNN model, 1D-Justo-LiuNet, which outperforms state-of-the-art models in the hyperspectral domain.
1D-Justo-LiuNet achieves the highest accuracy (0.93) with the smallest model size (4,563 parameters) among all tested models.
arXiv Detail & Related papers (2023-10-24T21:57:59Z)
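The compactness claim above (thousands rather than millions of parameters) is the kind of property that can be audited directly before deployment. A minimal sketch, assuming a stand-in compact 1D CNN rather than the published 1D-Justo-LiuNet layout:

```python
import torch
import torch.nn as nn

# Stand-in compact 1D CNN over a 32-band spectrum (all shapes are assumptions;
# this is not the published 1D-Justo-LiuNet layout).
model = nn.Sequential(
    nn.Conv1d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),   # 32 -> 28 -> 14
    nn.Conv1d(6, 12, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),  # 14 -> 10 -> 5
    nn.Flatten(),
    nn.Linear(12 * 5, 3),      # e.g. clouds / land / water
)

n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params}")        # a few hundred for this sketch
assert model(torch.zeros(4, 1, 32)).shape == torch.Size([4, 3])
```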
- Faster hyperspectral image classification based on selective kernel mechanism using deep convolutional networks [18.644268589334217]
This letter designs the Faster Selective Kernel mechanism Network (FSKNet), which balances accuracy against computational cost.
It designs 3D-CNN and 2D-CNN conversion modules, using the 3D-CNN for feature extraction while reducing the spatial and spectral dimensionality.
FSKNet achieves high accuracy on the IN, UP, Salinas, and Botswana data sets with very small parameters.
arXiv Detail & Related papers (2022-02-14T02:14:50Z)
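The 3D-CNN-to-2D-CNN conversion described above can be sketched as a 3D convolution that strides along the spectral axis (reducing the band count) followed by folding the remaining bands into the channel axis for cheaper 2D convolutions. Shapes and layer sizes below are assumptions, not the published FSKNet module:

```python
import torch
import torch.nn as nn

class Conv3DTo2D(nn.Module):
    """Sketch of a 3D-CNN -> 2D-CNN conversion module (illustrative shapes)."""
    def __init__(self, bands=16, mid=4):
        super().__init__()
        # 3D conv mixes spatial and spectral dims; stride 2 on the spectral
        # axis halves the number of bands (dimensionality reduction).
        self.conv3d = nn.Conv3d(1, mid, kernel_size=3, stride=(2, 1, 1), padding=1)
        self.bands_out = (bands + 1) // 2
        # After folding spectra into channels, cheap 2D convs take over.
        self.conv2d = nn.Conv2d(mid * self.bands_out, 32, kernel_size=3, padding=1)

    def forward(self, x):                       # x: (batch, 1, bands, H, W)
        f = torch.relu(self.conv3d(x))          # (batch, mid, bands/2, H, W)
        b, c, d, h, w = f.shape
        f = f.reshape(b, c * d, h, w)           # fold spectral axis into channels
        return torch.relu(self.conv2d(f))       # (batch, 32, H, W)

m = Conv3DTo2D()
print(m(torch.zeros(2, 1, 16, 8, 8)).shape)     # torch.Size([2, 32, 8, 8])
```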
- Application of 2-D Convolutional Neural Networks for Damage Detection in Steel Frame Structures [0.0]
We present an application of 2-D convolutional neural networks (2-D CNNs) designed to perform both feature extraction and classification stages.
The method uses a network of lightweight CNNs instead of a deep one and takes raw acceleration signals as input.
arXiv Detail & Related papers (2021-10-29T16:29:31Z)
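One plausible reading of the setup above is that raw acceleration windows from several sensors are stacked into a 2D sensors-by-time grid for a shallow 2D CNN. The sensor count, window length, and binary damaged/undamaged head below are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical setup: 8 accelerometers, 256-sample windows of raw signals,
# stacked into a (sensors x time) grid; a shallow 2D CNN does both feature
# extraction and classification (damaged / undamaged).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=(3, 7), padding=(1, 3)), nn.ReLU(),
    nn.MaxPool2d((1, 4)),                    # pool along time only
    nn.Conv2d(8, 16, kernel_size=(3, 7), padding=(1, 3)), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)

windows = torch.randn(4, 1, 8, 256)          # (batch, 1, sensors, samples)
print(model(windows).shape)                  # torch.Size([4, 2])
```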
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-01-31T14:51:49Z)
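The sub-bit idea can be illustrated as follows: a binarized 3x3 kernel is one of 2^9 = 512 sign patterns, so restricting kernels to a small codebook of K patterns needs only log2(K) bits per kernel. The nearest-codeword assignment below is an illustrative stand-in for the paper's kernel-aware optimization framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend float conv kernels from a trained layer: 64 kernels of 3x3
kernels = rng.normal(size=(64, 3, 3))
binary = np.sign(kernels).reshape(64, 9)        # BNN step: 9 bits per kernel

# Sub-bit idea (illustrative): restrict kernels to a small codebook so each
# kernel needs only log2(K) bits instead of 9.
K = 16                                          # 16 codewords -> 4 bits/kernel
codebook = np.sign(rng.normal(size=(K, 9)))     # stand-in for a learned codebook

# Assign each binary kernel to its nearest codeword (Hamming distance)
dists = (binary[:, None, :] != codebook[None, :, :]).sum(-1)   # (64, K)
indices = dists.argmin(axis=1)                  # 4-bit index per kernel
compressed = codebook[indices].reshape(64, 3, 3)

print(f"bits/kernel: 9 -> {int(np.log2(K))}")
```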
- PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection [100.60209139039472]
We propose the Point-Voxel Region-based Convolutional Neural Networks (PV-RCNNs) for accurate 3D detection from point clouds.
Our proposed PV-RCNNs significantly outperform previous state-of-the-art 3D detection methods on both the Waymo Open Dataset and the highly competitive KITTI benchmark.
arXiv Detail & Related papers (2021-01-25T18:43:57Z)
- Hyperspectral Image Classification: Artifacts of Dimension Reduction on Hybrid CNN [1.2875323263074796]
2D and 3D CNN models have proved highly efficient in exploiting the spatial and spectral information of Hyperspectral Images.
This work proposes a lightweight CNN model (a 3D-CNN followed by a 2D-CNN) which significantly reduces the computational cost.
arXiv Detail & Related papers (2020-08-12T04:26:18Z)
- FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, computational complexity of the ternary inner product can be reduced by a factor of 2.
We carefully design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
arXiv Detail & Related papers (2020-02-17T18:54:37Z)
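A common threshold-based ternarizer, sketched below, maps weights to {-1, 0, +1} with a per-tensor scale. This is a generic sketch, not FATNN's implementation-dependent algorithm; the factor-of-2 inner-product saving concerns the bit-level execution, which this sketch does not reproduce:

```python
import numpy as np

def ternarize(w, delta_ratio=0.7):
    """Threshold-based ternarization to {-1, 0, +1} with a per-tensor scale.
    A common heuristic (illustrative, not FATNN's exact algorithm)."""
    delta = delta_ratio * np.abs(w).mean()      # dead-zone threshold
    t = np.where(w > delta, 1, np.where(w < -delta, -1, 0))
    nonzero = np.abs(t).sum()
    alpha = np.abs(w[t != 0]).mean() if nonzero else 0.0   # scale factor
    return alpha * t, t

w = np.random.randn(8, 8).astype(np.float32)
w_q, codes = ternarize(w)
print(np.unique(codes))        # [-1  0  1]
# Ternary inner products reduce to additions/subtractions of the nonzero
# entries; FATNN shows the bit-level implementation can be ~2x cheaper.
```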
- Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations [22.71924873981158]
Precision gating (PG) is an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks.
PG achieves excellent results on CNNs, including statically compressed mobile-friendly networks such as ShuffleNet.
Compared to 8-bit uniform quantization, PG obtains a 1.2% improvement in perplexity per word with a 2.7× computational cost reduction on LSTM.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
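The dual-precision mechanism can be sketched as: run activations at low precision, then recompute only the "important" ones at high precision. The fixed magnitude threshold below stands in for PG's learned, end-to-end trainable gate:

```python
import numpy as np

def fake_quant(x, bits):
    """Uniform fake-quantization of activations to `bits` bits (illustrative)."""
    scale = (2 ** bits - 1) / (x.max() - x.min() + 1e-8)
    return np.round((x - x.min()) * scale) / scale + x.min()

x = np.random.randn(1024).astype(np.float32)    # pre-activation values

low = fake_quant(x, bits=4)                     # cheap low-precision pass
gate = np.abs(low) > 1.0                        # "important" activations;
                                                # threshold is a stand-in for
                                                # PG's learned gating
mixed = np.where(gate, fake_quant(x, bits=8), low)

print(f"high-precision fraction: {gate.mean():.2%}")
```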
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of full-precision networks.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)