Utilizing a Novel Deep Learning Method for Scene Categorization in Remote Sensing Data
- URL: http://arxiv.org/abs/2506.22939v1
- Date: Sat, 28 Jun 2025 16:12:28 GMT
- Title: Utilizing a Novel Deep Learning Method for Scene Categorization in Remote Sensing Data
- Authors: Ghufran A. Omran, Wassan Saad Abduljabbar Hayale, Ahmad AbdulQadir AlRababah, Israa Ibraheem Al-Barazanchi, Ravi Sekhar, Pritesh Shah, Sushma Parihar, Harshavardhan Reddy Penubadi,
- Abstract summary: This paper introduces an innovative technique referred to as the Cuttlefish Optimized Bidirectional Recurrent Neural Network (CO-BRNN) for scene categorization in remote sensing data. The investigation compares the performance of CO-BRNN with current techniques, including Multilayer Perceptron-Convolutional Neural Network (MLP-CNN), Convolutional Neural Network-Long Short Term Memory (CNN-LSTM), Long Short Term Memory-Conditional Random Field (LSTM-CRF), Graph-Based (GB), Multilabel Image Retrieval Model (MIRM-CF), and Convolutional Neural Networks Data Augmentation (CNN-DA).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scene categorization (SC) in remotely acquired images is an important subject with broad consequences in different fields, including catastrophe control, ecological observation, urban planning, and more. Nevertheless, despite its many applications, reaching a high degree of accuracy in SC from remote observation data has proven to be difficult. This is because conventional deep learning models require large databases with high variety and high levels of noise to capture important visual features. To address these problems, this paper introduces an innovative technique referred to as the Cuttlefish Optimized Bidirectional Recurrent Neural Network (CO-BRNN) for scene categorization in remote sensing data. The investigation compares the performance of CO-BRNN with current techniques, including Multilayer Perceptron-Convolutional Neural Network (MLP-CNN), Convolutional Neural Network-Long Short Term Memory (CNN-LSTM), Long Short Term Memory-Conditional Random Field (LSTM-CRF), Graph-Based (GB), Multilabel Image Retrieval Model (MIRM-CF), and Convolutional Neural Networks Data Augmentation (CNN-DA). The results demonstrate that CO-BRNN attained the highest accuracy of 97%, followed by LSTM-CRF with 90%, MLP-CNN with 85%, and CNN-LSTM with 80%. The study highlights the significance of physical validation to ensure the effectiveness of satellite data.
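The paper itself provides no code, so the following is only a minimal PyTorch sketch of a bidirectional recurrent classifier for scene patches, assuming image rows are fed to a BiLSTM as a sequence; the hidden size, depth, and class count are placeholder hyperparameters standing in for values the Cuttlefish Optimization Algorithm would tune in the authors' CO-BRNN.

```python
# Minimal sketch of a bidirectional RNN scene classifier (not the authors' CO-BRNN).
# Image rows are treated as a sequence; hidden_size, num_layers, and num_classes are
# placeholders for hyperparameters a metaheuristic such as the cuttlefish algorithm would tune.
import torch
import torch.nn as nn

class BiRNNSceneClassifier(nn.Module):
    def __init__(self, row_width=64, hidden_size=128, num_layers=2, num_classes=10):
        super().__init__()
        self.rnn = nn.LSTM(row_width, hidden_size, num_layers,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):             # x: (batch, rows, row_width)
        out, _ = self.rnn(x)          # (batch, rows, 2 * hidden_size)
        return self.head(out[:, -1])  # classify from the final time step

model = BiRNNSceneClassifier()
logits = model(torch.randn(8, 64, 64))  # eight 64x64 single-band patches as row sequences
print(logits.shape)                     # torch.Size([8, 10])
```

Classifying from the final time step keeps the sketch short; pooling over all steps would be an equally reasonable choice.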
Related papers
- I Can't Believe It's Not Real: CV-MuSeNet: Complex-Valued Multi-Signal Segmentation [2.8057339957917673]
Cognitive radio systems enable dynamic spectrum access with the aid of recent innovations in neural networks. Traditional real-valued neural networks (RVNNs) face difficulties in low signal-to-noise ratio (SNR) environments. This work presents CMuSeNet, a complex-valued multi-signal segmentation network for wideband spectrum sensing.
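CMuSeNet's architecture is not reproduced here; the sketch below only illustrates the core building block a complex-valued network rests on, a complex 1-D convolution assembled from two real convolutions via (a+bi)(w_r+i*w_i) = (a*w_r - b*w_i) + i(a*w_i + b*w_r). The channel counts and signal length are assumptions.

```python
# Illustrative complex-valued 1-D convolution built from two real convolutions
# (not CMuSeNet itself): (a+bi)*(w_r+i*w_i) = (a*w_r - b*w_i) + i*(a*w_i + b*w_r).
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # real-part weights
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # imaginary-part weights

    def forward(self, real, imag):
        out_r = self.conv_r(real) - self.conv_i(imag)
        out_i = self.conv_i(real) + self.conv_r(imag)
        return out_r, out_i

layer = ComplexConv1d(1, 16, kernel_size=5, padding=2)
i_sig, q_sig = torch.randn(4, 1, 1024), torch.randn(4, 1, 1024)  # I/Q samples of a capture
out_r, out_i = layer(i_sig, q_sig)
print(out_r.shape)  # torch.Size([4, 16, 1024])
```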
arXiv Detail & Related papers (2025-05-21T20:08:02Z)
- Physical Rule-Guided Convolutional Neural Network [0.0]
Physics-Guided Neural Networks (PGNNs) have emerged to address limitations by integrating scientific principles and real-world knowledge.
This paper proposes a novel Physics-Guided CNN (PGCNN) architecture that incorporates dynamic, trainable, and automated LLM-generated, widely recognized rules into the model as custom layers.
The PGCNN is evaluated on multiple datasets, demonstrating superior performance compared to a baseline CNN model.
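The paper's rule layers are LLM-generated and dataset-specific, so the sketch below only illustrates the general pattern with a hand-written toy rule: a custom layer that suppresses class scores the rule deems implausible for the current input. The backbone, rule, and penalty value are all assumptions.

```python
# Rough illustration of a rule-guided custom layer (not the paper's PGCNN):
# a hand-written rule penalizes logits for classes it deems implausible.
import torch
import torch.nn as nn

class RuleLayer(nn.Module):
    """Subtracts a penalty from class logits when a simple physical rule is violated."""
    def __init__(self, penalty=5.0):
        super().__init__()
        self.penalty = penalty

    def forward(self, logits, images):
        mean_brightness = images.mean(dim=(1, 2, 3))          # (batch,)
        violates = (mean_brightness < 0.2).float()            # toy rule: dark image -> not class 0
        mask = torch.zeros_like(logits)
        mask[:, 0] = violates * self.penalty
        return logits - mask

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
rule = RuleLayer()
x = torch.rand(2, 3, 32, 32)
print(rule(backbone(x), x).shape)  # torch.Size([2, 4])
```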
arXiv Detail & Related papers (2024-09-03T17:32:35Z)
- Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator [4.554628904670269]
We propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators.
Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy.
arXiv Detail & Related papers (2022-07-31T09:03:57Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
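As a simplified illustration (not the authors' overcomplete architecture), the sketch below keeps the receptive field narrow by stacking only 3x3 convolutions with no downsampling, which biases the network toward the low-level local statistics that matter for speckle.

```python
# Toy despeckling CNN with small kernels and no downsampling (a simplification,
# not the paper's overcomplete design): the narrow receptive field keeps the
# network focused on low-level, local speckle statistics.
import torch
import torch.nn as nn

class SmallReceptiveFieldDespeckler(nn.Module):
    def __init__(self, channels=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)  # residual learning: the network predicts the speckle component

sar_patch = torch.rand(1, 1, 128, 128)   # single-channel SAR intensity patch
print(SmallReceptiveFieldDespeckler()(sar_patch).shape)  # torch.Size([1, 1, 128, 128])
```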
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Lost Vibration Test Data Recovery Using Convolutional Neural Network: A Case Study [0.0]
This paper proposes a CNN algorithm for the Alamosa Canyon Bridge, a real structure.
Three different CNN models were considered to predict the data of one and two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
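The bridge dataset and exact models are not available here; the sketch below is a generic 1-D CNN that reconstructs one missing accelerometer channel from the remaining channels, with assumed channel counts and window length.

```python
# Generic sketch of recovering one malfunctioning sensor channel from the others
# with a 1-D CNN (channel counts and window length are assumptions, not the
# Alamosa Canyon Bridge setup).
import torch
import torch.nn as nn

recover = nn.Sequential(
    nn.Conv1d(in_channels=5, out_channels=32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=9, padding=4),     # estimate of the missing channel
)

healthy = torch.randn(16, 5, 2048)                  # 5 working sensors, 2048-sample windows
target = torch.randn(16, 1, 2048)                   # recorded signal of the lost sensor (training)
predicted_missing = recover(healthy)                # (16, 1, 2048)
loss = nn.functional.mse_loss(predicted_missing, target)
```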
arXiv Detail & Related papers (2022-04-11T23:24:03Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- SpikeMS: Deep Spiking Neural Network for Motion Segmentation [7.491944503744111]
SpikeMS is the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation.
We show that SpikeMS is capable of incremental predictions, or predictions from smaller amounts of test data than it is trained on.
arXiv Detail & Related papers (2021-05-13T21:34:55Z)
- Towards Extremely Compact RNNs for Video Recognition with Fully Decomposed Hierarchical Tucker Structure [41.41516453160845]
We propose to develop extremely compact RNN models with fully decomposed hierarchical Tucker (FDHT) structure.
Our experimental results on several popular video recognition datasets show that our proposed fully decomposed hierarchical Tucker-based LSTM is extremely compact and highly efficient.
arXiv Detail & Related papers (2021-04-12T18:40:44Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
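A time delay neural network layer can be viewed as a 1-D convolution over time with a fixed temporal context; as a rough sketch under assumed feature sizes (not the paper's configuration), the model below stacks dilated 1-D convolutions over spectrogram frames and predicts a magnitude mask.

```python
# Minimal TDNN-style enhancer: dilated 1-D convolutions over spectrogram frames
# predicting a per-bin magnitude mask (feature sizes are assumptions, not the paper's).
import torch
import torch.nn as nn

class TDNNEnhancer(nn.Module):
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_bins, hidden, kernel_size=5, padding=2, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
            nn.Conv1d(hidden, n_bins, kernel_size=1),
        )

    def forward(self, noisy_mag):                  # (batch, n_bins, frames)
        mask = torch.sigmoid(self.net(noisy_mag))  # per-bin mask in [0, 1]
        return mask * noisy_mag                    # enhanced magnitude spectrogram

print(TDNNEnhancer()(torch.rand(2, 257, 100)).shape)  # torch.Size([2, 257, 100])
```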
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
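The tandem learning framework itself is not reproduced; the snippet below only demonstrates the rate-coding intuition that underlies ANN-to-SNN conversion: an integrate-and-fire neuron driven by a constant input fires at a rate that approximates the ReLU of that input over enough time steps. The threshold and simulation horizon are illustrative.

```python
# Rate-coding intuition behind ANN-to-SNN conversion (not the paper's tandem-learning
# framework): an integrate-and-fire neuron driven by a constant input fires at a rate
# that approximates ReLU(input) when simulated for many time steps.
import numpy as np

def if_neuron_rate(drive, timesteps=200, threshold=1.0):
    """Simulate an integrate-and-fire neuron and return its average firing rate."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += drive             # integrate the (constant) input current
        if v >= threshold:     # fire and reset by subtracting the threshold
            spikes += 1
            v -= threshold
    return spikes / timesteps

for drive in (-0.3, 0.1, 0.5, 0.9):
    print(f"input {drive:+.1f}: ReLU={max(drive, 0):.2f}, IF rate={if_neuron_rate(drive):.2f}")
```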
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Centimeter-Level Indoor Localization using Channel State Information with Recurrent Neural Networks [12.193558591962754]
This paper proposes a neural network method to estimate centimeter-level indoor positions from real CSI data collected with linear antennas.
It utilizes the amplitude of the channel response or a correlation matrix as the input, which can greatly reduce the data size and suppress noise.
Also, it makes use of the consistency of the user motion trajectory via a Recurrent Neural Network (RNN) and signal-to-noise ratio (SNR) information, which can further improve the estimation accuracy.
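As a generic illustration under assumed dimensions (not the paper's network), the sketch below feeds a trajectory of CSI amplitude vectors to an LSTM and regresses a 2-D position at every step, which is one simple way to exploit the motion-trajectory consistency mentioned above.

```python
# Generic CSI-to-position regressor (dimensions are assumptions, not the paper's setup):
# an LSTM consumes a trajectory of CSI amplitude vectors and outputs estimated
# 2-D coordinates at each step.
import torch
import torch.nn as nn

class CSILocalizer(nn.Module):
    def __init__(self, n_subcarriers=30, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_subcarriers, hidden, batch_first=True)
        self.to_xy = nn.Linear(hidden, 2)     # (x, y) position estimate

    def forward(self, csi_amplitudes):        # (batch, time, n_subcarriers)
        h, _ = self.rnn(csi_amplitudes)
        return self.to_xy(h)                  # (batch, time, 2)

traj = torch.rand(4, 50, 30)                  # 4 trajectories, 50 CSI snapshots each
print(CSILocalizer()(traj).shape)             # torch.Size([4, 50, 2])
```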
arXiv Detail & Related papers (2020-02-04T17:10:18Z)
- Disentangling Trainability and Generalization in Deep Neural Networks [45.15453323967438]
We analyze the spectrum of the Neural Tangent Kernel (NTK) for trainability and generalization across a range of networks.
We find that CNNs without global average pooling behave almost identically to FCNs, but that CNNs with pooling have markedly different and often better generalization performance.
arXiv Detail & Related papers (2019-12-30T18:53:24Z)