A Light-weight Deep Learning Model for Remote Sensing Image
Classification
- URL: http://arxiv.org/abs/2302.13028v1
- Date: Sat, 25 Feb 2023 09:02:01 GMT
- Title: A Light-weight Deep Learning Model for Remote Sensing Image
Classification
- Authors: Lam Pham, Cam Le, Dat Ngo, Anh Nguyen, Jasmin Lampert, Alexander
Schindler, Ian McLoughlin
- Abstract summary: We present a high-performance and light-weight deep learning model for Remote Sensing Image Classification (RSIC).
By conducting extensive experiments on the NWPU-RESISC45 benchmark, our proposed teacher-student model outperforms the state-of-the-art systems.
- Score: 70.66164876551674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a high-performance and light-weight deep learning
model for Remote Sensing Image Classification (RSIC), the task of identifying
the aerial scene of a remote sensing image. To this end, we first evaluate
various benchmark convolutional neural network (CNN) architectures: MobileNet
V1/V2, ResNet 50/151V2, InceptionV3/InceptionResNetV2, EfficientNet B0/B7,
DenseNet 121/201, ConvNeXt Tiny/Large. Then, the best-performing models are
selected to train a compact model in a teacher-student arrangement. The
knowledge distillation from the teacher aims to achieve high performance with
significantly reduced complexity. By conducting extensive experiments on the
NWPU-RESISC45 benchmark, our proposed teacher-student model outperforms the
state-of-the-art systems and has the potential to be applied to a wide range of
edge devices.
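As a minimal sketch of the teacher-student arrangement the abstract describes (assuming PyTorch; the temperature, loss weighting, and training-step structure are illustrative assumptions, not the paper's exact configuration):

```python
# Minimal sketch of teacher-student knowledge distillation for image
# classification. Hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target KL divergence."""
    # Soften both distributions with a raised temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def train_step(student, teacher, images, labels, optimizer):
    """One step: the frozen high-performing teacher guides a compact student."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The student minimizes both the usual cross-entropy on the labels and a KL term that pulls its softened predictions toward the teacher's, which is how a distilled model can approach the teacher's accuracy at a fraction of the complexity.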
Related papers
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - Semantic Segmentation in Satellite Hyperspectral Imagery by Deep Learning [54.094272065609815]
We propose a lightweight 1D-CNN model, 1D-Justo-LiuNet, which outperforms state-of-the-art models in the hyperspectral domain.
1D-Justo-LiuNet achieves the highest accuracy (0.93) with the smallest model size (4,563 parameters) among all tested models.
arXiv Detail & Related papers (2023-10-24T21:57:59Z) - A Robust and Low Complexity Deep Learning Model for Remote Sensing Image
Classification [1.9019295680940274]
We present a robust and low-complexity deep learning model for Remote Sensing Image Classification (RSIC).
By conducting extensive experiments on the benchmark dataset NWPU-RESISC45, we achieve a robust and low-complexity model.
arXiv Detail & Related papers (2022-11-05T06:14:30Z) - Generative Adversarial Super-Resolution at the Edge with Knowledge
Distillation [1.3764085113103222]
Single-Image Super-Resolution can support robotic tasks in environments where a reliable visual stream is required.
We propose an efficient Generative Adversarial Network model for real-time Super-Resolution, called EdgeSRGAN.
arXiv Detail & Related papers (2022-09-07T10:58:41Z) - Classification of Astronomical Bodies by Efficient Layer Fine-Tuning of
Deep Neural Networks [0.0]
The SDSS-IV dataset contains information about various astronomical bodies such as Galaxies, Stars, and Quasars captured by observatories.
Inspired by our work on deep multimodal learning, we further extended our research to the fine-tuning of these architectures to study the effect on classification performance (a generic layer-freezing sketch follows this list).
arXiv Detail & Related papers (2022-05-14T20:08:19Z) - Efficient deep learning models for land cover image classification [0.29748898344267777]
This work experiments with the BigEarthNet dataset for land use land cover (LULC) image classification.
We benchmark different state-of-the-art models, including Convolutional Neural Networks, Multi-Layer Perceptrons, Visual Transformers, EfficientNets and Wide Residual Networks (WRN).
Our proposed lightweight model has an order of magnitude fewer trainable parameters, achieves a 4.5% higher average F-score across all 19 LULC classes, and trains two times faster than the state-of-the-art ResNet50 model that we use as a baseline.
arXiv Detail & Related papers (2021-11-18T00:03:14Z) - DisCo: Remedy Self-supervised Learning on Lightweight Models with
Distilled Contrastive Learning [94.89221799550593]
Self-supervised representation learning (SSL) has received widespread attention from the community.
Recent research argues that its performance suffers a cliff fall when the model size decreases.
We propose a simple yet effective Distilled Contrastive Learning (DisCo) method to ease the issue by a large margin.
arXiv Detail & Related papers (2021-04-19T08:22:52Z) - Gaussian RAM: Lightweight Image Classification via Stochastic
Retina-Inspired Glimpse and Reinforcement Learning [29.798579906253696]
We propose a reinforcement-learning-based lightweight deep neural network for large-scale image classification.
We evaluate the model on cluttered MNIST, Large CIFAR-10 and Large CIFAR-100 datasets.
arXiv Detail & Related papers (2020-11-12T04:27:06Z) - PV-NAS: Practical Neural Architecture Search for Video Recognition [83.77236063613579]
Deep neural networks for video tasks are highly customized, and the design of such networks requires domain experts and costly trial-and-error tests.
Recent advances in network architecture search have boosted image recognition performance by a large margin.
In this study, we propose a practical solution, namely Practical Video Neural Architecture Search (PV-NAS).
arXiv Detail & Related papers (2020-11-02T08:50:23Z) - Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
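The astronomical-classification entry above relies on efficient layer fine-tuning of pretrained deep networks. A minimal sketch of that general recipe, assuming a torchvision ResNet50 backbone and the three SDSS classes mentioned in the summary (the backbone and the layer cut-off are illustrative assumptions, not the paper's reported setup):

```python
# Minimal sketch of efficient layer fine-tuning: freeze a pretrained
# backbone and retrain only the topmost layers on the new task.
# Backbone and trainable-layer choices are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes, trainable_layers=("layer4", "fc")):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    # Freeze all pretrained weights first ...
    for param in model.parameters():
        param.requires_grad = False
    # ... then replace the classification head for the new label set
    # and unfreeze only the selected top layers.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    for name, param in model.named_parameters():
        if name.split(".")[0] in trainable_layers:
            param.requires_grad = True
    return model

model = build_finetune_model(num_classes=3)  # e.g. galaxy / star / quasar
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

Only the unfrozen parameters are handed to the optimizer, so training updates just the top of the network, which is what makes layer fine-tuning far cheaper than training from scratch.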