Efficient deep learning models for land cover image classification
- URL: http://arxiv.org/abs/2111.09451v1
- Date: Thu, 18 Nov 2021 00:03:14 GMT
- Title: Efficient deep learning models for land cover image classification
- Authors: Ioannis Papoutsis, Nikolaos-Ioannis Bountos, Angelos Zavras, Dimitrios
Michail, Christos Tryfonopoulos
- Abstract summary: This work experiments with the BigEarthNet dataset for land use land cover (LULC) image classification.
We benchmark different state-of-the-art models, including Convolutional Neural Networks, Multi-Layer Perceptrons, Visual Transformers, EfficientNets and Wide Residual Networks (WRN).
Our proposed lightweight model has an order of magnitude fewer trainable parameters, achieves 4.5% higher averaged f-score classification accuracy for all 19 LULC classes and trains two times faster than a ResNet50 state-of-the-art model that we use as a baseline.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The availability of the sheer volume of Copernicus Sentinel imagery has
created new opportunities for land use land cover (LULC) mapping at large
scales using deep learning. Training on such large datasets though is a
non-trivial task. In this work we experiment with the BigEarthNet dataset for
LULC image classification and benchmark different state-of-the-art models,
including Convolutional Neural Networks, Multi-Layer Perceptrons, Visual
Transformers, EfficientNets and Wide Residual Network (WRN) architectures. Our
aim is to balance classification accuracy, training time and inference rate.
We propose a framework based on EfficientNets for compound scaling of WRNs in
terms of network depth, width and input data resolution, for efficiently
training and testing different model setups. We design a novel scaled WRN
architecture enhanced with an Efficient Channel Attention mechanism. Our
proposed lightweight model has an order of magnitude fewer trainable parameters,
achieves 4.5% higher averaged f-score classification accuracy for all 19 LULC
classes and trains two times faster than a ResNet50
state-of-the-art model that we use as a baseline. We provide access to more
than 50 trained models, along with our code for distributed training on
multiple GPU nodes.
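The compound-scaling framework described above can be sketched in a few lines. The base WRN configuration, the scaling factors `ALPHA`, `BETA`, `GAMMA` and the `eca_kernel_size` defaults below are illustrative assumptions following the general EfficientNet and ECA-Net conventions, not the authors' actual settings:

```python
import math

# Assumed per-dimension scaling factors; in EfficientNet-style compound
# scaling these are typically found by grid search under a FLOPs budget.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(base_depth, base_width, base_resolution, phi):
    """Scale network depth, width and input resolution jointly by a
    single compound coefficient phi."""
    depth = int(round(base_depth * ALPHA ** phi))
    width = int(round(base_width * BETA ** phi))
    resolution = int(round(base_resolution * GAMMA ** phi))
    return depth, width, resolution

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive 1-D convolution kernel size used by Efficient Channel
    Attention: the nearest odd number to (log2(C) + b) / gamma."""
    t = int(abs((math.log2(channels) + b) / gamma))
    return t if t % 2 else t + 1
```

With `phi = 0` the base configuration is returned unchanged, while larger `phi` grows all three dimensions together rather than tuning each one independently; the ECA rule lets wider stages attend over proportionally larger channel neighborhoods without adding trainable parameters.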
Related papers
- Effective pruning of web-scale datasets based on complexity of concept
clusters [48.125618324485195]
We present a method for pruning large-scale multimodal datasets for training CLIP-style models on ImageNet.
We find that training on a smaller set of high-quality data can lead to higher performance with significantly lower training costs.
We achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.
arXiv Detail & Related papers (2024-01-09T14:32:24Z) - Toward efficient resource utilization at edge nodes in federated learning [0.6990493129893112]
Federated learning enables edge nodes to collaboratively contribute to constructing a global model without sharing their data.
Computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning applications.
We propose and evaluate an FL strategy inspired by transfer learning in order to reduce resource utilization on devices.
arXiv Detail & Related papers (2023-09-19T07:04:50Z) - Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z) - A Light-weight Deep Learning Model for Remote Sensing Image
Classification [70.66164876551674]
We present a high-performance and light-weight deep learning model for Remote Sensing Image Classification (RSIC).
By conducting extensive experiments on the NWPU-RESISC45 benchmark, our proposed teacher-student models outperform the state-of-the-art systems.
arXiv Detail & Related papers (2023-02-25T09:02:01Z) - A Robust and Low Complexity Deep Learning Model for Remote Sensing Image
Classification [1.9019295680940274]
We present a robust and low-complexity deep learning model for Remote Sensing Image Classification (RSIC).
By conducting extensive experiments on the benchmark datasets NWPU-RESISC45, we achieve a robust and low-complexity model.
arXiv Detail & Related papers (2022-11-05T06:14:30Z) - FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z) - Classification of Quasars, Galaxies, and Stars in the Mapping of the
Universe Multi-modal Deep Learning [0.0]
The fourth version of the Sloan Digital Sky Survey (SDSS-4), Data Release 16 dataset was used to classify objects into galaxies, stars, and quasars using machine learning and deep learning architectures.
We build a novel multi-modal architecture and achieve state-of-the-art results.
arXiv Detail & Related papers (2022-05-22T05:17:31Z) - Classification of Astronomical Bodies by Efficient Layer Fine-Tuning of
Deep Neural Networks [0.0]
The SDSS-IV dataset contains information about various astronomical bodies such as Galaxies, Stars, and Quasars captured by observatories.
Inspired by our work on deep multimodal learning, we further extended our research to the fine-tuning of these architectures to study their effect in the classification scenario.
arXiv Detail & Related papers (2022-05-14T20:08:19Z) - LilNetX: Lightweight Networks with EXtreme Model Compression and
Structured Sparsification [36.651329027209634]
LilNetX is an end-to-end trainable technique for neural networks.
It enables learning models with a specified accuracy-rate-computation trade-off.
arXiv Detail & Related papers (2022-04-06T17:59:10Z) - Auto-Transfer: Learning to Route Transferrable Representations [77.30427535329571]
We propose a novel adversarial multi-armed bandit approach which automatically learns to route source representations to appropriate target representations.
We see upwards of 5% accuracy improvements compared with the state-of-the-art knowledge transfer methods.
arXiv Detail & Related papers (2022-02-02T13:09:27Z) - Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.