Overhead-MNIST: Machine Learning Baselines for Image Classification
- URL: http://arxiv.org/abs/2107.00436v1
- Date: Thu, 1 Jul 2021 13:30:39 GMT
- Title: Overhead-MNIST: Machine Learning Baselines for Image Classification
- Authors: Erik Larsen, David Noever, Korey MacVittie and John Lilly
- Abstract summary: Twenty-three machine learning algorithms were trained and then scored to establish baseline comparison metrics.
The Overhead-MNIST dataset is a collection of satellite images similar in style to the ubiquitous MNIST hand-written digits.
We present results for the overall best performing algorithm as a baseline for edge deployability and future performance improvement.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Twenty-three machine learning algorithms were trained and then
scored to establish baseline comparison metrics and to select an image
classification algorithm worthy of embedding into mission-critical satellite
imaging systems.
The Overhead-MNIST dataset is a collection of satellite images similar in style
to the ubiquitous MNIST hand-written digits found in the machine learning
literature. The CatBoost classifier, Light Gradient Boosting Machine, and
Extreme Gradient Boosting models produced the highest accuracies, Areas Under
the Curve (AUC), and F1 scores in a PyCaret general comparison. Separate
evaluations showed that a deep convolutional architecture was the most
promising. We present results for the overall best performing algorithm as a
baseline for edge deployability and future performance improvement: a
convolutional neural network (CNN) scoring 0.965 categorical accuracy on unseen
test data.
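The comparison described above ranks candidate models by accuracy, AUC, and F1 score, and reports 0.965 categorical accuracy for the winning CNN. As a minimal illustration of those two reported metrics (not the paper's code; the toy labels below are invented), they can be computed in plain Python:

```python
def categorical_accuracy(y_true, y_pred):
    # Fraction of samples whose predicted class matches the true class.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def macro_f1(y_true, y_pred):
    # Unweighted mean of per-class F1 scores.
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)

# Hypothetical class labels standing in for Overhead-MNIST categories.
y_true = [0, 1, 2, 2, 1, 0, 3, 3]
y_pred = [0, 1, 2, 1, 1, 0, 3, 2]
print(categorical_accuracy(y_true, y_pred))  # 0.75
```

In a library comparison such as the PyCaret run the abstract describes, these metrics would be computed per model and used to rank the candidates before the separate deep-learning evaluation.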
Related papers
- Dual-branch PolSAR Image Classification Based on GraphMAE and Local Feature Extraction [22.39266854681996]
In this paper, we propose a dual-branch classification model based on generative self-supervised learning.
The first branch is a superpixel-branch, which learns superpixel-level polarimetric representations using a generative self-supervised graph masked autoencoder.
To acquire finer classification results, a convolutional neural networks-based pixel-branch is further incorporated to learn pixel-level features.
arXiv Detail & Related papers (2024-08-08T08:17:50Z) - ELFIS: Expert Learning for Fine-grained Image Recognition Using Subsets [6.632855264705276]
We propose ELFIS, an expert learning framework for Fine-Grained Visual Recognition.
A set of neural network-based experts is trained focusing on the meta-categories and integrated into a multi-task framework.
Experiments show improvements in the SoTA FGVR benchmarks of up to +1.3% of accuracy using both CNNs and transformer-based networks.
arXiv Detail & Related papers (2023-03-16T12:45:19Z) - RankDNN: Learning to Rank for Few-shot Learning [70.49494297554537]
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification.
It provides a new perspective on few-shot learning and is complementary to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-28T13:59:31Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - GNNRank: Learning Global Rankings from Pairwise Comparisons via Directed Graph Neural Networks [68.61934077627085]
We introduce GNNRank, a modeling framework compatible with any GNN capable of learning digraph embeddings.
We show that our methods attain competitive and often superior performance compared with existing approaches.
arXiv Detail & Related papers (2022-02-01T04:19:50Z) - An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z) - Reinforcement Learning Based Handwritten Digit Recognition with Two-State Q-Learning [1.8782750537161614]
We present a hybrid approach based on deep learning and reinforcement learning.
Q-learning is used with two Q-states and four actions.
Our approach outperforms other contemporary techniques such as AlexNet, CNN-Nearest Neighbor, and CNN-Support Vector Machine.
arXiv Detail & Related papers (2020-06-28T14:23:36Z) - Patch Based Classification of Remote Sensing Data: A Comparison of 2D-CNN, SVM and NN Classifiers [0.0]
We compare the performance of patch-based SVM and NN classifiers with that of a deep learning algorithm comprising a 2D-CNN and fully connected layers.
Results on both datasets suggest the effectiveness of patch-based SVM and NN.
arXiv Detail & Related papers (2020-06-21T11:07:37Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.