Galaxy Morphology Classification using EfficientNet Architectures
- URL: http://arxiv.org/abs/2008.13611v2
- Date: Wed, 24 Mar 2021 04:53:52 GMT
- Title: Galaxy Morphology Classification using EfficientNet Architectures
- Authors: Shreyas Kalvankar, Hrushikesh Pandit, Pranav Parwate
- Abstract summary: We study the use of EfficientNets and their application to galaxy morphology classification.
We explore the use of EfficientNets to predict the vote fractions of the 79,975 test images from the Galaxy Zoo 2 challenge on Kaggle.
We propose a fine-tuned architecture using EfficientNetB5 to classify galaxies into seven classes: completely round smooth, in-between smooth, cigar-shaped smooth, lenticular, barred spiral, unbarred spiral, and irregular.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the use of EfficientNets and their application to galaxy
morphology classification. We explore the use of EfficientNets to predict the
vote fractions of the 79,975 test images from the Galaxy Zoo 2 challenge on
Kaggle. We evaluate this model using the standard competition metric, the RMSE
score, and rank among the top 3 on the public leaderboard with a public score
of 0.07765. We propose a fine-tuned architecture using EfficientNetB5 to
classify galaxies into seven classes: completely round smooth, in-between
smooth, cigar-shaped smooth, lenticular, barred spiral, unbarred spiral, and
irregular. This network, along with other popular convolutional networks, is
used to classify 29,941 galaxy images. Different metrics such as accuracy,
recall, precision, and F1 score are used to evaluate the performance of the
model, along with a comparative study of other state-of-the-art convolutional
models to determine which one performs best. We obtain an accuracy of 93.7%
with our classification model, with an F1 score of 0.8857. EfficientNets can
be applied to large-scale galaxy classification in future optical surveys that
will provide a large amount of data, such as the Large Synoptic Survey
Telescope.
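The two evaluation settings described above can be sketched in plain Python: the Kaggle competition metric (RMSE over predicted vote fractions) and a macro-averaged F1 score of the kind used for the seven-class model. This is an illustrative reimplementation of the standard formulas under our own function names, not the authors' code.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over all vote-fraction entries, the
    Galaxy Zoo 2 Kaggle competition metric. Each argument is a list of
    per-image rows of vote fractions."""
    n = sum(len(row) for row in y_true)
    sq_err = sum((t - p) ** 2
                 for row_t, row_p in zip(y_true, y_pred)
                 for t, p in zip(row_t, row_p))
    return math.sqrt(sq_err / n)

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1 over integer class labels 0..n_classes-1:
    per-class F1 scores are computed independently, then averaged."""
    scores = []
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / n_classes
```

For the seven-class model the labels would run 0..6; a lower RMSE and a higher macro F1 both indicate better agreement with the Galaxy Zoo 2 labels.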
Related papers
- Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement [4.197377031038214]
Few-Class Arena (FCA) is a unified benchmark for testing efficient image classification models for few classes.
FCA offers a new tool for efficient machine learning in the Few-Class Regime, with goals ranging from a new efficient class similarity proposal, to lightweight model architecture design, to a new scaling law.
arXiv Detail & Related papers (2024-11-02T01:31:47Z) - ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders [104.05133094625137]
We propose a fully convolutional masked autoencoder framework and a new Global Response Normalization layer.
This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets.
arXiv Detail & Related papers (2023-01-02T18:59:31Z) - EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for
Mobile Vision Applications [68.35683849098105]
We introduce split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups.
Our EdgeNeXt model with 1.3M parameters achieves 71.2% top-1 accuracy on ImageNet-1K.
Our EdgeNeXt model with 5.6M parameters achieves 79.4% top-1 accuracy on ImageNet-1K.
arXiv Detail & Related papers (2022-06-21T17:59:56Z) - ZARTS: On Zero-order Optimization for Neural Architecture Search [94.41017048659664]
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency.
This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, that searches without enforcing the gradient-based approximation used by DARTS.
In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue.
arXiv Detail & Related papers (2021-10-10T09:35:15Z) - Greedy Network Enlarging [53.319011626986004]
We propose a greedy network enlarging method based on the reallocation of computations.
By modifying the computations at different stages step by step, the enlarged network is equipped with optimal allocation and utilization of MACs.
Applying our method to GhostNet, we achieve state-of-the-art ImageNet top-1 accuracies of 80.9% and 84.3%.
arXiv Detail & Related papers (2021-07-31T08:36:30Z) - Overhead-MNIST: Machine Learning Baselines for Image Classification [0.0]
The Overhead-MNIST dataset is a collection of satellite images similar in style to the ubiquitous MNIST hand-written digits.
Twenty-three machine learning algorithms were trained and then scored to establish baseline comparison metrics.
We present results for the overall best-performing algorithm as a baseline for edge deployability and future performance improvement.
arXiv Detail & Related papers (2021-07-01T13:30:39Z) - Morphological classification of compact and extended radio galaxies
using convolutional neural networks and data augmentation techniques [0.0]
This work uses archival data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) to classify radio galaxies into four classes.
The model presented in this work is based on Convolutional Neural Networks (CNNs)
Our model classified selected classes of radio galaxy sources on an independent testing subset with an average of 96% for precision, recall, and F1 score.
arXiv Detail & Related papers (2021-07-01T11:53:18Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Morphological classification of astronomical images with limited
labelling [0.0]
We propose an effective semi-supervised approach for the galaxy morphology classification task, based on active learning of an adversarial autoencoder (AAE) model.
For a binary classification problem (the top-level question of the Galaxy Zoo 2 decision tree), we achieved an accuracy of 93.1% on the test split with only 0.86 million markup actions.
Our best model with additional markup achieves an accuracy of 95.5%.
arXiv Detail & Related papers (2021-04-27T19:26:27Z) - Robust Pollen Imagery Classification with Generative Modeling and Mixup
Training [0.0]
We present a robust deep learning framework that can generalize well for pollen grain aerobiological imagery classification.
We develop a convolutional neural network-based pollen grain classification approach and combine some of the best practices in deep learning for better generalization.
The proposed approach earned a fourth-place in the final rankings in the ICPR-2020 Pollen Grain Classification Challenge.
arXiv Detail & Related papers (2021-02-25T19:39:24Z) - Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.