Scalable End-to-end Recurrent Neural Network for Variable star
classification
- URL: http://arxiv.org/abs/2002.00994v1
- Date: Mon, 3 Feb 2020 19:56:42 GMT
- Title: Scalable End-to-end Recurrent Neural Network for Variable star
classification
- Authors: Ignacio Becker, Karim Pichara, Márcio Catelan, Pavlos Protopapas,
Carlos Aguirre, Fatemeh Nikzat
- Abstract summary: We propose an end-to-end algorithm that automatically learns a representation of light curves that enables accurate automatic classification.
Our method uses minimal data preprocessing, can be updated with a low computational cost for new observations and light curves, and can scale up to massive datasets.
- Score: 1.2722697496405464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: During the last decade, considerable effort has been made to perform
automatic classification of variable stars using machine learning techniques.
Traditionally, light curves are represented as a vector of descriptors or
features used as input for many algorithms. Some features are computationally
expensive and cannot be updated quickly, and hence cannot be applied to large
datasets such as the LSST. Previous work has developed alternative unsupervised
feature extraction algorithms for light curves, but their cost remains high. In
this work, we propose an end-to-end algorithm that automatically learns a
representation of light curves that enables accurate automatic classification.
We study a series of deep learning
architectures based on Recurrent Neural Networks and test them in automated
classification scenarios. Our method uses minimal data preprocessing, can be
updated with a low computational cost for new observations and light curves,
and can scale up to massive datasets. We transform each light curve into an
input matrix representation whose elements are the differences in time and
magnitude, and the outputs are classification probabilities. We test our method
in three surveys: OGLE-III, Gaia and WISE. We obtain accuracies of about $95\%$
in the main classes and $75\%$ in the majority of subclasses. We compare our
results with the Random Forest classifier and obtain competitive accuracies
while being faster and scalable. The analysis shows that the computational
complexity of our approach grows linearly with the light curve size $N$, while
the cost of the traditional approach grows as $N\log{(N)}$.
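As a concrete illustration of the input representation described above, here is a minimal Python sketch (our own, not the authors' code; the function name and shapes are assumptions) that converts an irregularly sampled light curve into the matrix of time and magnitude differences. In the paper's setup, the rows of such a matrix would feed a recurrent network whose softmax output gives the classification probabilities.

```python
import numpy as np

def light_curve_to_matrix(times, mags):
    """Convert a light curve into a (delta_t, delta_m) input matrix.

    Each row holds the time difference and magnitude difference between
    consecutive observations, so no period estimation or hand-crafted
    feature extraction is required.
    """
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    order = np.argsort(times)            # observations may arrive unsorted
    dt = np.diff(times[order])           # time differences
    dm = np.diff(mags[order])            # magnitude differences
    return np.stack([dt, dm], axis=1)    # shape: (n_obs - 1, 2)

# Toy irregularly sampled light curve (days, magnitudes).
t = [0.0, 1.3, 2.9, 7.4]
m = [15.2, 15.5, 15.1, 15.3]
print(light_curve_to_matrix(t, m))
```

Because appending a new observation only appends one row, this representation can be updated in constant time, which is consistent with the abstract's claim of low-cost updates for streaming surveys.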
Related papers
- Automated Sizing and Training of Efficient Deep Autoencoders using
Second Order Algorithms [0.46040036610482665]
We propose a multi-step training method for generalized linear classifiers.
Validation error is minimized by pruning unnecessary inputs, and desired outputs
are improved via a method similar to the Ho-Kashyap rule, sketched below.
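For reference, the classic Ho-Kashyap procedure alluded to above fits in a few lines; this is a textbook sketch of the original rule, not the paper's modified method, and the variable names are ours.

```python
import numpy as np

def ho_kashyap(X, y, rho=0.5, iters=100):
    """Classic Ho-Kashyap training of a linear classifier (textbook sketch).

    Folds labels y in {-1, +1} into the data matrix so that correct
    classification means Z @ w > 0, then alternately updates the weight
    vector w and a positive margin vector b to minimize ||Z @ w - b||^2.
    """
    Z = X * y[:, None]                  # sign-adjusted samples
    Z = np.hstack([Z, y[:, None]])      # absorb the bias term
    Zp = np.linalg.pinv(Z)              # pseudo-inverse, computed once
    b = np.ones(Z.shape[0])             # margins, kept strictly positive
    w = Zp @ b
    for _ in range(iters):
        e = Z @ w - b
        b = b + rho * (e + np.abs(e))   # grow b only where the error is positive
        w = Zp @ b                      # least-squares fit to the new margins
    return w

# Toy linearly separable problem.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(ho_kashyap(X, y))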
arXiv Detail & Related papers (2023-08-11T16:48:31Z)
- Resource saving taxonomy classification with k-mer distributions and
machine learning [2.0196229393131726]
We propose to use $k$-mer distributions obtained from DNA sequences as features to classify their taxonomic origin (see the sketch below).
We show that our approach improves the classification on the genus level and achieves comparable results for the superkingdom and phylum level.
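A minimal sketch of the $k$-mer distribution feature (our own illustration; the function name and defaults are assumptions): every sequence maps to a fixed-length frequency vector over all possible $k$-mers, regardless of sequence length.

```python
from collections import Counter
from itertools import product

def kmer_distribution(seq, k=3, alphabet="ACGT"):
    """Normalized k-mer frequency vector for a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)            # guard against empty input
    # Fixed lexicographic ordering over all 4**k possible k-mers.
    kmers = ("".join(p) for p in product(alphabet, repeat=k))
    return [counts[km] / total for km in kmers]

print(kmer_distribution("ACGTACGTGG", k=2)[:4])   # frequencies of AA, AC, AG, AT
```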
arXiv Detail & Related papers (2023-03-10T08:01:08Z)
- What learning algorithm is in-context learning? Investigations with
linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We present preliminary evidence that in-context learners share algorithmic features with these reference predictors, which are sketched below.
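A quick sketch of those reference predictors under our own toy setup: given the in-context examples (X, y), exact least squares and ridge regression both admit closed forms.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))             # in-context example inputs
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=20)
x_query = rng.normal(size=5)             # the query the learner must answer

# Exact least squares: w = argmin ||X w - y||^2.
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge regression: w = (X^T X + lam I)^{-1} X^T y.
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

print(x_query @ w_ls, x_query @ w_ridge)  # predictions a transformer should match
```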
arXiv Detail & Related papers (2022-11-28T18:59:51Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Towards Meta-learned Algorithm Selection using Implicit Fidelity
Information [13.750624267664156]
IMFAS produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost.
We show that it is able to beat Successive Halving (sketched below) while using at most half the fidelity sequence at test time.
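For context, a compact sketch of the Successive Halving baseline mentioned above (our own simplification; `evaluate` is a stand-in for partially training a candidate at a given fidelity):

```python
import random

def successive_halving(candidates, evaluate, budget=1, eta=2, rounds=3):
    """Successive Halving: evaluate all, keep the best 1/eta, raise the budget."""
    pool = list(candidates)
    for _ in range(rounds):
        scores = {c: evaluate(c, budget) for c in pool}
        pool.sort(key=lambda c: scores[c], reverse=True)   # best first
        pool = pool[:max(1, len(pool) // eta)]             # keep top fraction
        budget *= eta                                      # spend more per survivor
    return pool[0]

# Toy usage: each "algorithm" is an int whose noisy quality we probe.
best = successive_halving(range(16), lambda c, b: c + random.random() / b)
print(best)   # almost always 15
```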
arXiv Detail & Related papers (2022-06-07T09:14:24Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and kept fixed during training.
Our experimental results show that our method achieves similar performance on image classification for balanced datasets; one way to construct such a frame is sketched below.
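A minimal numpy sketch of such a fixed classifier (our own construction following the standard simplex-ETF formula, not the paper's code): K unit vectors whose pairwise cosine is exactly -1/(K-1).

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    """Random simplex equiangular tight frame for fixed classifier weights.

    Requires dim >= num_classes. Columns are unit vectors with pairwise
    cosine -1/(K-1): the geometry a learnable classifier tends toward
    under neural collapse, so it can be fixed from the start.
    """
    K = num_classes
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(dim, K)))   # orthonormal columns
    M = U @ (np.eye(K) - np.ones((K, K)) / K)        # center the frame
    return np.sqrt(K / (K - 1)) * M                  # rescale columns to unit norm

W = simplex_etf(num_classes=4, dim=16)
print(np.round(W.T @ W, 3))   # identity diagonal, -1/3 everywhere else
```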
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data
via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data and for tasks that require context over a large receptive field; the basic cross-approximation idea is sketched below.
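The underlying cross-approximation (skeleton) idea is easy to sketch on a matrix (our own toy example; the paper's differentiable tensor version is more involved): reconstruct a low-rank array from a handful of its rows and columns.

```python
import numpy as np

def cross_approximation(A, idx_rows, idx_cols):
    """Skeleton approximation A ~ C @ pinv(W) @ R from a few rows/columns.

    Only the selected rows and columns are ever read, which is the point
    when A is too large to materialize in full.
    """
    C = A[:, idx_cols]                    # a few full columns
    R = A[idx_rows, :]                    # a few full rows
    W = A[np.ix_(idx_rows, idx_cols)]     # their intersection
    return C @ np.linalg.pinv(W) @ R

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))    # exactly rank 2
A_hat = cross_approximation(A, idx_rows=[0, 1], idx_cols=[0, 1])
print(np.linalg.norm(A - A_hat))   # ~0: two rows and two columns suffice
```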
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- A Deep Learning Based Ternary Task Classification System Using Gramian
Angular Summation Field in fNIRS Neuroimaging Data [0.15229257192293197]
Functional near-infrared spectroscopy (fNIRS) is a non-invasive, economical method used to study the brain's blood flow patterns.
The proposed method converts the raw fNIRS time series data into an image using the Gramian Angular Summation Field (sketched below).
A Deep Convolutional Neural Network (CNN) based architecture is then used for task classification, including mental arithmetic, motor imagery, and idle state.
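The Gramian Angular Summation Field itself is a short, standard transform (sketched here with our own helper; per-channel handling and image resizing are omitted): rescale the series to [-1, 1], read each value as an angle, and form the pairwise cosine-sum matrix.

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field of a 1-D time series.

    Rescales to [-1, 1], maps each value to an angle phi = arccos(x), and
    returns the matrix cos(phi_i + phi_j): a 2-D "image" of the series.
    """
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1    # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))             # angular encoding
    return np.cos(phi[:, None] + phi[None, :])

img = gasf(np.sin(np.linspace(0, 6, 64)))
print(img.shape)   # (64, 64): one image per fNIRS channel, ready for a CNN
```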
arXiv Detail & Related papers (2021-01-14T22:09:35Z)
- OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax
Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes; one possible construction is sketched below.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
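One plausible reading of how such orthogonality can be enforced by construction (our own sketch, not necessarily OSLNet's exact layer): give each class a disjoint slice of the feature vector, so the class weight vectors stay orthogonal no matter how training updates the free entries.

```python
import numpy as np

def orthogonal_classifier_weights(dim, num_classes, seed=0):
    """Classifier weights kept orthogonal via disjoint supports (illustrative)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(dim, num_classes))
    mask = np.zeros_like(W)
    block = dim // num_classes
    for c in range(num_classes):
        mask[c * block:(c + 1) * block, c] = 1.0   # class c owns one slice
    return W * mask   # zero overlap => W[:, i] . W[:, j] = 0 for i != j

W = orthogonal_classifier_weights(dim=8, num_classes=4)
print(np.round(W.T @ W, 2))   # off-diagonal entries are exactly zero
```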
arXiv Detail & Related papers (2020-04-20T02:41:01Z)
- Imbalance Learning for Variable Star Classification [0.0]
We develop a hierarchical machine learning classification scheme to overcome imbalanced learning problems.
We use 'data-level' approaches to directly augment the training data so that they better describe under-represented classes (a naive rebalancing baseline is sketched below).
We find that a higher classification rate is obtained when using $\texttt{GpFit}$ in the hierarchical model.
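As a stand-in for those data-level approaches, here is the simplest possible rebalancing baseline (naive random oversampling, our own sketch; the paper's augmentations are richer than duplication):

```python
import numpy as np

def oversample(X, y, seed=0):
    """Duplicate minority-class rows until every class matches the largest one."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    keep = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=target - idx.size, replace=True)
        keep.append(np.concatenate([idx, extra]))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

X = np.arange(10).reshape(5, 2)
y = np.array([0, 0, 0, 1, 1])
Xb, yb = oversample(X, y)
print(np.bincount(yb))   # [3 3]: classes now balanced
```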
arXiv Detail & Related papers (2020-02-27T19:01:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.