Classifying High-Energy Celestial Objects with Machine Learning Methods
- URL: http://arxiv.org/abs/2512.11162v1
- Date: Thu, 11 Dec 2025 22:57:39 GMT
- Title: Classifying High-Energy Celestial Objects with Machine Learning Methods
- Authors: Alexis Mathis, Daniel Yu, Nolan Faught, Tyrian Hobbs
- Abstract summary: In astronomy, tree-based models and simple neural networks have recently garnered attention as a means of classifying celestial objects based on photometric data. We apply common tree-based models to assess their performance in discriminating between objects with similar photometric signals: pulsars and black holes. We also train an RNN on a downsampled and normalized version of the raw signal data to examine its potential as a model capable of object discrimination and classification in real time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is a field that has been growing in importance since the early 2010s due to the increasing accuracy of classification models and hardware advances that have enabled faster training on large datasets. In the field of astronomy, tree-based models and simple neural networks have recently garnered attention as a means of classifying celestial objects based on photometric data. We apply common tree-based models to assess their performance in discriminating between objects with similar photometric signals: pulsars and black holes. We also train an RNN on a downsampled and normalized version of the raw signal data to examine its potential as a model capable of object discrimination and classification in real time.
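The abstract describes training on "a downsampled and normalized version of the raw signal data". The paper does not specify its exact scheme, so the following is only a minimal sketch of one common approach: block-average downsampling followed by z-score normalization. The function name `preprocess_signal` and the downsampling `factor` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def preprocess_signal(signal, factor=4):
    """Downsample a raw 1-D signal by block averaging, then z-score it.

    Hypothetical preprocessing matching the abstract's description;
    the actual downsampling factor and scheme used in the paper are
    assumptions here.
    """
    signal = np.asarray(signal, dtype=float)
    # Trim the tail so the length divides evenly by the factor,
    # then replace each block of `factor` samples with its mean.
    n = (len(signal) // factor) * factor
    down = signal[:n].reshape(-1, factor).mean(axis=1)
    # Z-score normalization (zero mean, unit variance) so signals
    # from different objects are on a comparable scale; the small
    # epsilon guards against division by zero on constant signals.
    return (down - down.mean()) / (down.std() + 1e-12)
```

The resulting fixed-scale, lower-rate sequences could then be fed either to tree-based models (as flattened feature vectors) or to an RNN step by step, which is what makes real-time use plausible: each incoming block only needs averaging and rescaling.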
Related papers
- Simulation-Based Pretraining and Domain Adaptation for Astronomical Time Series with Minimal Labeled Data [0.12744523252873352]
We present a pre-training approach that leverages simulations, significantly reducing the need for labeled examples from real observations. Our models, trained on simulated data from multiple astronomical surveys (ZTF and LSST), learn generalizable representations that transfer effectively to downstream tasks. Remarkably, our models exhibit effective zero-shot transfer capabilities, achieving comparable performance on future telescope (LSST) simulations when trained solely on existing telescope (ZTF) data.
arXiv Detail & Related papers (2025-10-14T20:07:14Z) - A self-regulated convolutional neural network for classifying variable stars [1.0485739694839669]
Machine learning models have proven effective in classifying variable stars. They require high-quality, representative data and a large number of labelled samples for each star type to generalise well. This challenge often leads to models learning and reinforcing biases inherent in the training data. We propose a new approach to improve the reliability of classifiers in variable star classification by introducing a self-regulated training process.
arXiv Detail & Related papers (2025-05-20T20:09:24Z) - Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z) - Exploring the Effectiveness of Dataset Synthesis: An application of
Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z) - Convolutional Neural Networks for the classification of glitches in
gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e. glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models trained from scratch with a supervised learning approach on the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z) - Multi-layer Representation Learning for Robust OOD Image Classification [3.1372269816123994]
We argue that extracting features from a CNN's intermediate layers can assist in the model's final prediction.
Specifically, we adapt the Hypercolumns method to a ResNet-18 and find a significant increase in the model's accuracy, when evaluating on the NICO dataset.
arXiv Detail & Related papers (2022-07-27T17:46:06Z) - Improving Astronomical Time-series Classification via Data Augmentation
with Generative Adversarial Networks [1.2891210250935146]
We propose a data augmentation methodology based on Generative Adversarial Networks (GANs) to generate a variety of synthetic light curves from variable stars.
The classification accuracy of variable stars is improved significantly when training with synthetic data and testing with real data.
arXiv Detail & Related papers (2022-05-13T16:39:54Z) - DeepSatData: Building large scale datasets of satellite images for
training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.