Finding active galactic nuclei through Fink
- URL: http://arxiv.org/abs/2211.10987v1
- Date: Sun, 20 Nov 2022 14:24:15 GMT
- Title: Finding active galactic nuclei through Fink
- Authors: Etienne Russeil, Emille E. O. Ishida, Roman Le Montagner, Julien
Peloton, Anaïs Möller
- Abstract summary: We present the Active Galactic Nuclei (AGN) classifier as currently implemented within the Fink broker.
Features were built upon summary statistics of available photometric points, as well as color estimation enabled by symbolic regression.
Using this method to classify real alerts from the Zwicky Transient Facility (ZTF), we achieved 98.0% accuracy, 93.8% precision and 88.5% recall.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the Active Galactic Nuclei (AGN) classifier as currently
implemented within the Fink broker. Features were built upon summary statistics
of available photometric points, as well as color estimation enabled by
symbolic regression. The learning stage includes an active learning loop, used
to build an optimized training sample from labels reported in astronomical
catalogs. Using this method to classify real alerts from the Zwicky Transient
Facility (ZTF), we achieved 98.0% accuracy, 93.8% precision and 88.5% recall.
We also describe the modifications necessary to enable processing data from the
upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), and
apply them to the training sample of the Extended LSST Astronomical Time-series
Classification Challenge (ELAsTiCC). Results show that our designed feature
space enables high performance from traditional machine learning algorithms in
this binary classification task.
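As an illustration of the kind of feature building the abstract describes, here is a minimal sketch of per-band summary statistics plus a color estimate, computed from photometric points. The feature names and magnitudes below are invented, and Fink's actual feature set and symbolic-regression color model are not reproduced here.

```python
# Illustrative sketch only: summary-statistic features of the kind the
# Fink AGN classifier is described as using, computed per band from
# photometric points. Magnitude values are invented for illustration.
import statistics

def lightcurve_features(mags_g, mags_r):
    feats = {}
    for band, mags in (("g", mags_g), ("r", mags_r)):
        feats[f"mean_{band}"] = statistics.mean(mags)
        feats[f"std_{band}"] = statistics.pstdev(mags)
        feats[f"amplitude_{band}"] = max(mags) - min(mags)
    # crude color proxy; the actual pipeline estimates color via
    # symbolic regression rather than a plain mean difference
    feats["color_gr"] = feats["mean_g"] - feats["mean_r"]
    return feats

feats = lightcurve_features([17.0, 17.5, 18.0], [16.5, 16.8])
print(feats["color_gr"])
```

A feature vector like this can then be fed to any standard classifier; the point of the design is that everything is computable directly from alert packets.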
Related papers
- ORACLE: A Real-Time, Hierarchical, Deep-Learning Photometric Classifier for the LSST [0.3276793654637396]
We present ORACLE, the first hierarchical deep-learning model for real-time, context-aware classification of transient and variable astrophysical phenomena.
ORACLE is a recurrent neural network with Gated Recurrent Units (GRUs).
Training on $\sim$0.5M events from the Extended LSST Astronomical Time-Series Classification Challenge, we achieve a top-level (Transient vs Variable) macro-averaged precision of 0.96 using only 1 day of photometric observations.
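The GRU recurrence that such a classifier builds on can be sketched in scalar form. Parameters and inputs below are toy values; ORACLE's actual multi-layer, context-aware architecture is not reproduced.

```python
# Minimal scalar GRU cell (the vector case is analogous): update gate z,
# reset gate r, candidate state h_tilde. Toy parameters, not a trained model.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, p):
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])          # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])          # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1 - z) * h + z * h_tilde                          # new hidden state

# run a toy "light curve" of flux values through the recurrence
params = {k: 0.5 for k in ("wz", "uz", "bz", "wr", "ur", "br", "wh", "uh", "bh")}
h = 0.0
for flux in [0.1, 0.4, 0.9, 0.3]:
    h = gru_cell(flux, h, params)
print(h)
```

The final hidden state summarizes the sequence seen so far, which is what makes GRUs suitable for classifying a light curve after only a day of observations.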
arXiv Detail & Related papers (2025-01-02T19:00:05Z)
- Detecting and Classifying Flares in High-Resolution Solar Spectra with Supervised Machine Learning [0.0]
We present a standardized procedure to classify solar flares with the aid of supervised machine learning.
Using flare data from the RHESSI mission and solar spectra from the HARPS-N instrument, we trained several supervised machine learning models.
The best-trained model achieves an average aggregate accuracy score of 0.65, and categorical accuracy scores of over 0.70 for the no-flare and weak-flare classes.
arXiv Detail & Related papers (2024-06-21T18:52:03Z)
- Co-training for Low Resource Scientific Natural Language Inference [65.37685198688538]
We propose a novel co-training method that assigns weights based on the training dynamics of the classifiers to the distantly supervised labels.
By assigning importance weights instead of filtering out examples based on an arbitrary threshold on the predicted confidence, we maximize the usage of automatically labeled data.
The proposed method obtains an improvement of 1.5% in Macro F1 over the distant supervision baseline, and substantial improvements over several other strong SSL baselines.
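The weighting idea can be sketched as follows: instead of discarding distantly supervised examples below a confidence threshold, assign each one an importance weight derived from the classifier's training dynamics. The specific weighting rule below (mean predicted probability of the distant label across epochs) and all values are invented for illustration, not the paper's exact formulation.

```python
# Hedged sketch: importance-weight distantly supervised labels from training
# dynamics rather than hard-filtering by a confidence threshold.
# epoch_probs[i] holds the probability the classifier assigned to example
# i's distant label at each epoch (values invented for illustration).

def dynamics_weights(epoch_probs):
    # weight = mean predicted probability over epochs: consistently
    # well-fit examples get weight near 1, noisy ones near 0
    return [sum(p) / len(p) for p in epoch_probs]

def weighted_loss(losses, weights):
    # scale each example's loss by its importance weight
    return sum(w * l for w, l in zip(weights, losses)) / sum(weights)

epoch_probs = [
    [0.9, 0.92, 0.95],  # stable: likely-correct distant label
    [0.2, 0.35, 0.3],   # unstable: likely-noisy distant label
]
weights = dynamics_weights(epoch_probs)
loss = weighted_loss([0.2, 0.8], weights)
print(weights, loss)
```

The soft weights let every automatically labeled example contribute, which is the stated advantage over an arbitrary filtering threshold.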
arXiv Detail & Related papers (2024-06-20T18:35:47Z)
- Deep Learning and LLM-based Methods Applied to Stellar Lightcurve Classification [7.592813175419603]
We present a comprehensive evaluation of deep-learning and large language model (LLM) based models for the automatic classification of variable star light curves.
Special emphasis is placed on Cepheids, RR Lyrae, and eclipsing binaries, examining the influence of observational cadence and phase distribution on classification precision.
We unveil StarWhisper LightCurve (LC), an innovative series comprising three LLM-based models: a large language model (LLM), a multimodal large language model (MLLM), and a large audio language model (LALM).
arXiv Detail & Related papers (2024-04-16T17:35:25Z)
- TOAST: Transfer Learning via Attention Steering [77.83191769502763]
Current transfer learning methods often fail to focus on task-relevant features.
We introduce Top-Down Attention Steering (TOAST), a novel transfer learning algorithm that steers the attention to task-specific features.
TOAST substantially improves performance across a range of fine-grained visual classification datasets.
arXiv Detail & Related papers (2023-05-24T20:03:04Z)
- SNGuess: A method for the selection of young extragalactic transients [0.0]
This paper presents SNGuess, a model designed to find young, nearby extragalactic transients with high purity.
SNGuess works with a set of features that can be efficiently calculated from astronomical alert data.
The core model of SNGuess consists of an ensemble of decision trees, which are trained via gradient boosting.
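Gradient boosting of decision trees, the technique named here, can be sketched minimally with decision stumps fit to residuals. The 1-D toy data and least-squares loss below are illustrative assumptions; SNGuess's actual ensemble, features, and loss are not reproduced.

```python
# Minimal sketch of gradient-boosted decision stumps. Toy 1-D data and
# least-squares boosting for illustration only.

def fit_stump(x, residuals):
    # choose the threshold (midpoint between sorted feature values) that
    # minimizes squared error of per-side mean predictions
    best = None
    xs = sorted(set(x))
    for a, b in zip(xs, xs[1:]):
        t = (a + b) / 2
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda xi: lv if xi <= t else rv

def fit_boosted(x, y, n_rounds=20, lr=0.3):
    # each stump fits the residuals of the current ensemble prediction
    stumps = []
    pred = [0.0] * len(x)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, residuals)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

# toy data: label +1 if the feature exceeds 0.5, else -1
x = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
y = [-1, -1, -1, 1, 1, 1]
model = fit_boosted(x, y)
print(model(0.15), model(0.8))
```

The sign of the ensemble output gives the class; real implementations use full trees over many alert features rather than stumps over one.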
arXiv Detail & Related papers (2022-08-13T00:11:46Z)
- Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision [106.77639982059014]
We present the ConST-CL framework to effectively learn spatio-temporally fine-grained representations.
We first design a region-based self-supervised task which requires the model to learn to transform instance representations from one view to another guided by context features.
We then introduce a simple design that effectively reconciles the simultaneous learning of both holistic and local representations.
arXiv Detail & Related papers (2021-12-09T19:13:41Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
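One common form of normalized classifier is a cosine classifier: logits are cosine similarities between an L2-normalized feature and L2-normalized class weights, scaled by a temperature-like constant. The sketch below is a generic illustration of that idea with invented weights; it is not this paper's exact formulation.

```python
# Logits from an L2-normalized ("cosine") classifier, one technique used
# for long-tailed representation learning. Weights and features invented.
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_logits(feature, class_weights, scale=16.0):
    # cosine similarity between the normalized feature and each
    # normalized class-weight vector, times a fixed scale
    f = l2_normalize(feature)
    return [scale * sum(a * b for a, b in zip(f, l2_normalize(w)))
            for w in class_weights]

logits = cosine_logits([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
print(logits)
```

Normalizing the class weights removes the norm imbalance that head classes otherwise accumulate, which is why this family of classifiers helps on long-tailed data.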
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- MANTRA: A Machine Learning reference lightcurve dataset for astronomical transient event recognition [2.208166456405677]
We provide public access to a dataset of 4869 transient and 71207 non-transient object lightcurves built from the Catalina Real Time Transient Survey.
Some of the classes included in the dataset are: supernovae, cataclysmic variables, active galactic nuclei, high proper motion stars, blazars and flares.
We assess quantitative performance in two classification tasks: binary (transient/non-transient) and eight-class classification.
arXiv Detail & Related papers (2020-06-23T17:06:49Z)
- Learning Delicate Local Representations for Multi-Person Pose Estimation [77.53144055780423]
We propose a novel method called Residual Steps Network (RSN).
RSN aggregates features with the same spatial size (Intra-level features) efficiently to obtain delicate local representations.
Our approach won 1st place in the 2019 COCO Keypoint Challenge.
arXiv Detail & Related papers (2020-03-09T10:40:49Z)
- Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.