Paying Attention to Astronomical Transients: Introducing the Time-series
Transformer for Photometric Classification
- URL: http://arxiv.org/abs/2105.06178v3
- Date: Wed, 4 Oct 2023 21:14:10 GMT
- Title: Paying Attention to Astronomical Transients: Introducing the Time-series
Transformer for Photometric Classification
- Authors: Tarek Allam Jr., Jason D. McEwen
- Abstract summary: We develop a new transformer architecture, building on the transformer model first proposed for natural language processing.
We apply the time-series transformer to the task of photometric classification, minimising the reliance on expert domain knowledge.
We achieve a logarithmic-loss of 0.507 on imbalanced data in a representative setting using data from the Photometric LSST Astronomical Time-Series Classification Challenge.
- Score: 6.586394734694152
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Future surveys such as the Legacy Survey of Space and Time (LSST) of the Vera
C. Rubin Observatory will observe an order of magnitude more astrophysical
transient events than any previous survey. With this deluge of
photometric data, it will be impossible for all such events to be classified by
humans alone. Recent efforts have sought to leverage machine learning methods
to tackle the challenge of astronomical transient classification, with ever
improving success. Transformers are a recently developed deep learning
architecture, first proposed for natural language processing, that have shown a
great deal of success. In this work we develop a new transformer
architecture, which uses multi-head self-attention at its core, for general
multi-variate time-series data. Furthermore, the proposed time-series
transformer architecture supports the inclusion of an arbitrary number of
additional features, while also offering interpretability. We apply the
time-series transformer to the task of photometric classification, minimising
the reliance on expert domain knowledge for feature selection, while achieving
results comparable to state-of-the-art photometric classification methods. We
achieve a logarithmic-loss of 0.507 on imbalanced data in a representative
setting using data from the Photometric LSST Astronomical Time-Series
Classification Challenge (PLAsTiCC). Moreover, we achieve a micro-averaged
receiver operating characteristic area under curve of 0.98 and micro-averaged
precision-recall area under curve of 0.87.
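To make the architectural description above concrete, the following is a minimal sketch of a transformer encoder for multivariate time-series classification in which additional features are appended before the classification head. It is written in PyTorch purely for illustration; the framework, layer sizes, maximum sequence length, pooling strategy, and feature handling are assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' exact implementation) of a transformer
# encoder for multivariate astronomical time series. Shapes, layer sizes and
# the way additional features are appended are illustrative assumptions.
import torch
import torch.nn as nn


class TimeSeriesTransformer(nn.Module):
    def __init__(self, n_channels=6, n_extra=2, n_classes=14,
                 d_model=32, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        # Embed each time step (flux in every passband) into d_model dimensions.
        self.input_proj = nn.Linear(n_channels, d_model)
        # Learnable positional embedding to encode temporal order.
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Classification head takes the pooled sequence plus extra features
        # (e.g. redshift) concatenated on.
        self.head = nn.Linear(d_model + n_extra, n_classes)

    def forward(self, x, extra):
        # x: (batch, time, passbands); extra: (batch, n_extra)
        h = self.input_proj(x) + self.pos_embed[:, : x.size(1)]
        h = self.encoder(h)    # multi-head self-attention over time steps
        h = h.mean(dim=1)      # global average pooling over time
        return self.head(torch.cat([h, extra], dim=-1))


# Example usage on random data: 8 light curves, 100 time steps, 6 passbands.
model = TimeSeriesTransformer()
logits = model(torch.randn(8, 100, 6), torch.randn(8, 2))
print(logits.shape)  # torch.Size([8, 14])
```

In a model of this kind, interpretability typically comes from inspecting the learned attention weights over time steps, which is the sort of introspection the abstract alludes to.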
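For reference, the reported quantities (multi-class logarithmic loss, micro-averaged ROC AUC, and micro-averaged precision-recall AUC) can be computed with scikit-learn as sketched below on placeholder predictions; note that PLAsTiCC itself scores with a class-weighted log-loss, so the plain `log_loss` here is only the unweighted analogue.

```python
# Generic evaluation sketch using scikit-learn. The PLAsTiCC challenge metric
# is a class-weighted log-loss; sklearn's log_loss below is the unweighted
# analogue, shown only to illustrate the quantities being reported.
import numpy as np
from sklearn.metrics import log_loss, roc_auc_score, average_precision_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 14

# Placeholder labels and predicted class probabilities.
y_true = rng.integers(0, n_classes, size=n_samples)
y_prob = rng.dirichlet(np.ones(n_classes), size=n_samples)

# Multi-class logarithmic loss (lower is better).
print("log-loss:", log_loss(y_true, y_prob, labels=np.arange(n_classes)))

# Micro-averaged ROC AUC and precision-recall AUC over one-vs-rest labels.
y_bin = label_binarize(y_true, classes=np.arange(n_classes))
print("micro ROC AUC:", roc_auc_score(y_bin, y_prob, average="micro"))
print("micro PR AUC:", average_precision_score(y_bin, y_prob, average="micro"))
```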
Related papers
- PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting [82.03373838627606]
Self-attention mechanism in Transformer architecture requires positional embeddings to encode temporal order in time series prediction.
We argue that this reliance on positional embeddings restricts the Transformer's ability to effectively represent temporal sequences.
We present a model integrating PRE with a standard Transformer encoder, demonstrating state-of-the-art performance on various real-world datasets.
arXiv Detail & Related papers (2024-08-20T01:56:07Z)
- MambaVT: Spatio-Temporal Contextual Modeling for robust RGB-T Tracking [51.28485682954006]
We propose a pure Mamba-based framework (MambaVT) to fully exploit spatio-temporal contextual modeling for robust visible-thermal tracking.
Specifically, we devise the long-range cross-frame integration component to globally adapt to target appearance variations.
Experiments show the significant potential of vision Mamba for RGB-T tracking, with MambaVT achieving state-of-the-art performance on four mainstream benchmarks.
arXiv Detail & Related papers (2024-08-15T02:29:00Z)
- Real-time gravitational-wave inference for binary neutron stars using machine learning [71.29593576787549]
We present a machine learning framework that performs complete BNS inference in just one second without making any approximations.
Our approach enhances multi-messenger observations by providing (i) accurate localization even before the merger; (ii) improved localization precision by $\sim 30\%$ compared to approximate low-latency methods; and (iii) detailed information on luminosity distance, inclination, and masses.
arXiv Detail & Related papers (2024-07-12T18:00:02Z)
- FactoFormer: Factorized Hyperspectral Transformers with Self-Supervised Pretraining [36.44039681893334]
Hyperspectral images (HSIs) contain rich spectral and spatial information.
Current state-of-the-art hyperspectral transformers only tokenize the input HSI sample along the spectral dimension.
We propose a novel factorized spectral-spatial transformer that incorporates factorized self-supervised pretraining procedures.
arXiv Detail & Related papers (2023-09-18T02:05:52Z)
- ViT-Calibrator: Decision Stream Calibration for Vision Transformer [49.60474757318486]
We propose a new paradigm dubbed Decision Stream that boosts the performance of general Vision Transformers.
We shed light on the information propagation mechanism in the learning procedure by exploring the correlation between different tokens and the relevance coefficient of multiple dimensions.
arXiv Detail & Related papers (2023-04-10T02:40:24Z)
- The Tiny Time-series Transformer: Low-latency High-throughput Classification of Astronomical Transients using Deep Model Compression [4.960046610835999]
The upcoming Legacy Survey of Space and Time (LSST) will raise the big-data bar for time-domain astronomy.
We show how the use of modern deep compression methods can achieve an $18\times$ reduction in model size.
We also show that in addition to the deep compression techniques, careful choice of file formats can improve inference latency.
arXiv Detail & Related papers (2023-03-15T21:46:35Z)
- ViTs for SITS: Vision Transformers for Satellite Image Time Series [52.012084080257544]
We introduce a fully-attentional model for general Satellite Image Time Series (SITS) processing based on the Vision Transformer (ViT).
TSViT splits a SITS record into non-overlapping patches in space and time which are tokenized and subsequently processed by a factorized temporo-spatial encoder.
arXiv Detail & Related papers (2023-01-12T11:33:07Z)
- W-Transformers : A Wavelet-based Transformer Framework for Univariate Time Series Forecasting [7.075125892721573]
We build a transformer model for non-stationary time series using wavelet-based transformer encoder architecture.
We evaluate our framework on several publicly available benchmark time series datasets from various domains.
arXiv Detail & Related papers (2022-09-08T17:39:38Z)
- Improving Astronomical Time-series Classification via Data Augmentation with Generative Adversarial Networks [1.2891210250935146]
We propose a data augmentation methodology based on Generative Adversarial Networks (GANs) to generate a variety of synthetic light curves from variable stars.
The classification accuracy of variable stars is improved significantly when training with synthetic data and testing with real data.
arXiv Detail & Related papers (2022-05-13T16:39:54Z)
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST).
CST embeds HSI sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selecting. Then the selected patches are fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
- A Differential Attention Fusion Model Based on Transformer for Time Series Forecasting [4.666618110838523]
Time series forecasting is widely used in equipment life cycle forecasting, weather forecasting, traffic flow forecasting, and other fields.
Some scholars have tried to apply Transformer to time series forecasting because of its powerful parallel training ability.
The existing Transformer methods do not pay enough attention to the small time segments that play a decisive role in prediction.
arXiv Detail & Related papers (2022-02-23T10:33:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.