Explainable classification of astronomical uncertain time series
- URL: http://arxiv.org/abs/2210.00869v1
- Date: Wed, 28 Sep 2022 09:06:42 GMT
- Title: Explainable classification of astronomical uncertain time series
- Authors: Michael Franklin Mbouopda (LIMOS, UCA), Emille E O Ishida (LPC, UCA),
Engelbert Mephu Nguifo (LIMOS, UCA), Emmanuel Gangler (LPC, UCA)
- Abstract summary: We propose an uncertainty-aware, subsequence-based model which achieves classification performance comparable to that of state-of-the-art methods.
Our approach is explainable-by-design, giving domain experts the ability to inspect the model and explain its predictions.
The dataset, the source code of our experiment, and the results are made available on a public repository.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploring the expansion history of the universe, understanding its
evolutionary stages, and predicting its future evolution are important goals in
astrophysics. Today, machine learning tools are used to help achieve these
goals by analyzing transient sources, which are modeled as uncertain time
series. Although black-box methods achieve appreciable performance, existing
interpretable time series methods have failed to obtain acceptable performance for
this type of data. Furthermore, data uncertainty is rarely taken into account
in these methods. In this work, we propose an uncertainty-aware,
subsequence-based model which achieves classification performance comparable to that of
state-of-the-art methods. Unlike conformal learning which estimates model
uncertainty on predictions, our method takes data uncertainty as additional
input. Moreover, our approach is explainable-by-design, giving domain experts
the ability to inspect the model and explain its predictions. The
explainability of the proposed method also has the potential to inspire new
developments in theoretical astrophysics modeling by suggesting important
subsequences which depict details of light curve shapes. The dataset, the
source code of our experiment, and the results are made available on a public
repository.
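The abstract does not spell out how data uncertainty enters the subsequence matching, so the following is only a minimal sketch under assumptions: a hypothetical `uncertainty_aware_distance` that down-weights each squared error by the measurement variance at that point. It illustrates the general idea of taking data uncertainty as additional input, not the paper's actual model.

```python
import numpy as np

def uncertainty_aware_distance(shapelet, series, sigma):
    """Hypothetical uncertainty-weighted subsequence distance (sketch).

    Slides the shapelet over the series and, for each window, computes a
    mean squared error where each point is down-weighted by its
    measurement variance, so noisy observations contribute less.
    Returns the best (smallest) weighted distance over all positions.
    """
    m, n = len(shapelet), len(series)
    best = np.inf
    for start in range(n - m + 1):
        window = series[start:start + m]
        var = sigma[start:start + m] ** 2
        # inverse-variance-style weighting: uncertain points matter less
        d = np.mean((window - shapelet) ** 2 / (1.0 + var))
        best = min(best, d)
    return best

# toy light curve with per-point uncertainty (one very noisy observation)
series = np.array([0.0, 0.1, 0.9, 1.0, 0.8, 0.2, 0.0])
sigma = np.array([0.05, 0.05, 0.5, 0.05, 0.05, 0.05, 0.05])
shapelet = np.array([0.1, 0.9, 1.0])
print(uncertainty_aware_distance(shapelet, series, sigma))
```

A classifier built on such distances stays explainable-by-design: the learned shapelets can be plotted against the light curves they match.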
Related papers
- Learning from Uncertain Data: From Possible Worlds to Possible Models [13.789554282826835]
We introduce an efficient method for learning linear models from uncertain data, where uncertainty is represented as a set of possible variations in the data.
We compactly represent these dataset variations, enabling the symbolic execution of gradient descent on all possible worlds simultaneously.
Our method provides sound over-approximations of all possible optimal models and viable prediction ranges.
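As a toy illustration of the possible-worlds idea (a sketch under strong assumptions, not the paper's compact symbolic method): for 1-D least squares with non-negative inputs, the gradient-descent update is linear and monotone in both the weight and the labels, so an interval over an uncertain label can be propagated through every step, yielding a range that contains the optimal model for each possible labeling.

```python
def gd_interval(xs, ys_lo, ys_hi, lr=0.01, steps=200):
    """Propagate label intervals through gradient descent for y ~ w*x.

    Assumes xs are non-negative so that sum(x*y) is monotone in y.
    The update w' = w*(1 - lr*S) + lr*sum(x*y) is linear and monotone
    in both w and y, so the interval update below is exact for this map.
    Returns an interval containing the converged w for every labeling.
    """
    S = sum(x * x for x in xs)                     # sum of x^2 (fixed)
    c_lo = sum(x * y for x, y in zip(xs, ys_lo))   # lower bound on sum(x*y)
    c_hi = sum(x * y for x, y in zip(xs, ys_hi))   # upper bound on sum(x*y)
    a = 1.0 - lr * S                               # contraction factor, need 0 < a < 1
    w_lo = w_hi = 0.0
    for _ in range(steps):
        w_lo, w_hi = a * w_lo + lr * c_lo, a * w_hi + lr * c_hi
    return w_lo, w_hi

xs = [1.0, 2.0, 3.0]
ys_lo = [1.0, 2.0, 2.5]   # third label only known to lie in [2.5, 3.5]
ys_hi = [1.0, 2.0, 3.5]
print(gd_interval(xs, ys_lo, ys_hi))  # range of slopes over all possible worlds
```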
arXiv Detail & Related papers (2024-05-28T19:36:55Z) - Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z) - Machine-Learning Solutions for the Analysis of Single-Particle Diffusion
Trajectories [0.0]
We provide an overview over recently introduced methods in machine-learning for diffusive time series.
We focus on means to include uncertainty estimates and feature-based approaches, both improving interpretability and providing concrete insight into the learning process of the machine.
arXiv Detail & Related papers (2023-08-18T09:29:29Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
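A generic sketch of the idea (hypothetical helper, not the paper's actual sampling strategy): draw several perturbed versions of the input, run the model on each, and summarize the spread of outputs as a non-parametric uncertainty estimate.

```python
import random
import statistics

def predictive_spread(model, x, n_samples=100, noise=0.05):
    """Inference-time sampling sketch (hypothetical, generic).

    Perturbs the input n_samples times, collects the model's outputs,
    and reports their mean and standard deviation. No parametric form
    is assumed for the predictive distribution.
    """
    outs = [model(x + random.gauss(0.0, noise)) for _ in range(n_samples)]
    return statistics.mean(outs), statistics.stdev(outs)

random.seed(0)
model = lambda v: v * v            # stand-in for any black-box predictor
mu, sd = predictive_spread(model, 2.0)
print(mu, sd)                      # spread reflects the model's local sensitivity
```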
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory
Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Hessian-based toolbox for reliable and interpretable machine learning in
physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic to the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - RNN with Particle Flow for Probabilistic Spatio-temporal Forecasting [30.277213545837924]
Many classical statistical models often fall short in handling the complexity and high non-linearity present in time-series data.
In this work, we consider the time-series data as a random realization from a nonlinear state-space model.
We use particle flow as the tool for approximating the posterior distribution of the states, as it is shown to be highly effective in complex, high-dimensional settings.
arXiv Detail & Related papers (2021-06-10T21:49:23Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Remaining Useful Life Estimation Under Uncertainty with Causal GraphNets [0.0]
A novel approach for the construction and training of time series models is presented.
The proposed method is appropriate for constructing predictive models for non-stationary time series.
arXiv Detail & Related papers (2020-11-23T21:28:03Z) - Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.