Coherent False Seizure Prediction in Epilepsy, Coincidence or Providence?
- URL: http://arxiv.org/abs/2110.13550v1
- Date: Tue, 26 Oct 2021 10:25:14 GMT
- Title: Coherent False Seizure Prediction in Epilepsy, Coincidence or Providence?
- Authors: Jens Müller, Hongliu Yang, Matthias Eberlein, Georg Leonhardt, Ortrud Uckermann, Levin Kuhlmann, Ronald Tetzlaff
- Abstract summary: Seizure forecasting using machine learning is possible, but the performance is far from ideal.
Here, we examine false and missing alarms of two algorithms on long-term datasets.
- Score: 0.2770822269241973
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Seizure forecasting using machine learning is possible, but the performance
is far from ideal, as indicated by many false predictions and low specificity.
Here, we examine false and missing alarms of two algorithms on long-term
datasets to show that the limitations are related less to classifiers or
features than to intrinsic changes in the data. We evaluated two
algorithms on three datasets by computing the correlation of false predictions
and estimating the information transfer between both classification methods.
For 9 out of 12 individuals, both methods performed better than chance. For
all individuals, we observed a positive correlation in predictions.
For individuals with strong correlation in false predictions we were able to
boost the performance of one method by excluding test samples based on the
results of the second method. Substantially different algorithms exhibit a
highly consistent performance and strong coherence in false and missing
alarms. Hence, changing the underlying hypothesis from a preictal state of
fixed time length prior to each seizure to a proictal state is more helpful than
further optimizing classifiers. The outcome is significant for the evaluation
of seizure prediction algorithms on continuous data.
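The analysis described in the abstract can be illustrated with a minimal sketch: measure how strongly the false alarms of two methods co-occur, then gate one method's alarms on the other's output. The data, the phi-coefficient and mutual-information choices, and the gating rule below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Hypothetical alarm outputs of two seizure-prediction methods on interictal
# test segments (True = alarm raised); on interictal data every alarm is a
# false alarm. The arrays and rates here are illustrative, not study data.
rng = np.random.default_rng(0)
alarms_a = rng.random(1000) < 0.15
overlap = alarms_a & (rng.random(1000) < 0.7)   # shared false alarms
alarms_b = overlap | (rng.random(1000) < 0.05)

# Correlation of false predictions: the phi coefficient is the Pearson
# correlation of the two binary alarm indicators.
phi = np.corrcoef(alarms_a, alarms_b)[0, 1]

# A simple proxy for the information transfer between the two methods'
# outputs: mutual information of the alarm streams.
mi = mutual_info_score(alarms_a, alarms_b)
print(f"phi = {phi:.2f}, mutual information = {mi:.3f} nats")

# Cross-method gating as in the abstract: boost method A by excluding the
# test samples that method B flags, filtering coherent false alarms.
gated_a = alarms_a & ~alarms_b
print(f"false-alarm rate: {alarms_a.mean():.3f} -> {gated_a.mean():.3f}")
```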
Related papers
- Prediction with Incomplete Data under Agnostic Mask Distribution Shift [35.86200694774949]
We consider prediction with incomplete data in the presence of distribution shift.
We leverage the observation that for each mask, there is an invariant optimal predictor.
We propose a novel prediction method called StableMiss.
arXiv Detail & Related papers (2023-05-18T14:06:06Z) - Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning (a minimal sketch of the mean estimator follows this list).
arXiv Detail & Related papers (2023-01-23T18:59:28Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the given function class.
Experiments show that the algorithm learns valid prediction sets and improves efficiency significantly (a baseline split-conformal sketch follows this list).
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z) - How to Evaluate Uncertainty Estimates in Machine Learning for
Regression? [1.4610038284393165]
We show that both common approaches to evaluating the quality of uncertainty estimates have serious flaws.
Among other issues, neither approach can disentangle the separate components that jointly create the predictive uncertainty.
The current approach of testing prediction intervals directly has additional flaws.
arXiv Detail & Related papers (2021-06-07T07:47:46Z) - Towards optimally abstaining from prediction [22.937799541125607]
A common challenge across all areas of machine learning is that training data is not distributed like test data.
We consider a model where one may abstain from predicting at a fixed cost.
Our work builds on a recent abstention algorithm of Goldwasser, Kalais, and Montasser (2020) for transductive binary classification (a toy abstention rule is sketched after this list).
arXiv Detail & Related papers (2021-05-28T21:44:48Z) - An algorithm-based multiple detection influence measure for high
dimensional regression using expectile [0.4999814847776096]
We propose an algorithm-based, multi-step, multiple detection procedure to identify influential observations.
Our three-step algorithm to identify and capture undesirable variability in the data, $asymMIP$, is based on two complementary statistics.
The application of our method to the Autism Brain Imaging Data Exchange dataset resulted in a more balanced and accurate prediction of brain maturity.
arXiv Detail & Related papers (2021-05-26T01:16:24Z) - Continual Learning for Fake Audio Detection [62.54860236190694]
This paper proposes Detecting Fake Without Forgetting, a continual-learning-based method that enables the model to learn new spoofing attacks incrementally.
Experiments are conducted on the ASVspoof 2019 dataset.
arXiv Detail & Related papers (2021-04-15T07:57:05Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels (a schematic sketch follows this list).
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
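For the Prediction-Powered Inference entry above, a minimal sketch of the mean estimator under a normal approximation: model predictions on a large unlabeled set are debiased by a rectifier computed on a small labeled set. The data and names are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from scipy import stats

def ppi_mean_ci(y_lab, f_lab, f_unlab, alpha=0.05):
    """Prediction-powered CI for a mean: predictions on unlabeled data,
    debiased by the labeled-set rectifier (y - f)."""
    n, N = len(y_lab), len(f_unlab)
    rectifier = y_lab - f_lab                  # measures prediction bias
    theta = f_unlab.mean() + rectifier.mean()  # debiased point estimate
    # Variance combines the two independent sources of randomness.
    se = np.sqrt(f_unlab.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    return theta - z * se, theta + z * se

# Illustrative data: a small labeled sample, many model predictions.
rng = np.random.default_rng(1)
y_lab = rng.normal(5.0, 1.0, size=100)        # gold labels
f_lab = y_lab + rng.normal(0.3, 0.5, 100)     # biased predictions, labeled set
f_unlab = rng.normal(5.3, 1.1, size=10_000)   # predictions, unlabeled set
print(ppi_mean_ci(y_lab, f_lab, f_unlab))
```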
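For the conformal prediction entries, a baseline split-conformal interval for regression. The cited paper generalizes this baseline to multiple learnable parameters, which this sketch does not attempt; the data here are illustrative.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Standard split conformal: take a finite-sample-corrected quantile of
    absolute residuals on held-out calibration data, then form symmetric
    prediction intervals with approximately (1 - alpha) coverage."""
    n = len(residuals_cal)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals_cal, q_level)
    return y_pred_test - q, y_pred_test + q

rng = np.random.default_rng(2)
y_cal = rng.normal(0, 1, 500)
pred_cal = y_cal + rng.normal(0, 0.3, 500)    # some model's calibration predictions
lo, hi = split_conformal_interval(np.abs(y_cal - pred_cal),
                                  np.array([0.4, -1.2]))
print(lo, hi)
```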
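For the abstention entry, a toy rule illustrating the fixed-cost model: predict when the expected error of committing is below the abstention cost, otherwise abstain. This is only a sketch of the setting, not the transductive algorithm of Goldwasser, Kalais, and Montasser (2020).

```python
import numpy as np

def predict_or_abstain(probs, cost):
    """For each test point, predict the argmax class if the expected error
    (1 - max prob) is below the fixed abstention cost, else abstain,
    returned as -1. Illustrative only."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(1 - conf < cost, preds, -1)

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.70, 0.30]])
print(predict_or_abstain(probs, cost=0.2))    # -> [ 0 -1 -1]
```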
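For the calibration entry, a schematic of conditionally raising prediction entropy toward the label prior. The overconfidence mask and the mixing weight are hypothetical stand-ins for the paper's actual criterion.

```python
import numpy as np

def raise_entropy_toward_prior(probs, prior, overconfident_mask, lam=0.5):
    """Where the model is flagged as unjustifiably overconfident, mix its
    predictive distribution with the label prior, raising the entropy of
    those predictions toward that of the prior. Schematic only."""
    probs = probs.copy()
    probs[overconfident_mask] = ((1 - lam) * probs[overconfident_mask]
                                 + lam * prior)
    return probs

probs = np.array([[0.98, 0.02], [0.60, 0.40]])
prior = np.array([0.5, 0.5])                  # empirical label distribution
mask = np.array([True, False])                # e.g. points far from training data
print(raise_entropy_toward_prior(probs, prior, mask))
```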
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.