SeFEF: A Seizure Forecasting Evaluation Framework
- URL: http://arxiv.org/abs/2510.11275v1
- Date: Mon, 13 Oct 2025 11:10:27 GMT
- Authors: Ana Sofia Carmo, Lourenço Abrunhosa Rodrigues, Ana Rita Peralta, Ana Fred, Carla Bentes, Hugo Plácido da Silva
- Abstract summary: We introduce a Python-based framework aimed at streamlining the development, assessment, and documentation of seizure forecasting algorithms. The framework automates data labeling, cross-validation splitting, forecast post-processing, performance evaluation, and reporting. It supports various forecasting horizons and includes a model card that documents implementation details, training and evaluation settings, and performance metrics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The lack of standardization in seizure forecasting slows progress in the field and limits the clinical translation of forecasting models. In this work, we introduce a Python-based framework aimed at streamlining the development, assessment, and documentation of individualized seizure forecasting algorithms. The framework automates data labeling, cross-validation splitting, forecast post-processing, performance evaluation, and reporting. It supports various forecasting horizons and includes a model card that documents implementation details, training and evaluation settings, and performance metrics. Three different models were implemented as a proof-of-concept. The models leveraged features extracted from time series data and seizure periodicity. Model performance was assessed using time series cross-validation and key deterministic and probabilistic metrics. Implementation of the three models was successful, demonstrating the flexibility of the framework. The results also emphasize the importance of careful model interpretation due to variations in probability scaling, calibration, and subject-specific differences. Although formal usability metrics were not recorded, empirical observations suggest reduced development time and methodological consistency, minimizing unintentional variations that could affect the comparability of different approaches. As a proof-of-concept, this validation is inherently limited, relying on a single-user experiment without statistical analyses or replication across independent datasets. At this stage, our objective is to make the framework publicly available to foster community engagement, facilitate experimentation, and gather feedback. In the long term, we aim to contribute to the establishment of a consensus on a standardized methodology for the development and validation of seizure forecasting algorithms in people with epilepsy.
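The pipeline steps the abstract lists (pre-ictal labeling for a chosen forecasting horizon, time-ordered cross-validation splitting) can be sketched roughly as follows. This is a minimal illustration only: the function names `label_preictal` and `expanding_window_splits`, and the window parameters, are assumptions for illustration, not the framework's actual API.

```python
import numpy as np

def label_preictal(seizure_onsets, timestamps, horizon_s=3600):
    """Label each timestamp 1 if it falls within `horizon_s` seconds
    before any seizure onset (the pre-ictal window), else 0."""
    labels = np.zeros(len(timestamps), dtype=int)
    for onset in seizure_onsets:
        mask = (timestamps >= onset - horizon_s) & (timestamps < onset)
        labels[mask] = 1
    return labels

def expanding_window_splits(n_samples, n_folds=3):
    """Expanding-window time series cross-validation: each fold trains
    on all data up to a cut point and tests on the following segment,
    so the test set always lies strictly in the future of the training set."""
    fold_size = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train_idx = np.arange(0, k * fold_size)
        test_idx = np.arange(k * fold_size, (k + 1) * fold_size)
        yield train_idx, test_idx
```

Keeping test folds strictly after their training data is what distinguishes time series cross-validation from the shuffled splits common elsewhere, and is essential for honest evaluation of a forecaster.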
Related papers
- Beyond Model Ranking: Predictability-Aligned Evaluation for Time Series Forecasting [18.018179328110048]
We introduce a predictability-aligned diagnostic framework grounded in spectral coherence. We provide the first systematic evidence of "predictability drift", demonstrating that a task's forecasting difficulty varies sharply over time. Our evaluation reveals a key architectural trade-off: complex models are superior for low-predictability data, whereas linear models are highly effective on more predictable tasks.
arXiv Detail & Related papers (2025-09-27T02:56:06Z)
- ChronosX: Adapting Pretrained Time Series Models with Exogenous Variables [30.679739751673655]
This paper introduces a new method to incorporate covariates into pretrained time series forecasting models. Our proposed approach incorporates covariate information into pretrained forecasting models through modular blocks. In evaluations on both synthetic and real datasets, our approach effectively incorporates covariate information into pretrained models, outperforming existing baselines.
arXiv Detail & Related papers (2025-03-15T12:34:19Z)
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective. The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning. The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Evidential time-to-event prediction with calibrated uncertainty quantification [12.446406577462069]
Time-to-event analysis provides insights into clinical prognosis and treatment recommendations. We propose an evidential regression model specifically designed for time-to-event prediction. We show that our model delivers both accurate and reliable performance, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2024-11-12T15:06:04Z)
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Towards a performance characteristic curve for model evaluation: an application in information diffusion prediction [2.8686437689115354]
We propose a metric based on information entropy to quantify the randomness in diffusion data. We then identify a scaling pattern between the randomness and the prediction accuracy of the model. The validity of the curve is tested by three prediction models in the same family.
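The entropy-based randomness quantification mentioned in this summary can be illustrated with the standard Shannon entropy of a discrete sequence's empirical distribution; this generic sketch is an assumption about the general idea, not the paper's specific metric.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (in bits) of the empirical distribution of a
    discrete sequence; higher values indicate more randomness, with 0
    for a constant sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform two-symbol sequence yields the maximum of 1 bit, while a constant sequence yields 0; the scaling pattern the paper describes relates such a randomness score to achievable prediction accuracy.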
arXiv Detail & Related papers (2023-09-18T07:32:57Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
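Sample-level precision-recall analysis for generative models, as referenced in this summary, can be approximated with a toy nearest-neighbor construction: a generated sample is "precise" if it lands inside the k-NN ball of some real sample, and a real sample is "recalled" if it lands inside the k-NN ball of some generated sample. This simplified sketch is an assumption about the general family of such metrics, not the paper's 3-dimensional metric.

```python
import numpy as np

def knn_radius(points, k=3):
    """Distance from each point to its k-th nearest neighbour within the set."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)
    return d_sorted[:, k]  # column 0 is the self-distance (0)

def precision_recall(real, fake, k=3):
    """Toy sample-level precision/recall between real and generated
    point clouds, based on k-NN ball membership."""
    r_rad = knn_radius(real, k)
    f_rad = knn_radius(fake, k)
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)
    precision = np.mean((d <= r_rad[None, :]).any(axis=1))
    recall = np.mean((d.T <= f_rad[None, :]).any(axis=1))
    return precision, recall
```

Identical real and generated sets score 1.0 on both axes, while well-separated sets score 0.0 on both, which is the sample-level diagnostic behavior the summary alludes to.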
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Robust Validation: Confident Predictions Even When Distributions Shift [19.327409270934474]
We describe procedures for robust predictive inference, where a model provides uncertainty estimates on its predictions rather than point predictions.
We present a method that produces prediction sets (almost exactly) giving the right coverage level for any test distribution in an $f$-divergence ball around the training population.
An essential component of our methodology is to estimate the amount of expected future data shift and build robustness to it.
arXiv Detail & Related papers (2020-08-10T17:09:16Z)
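The coverage-guaranteeing prediction sets described in this last entry build on conformal-style constructions. A minimal split-conformal sketch of the standard (non-robust) variant is shown below; the robust f-divergence-ball extension is the paper's contribution and is not implemented here.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred, alpha=0.1):
    """Split-conformal prediction interval: widen a point prediction by
    the finite-sample-corrected (1 - alpha) quantile of held-out
    calibration residuals, giving ~(1 - alpha) coverage when calibration
    and test data are exchangeable (i.e., no distribution shift)."""
    n = len(residuals_cal)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(np.abs(residuals_cal), q_level)
    return y_pred - q, y_pred + q
```

The exchangeability assumption is exactly what breaks under distribution shift, motivating the paper's approach of estimating the expected shift and building robustness to it.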
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.