fev-bench: A Realistic Benchmark for Time Series Forecasting
- URL: http://arxiv.org/abs/2509.26468v1
- Date: Tue, 30 Sep 2025 16:17:18 GMT
- Title: fev-bench: A Realistic Benchmark for Time Series Forecasting
- Authors: Oleksandr Shchur, Abdul Fatir Ansari, Caner Turkmen, Lorenzo Stella, Nick Erickson, Pablo Guerron, Michael Bohlke-Schneider, Yuyang Wang
- Abstract summary: Existing benchmarks often have narrow domain coverage or overlook important real-world settings. We propose fev-bench, a benchmark comprising 100 forecasting tasks across seven domains. fev-bench employs principled aggregation methods with bootstrapped confidence intervals to report model performance.
- Score: 19.931138737002215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benchmark quality is critical for meaningful evaluation and sustained progress in time series forecasting, particularly given the recent rise of pretrained models. Existing benchmarks often have narrow domain coverage or overlook important real-world settings, such as tasks with covariates. Additionally, their aggregation procedures often lack statistical rigor, making it unclear whether observed performance differences reflect true improvements or random variation. Many benchmarks also fail to provide infrastructure for consistent evaluation or are too rigid to integrate into existing pipelines. To address these gaps, we propose fev-bench, a benchmark comprising 100 forecasting tasks across seven domains, including 46 tasks with covariates. Supporting the benchmark, we introduce fev, a lightweight Python library for benchmarking forecasting models that emphasizes reproducibility and seamless integration with existing workflows. Using fev, fev-bench employs principled aggregation methods with bootstrapped confidence intervals to report model performance along two complementary dimensions: win rates and skill scores. We report results on fev-bench for various pretrained, statistical, and baseline models, and identify promising directions for future research.
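The abstract does not spell out fev's API, but the aggregation idea is concrete enough to sketch. Below is a minimal NumPy illustration, not fev's actual interface: the function name, the tie-free definition of a "win", and the toy error data are all assumptions. It shows how a task-level win rate with a bootstrapped 95% confidence interval could be computed.

```python
import numpy as np

def bootstrap_win_rate(errors_a, errors_b, n_boot=1000, seed=0):
    """Win rate of model A over model B across tasks, with a
    bootstrapped 95% confidence interval over task resamples."""
    rng = np.random.default_rng(seed)
    wins = (np.asarray(errors_a) < np.asarray(errors_b)).astype(float)
    n = len(wins)
    # Resample tasks with replacement and recompute the win rate.
    boots = [wins[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return wins.mean(), (lo, hi)

# Toy example: per-task errors for two models on 100 tasks.
rng = np.random.default_rng(1)
errs_a = rng.gamma(2.0, 1.0, size=100)
errs_b = errs_a + rng.normal(0.1, 0.5, size=100)
print(bootstrap_win_rate(errs_a, errs_b))  # point estimate and (lower, upper)
```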
Related papers
- Benchmarking Few-shot Transferability of Pre-trained Models with Improved Evaluation Protocols [123.73663884421272]
Few-shot transfer has been revolutionized by stronger pre-trained models and improved adaptation algorithms. We establish FEWTRANS, a comprehensive benchmark containing 10 diverse datasets. By releasing FEWTRANS, we aim to provide a rigorous "ruler" to streamline reproducible advances in few-shot transfer learning research.
arXiv Detail & Related papers (2026-02-28T05:41:57Z)
- SimDiff: Simpler Yet Better Diffusion Model for Time Series Point Forecasting [8.141505251306622]
Diffusion models have recently shown promise in time series forecasting, yet they often fail to achieve state-of-the-art point estimation performance. We propose SimDiff, a single-stage, end-to-end framework for point estimation.
arXiv Detail & Related papers (2025-11-24T16:09:55Z)
- Chronos-2: From Univariate to Universal Forecasting [52.753731922908905]
Chronos-2 is a pretrained model capable of handling univariate, multivariate, and covariate-informed forecasting tasks in a zero-shot manner. It delivers state-of-the-art performance across three comprehensive benchmarks: fev-bench, GIFT-Eval, and Chronos Benchmark II. The in-context learning capabilities of Chronos-2 establish it as a general-purpose forecasting model that can be used "as is" in real-world forecasting pipelines.
arXiv Detail & Related papers (2025-10-17T17:00:53Z)
- SeFEF: A Seizure Forecasting Evaluation Framework [0.0]
We introduce a Python-based framework aimed at streamlining the development, assessment, and documentation of seizure forecasting algorithms. The framework automates data labeling, cross-validation splitting, forecast post-processing, performance evaluation, and reporting. It supports various forecasting horizons and includes a model card that documents implementation details, training and evaluation settings, and performance metrics.
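The summary names the automation steps but not SeFEF's interfaces. Purely as an illustration of the cross-validation-splitting step, here is a generic chronological (expanding-window) splitter; the function name and defaults are hypothetical, not SeFEF's API.

```python
import numpy as np

def expanding_window_splits(n_samples, n_folds=5, min_train=100):
    """Yield (train, test) index pairs for time-ordered cross-validation:
    each fold trains on everything before a cutoff, tests on the next block."""
    cutoffs = np.linspace(min_train, n_samples, n_folds + 1, dtype=int)
    for i in range(n_folds):
        yield np.arange(cutoffs[i]), np.arange(cutoffs[i], cutoffs[i + 1])

for train, test in expanding_window_splits(1000):
    print(len(train), len(test))  # training set grows, test block moves forward
```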
arXiv Detail & Related papers (2025-10-13T11:10:27Z)
- The Lie of the Average: How Class Incremental Learning Evaluation Deceives You? [48.83567710215299]
Class Incremental Learning (CIL) requires models to continuously learn new classes without forgetting previously learned ones. We argue that a robust CIL evaluation protocol should accurately characterize and estimate the entire performance distribution. We propose EDGE, an evaluation protocol that adaptively identifies and samples extreme class sequences using inter-task similarity.
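EDGE's adaptive, similarity-guided sampling is not detailed in this summary. For contrast, a naive way to expose the performance distribution is to evaluate many random class orders, as in the sketch below; `eval_fn` is a stand-in for a full incremental train-and-evaluate run, and the demo lambda is a toy.

```python
import random
import statistics

def accuracy_over_orders(eval_fn, classes, n_orders=50, seed=0):
    """Estimate the spread of final accuracy across random class orderings,
    instead of reporting a single average over a few fixed orders."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n_orders):
        order = classes[:]
        rng.shuffle(order)
        accs.append(eval_fn(order))  # incremental training in this order
    return min(accs), statistics.mean(accs), max(accs)

# Toy eval_fn: pretend accuracy depends on where "cat" appears in the order.
print(accuracy_over_orders(
    lambda order: 1.0 / (1 + order.index("cat")),
    ["cat", "dog", "bird"], n_orders=10))
```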
arXiv Detail & Related papers (2025-09-26T17:00:15Z)
- Enhancing Transformer-Based Foundation Models for Time Series Forecasting via Bagging, Boosting and Statistical Ensembles [7.787518725874443]
Time series foundation models (TSFMs) have shown strong generalization and zero-shot capabilities for time series forecasting, anomaly detection, classification, and imputation. This paper investigates a suite of statistical and ensemble-based enhancement techniques to improve robustness and accuracy.
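The specific bagging and boosting variants are not given in this summary. As one member of the "statistical ensembles" family, a simple elementwise-median combination of point forecasts looks like this (the forecast arrays are made up):

```python
import numpy as np

def median_ensemble(forecasts):
    """Combine point forecasts from several models by the elementwise
    median, a simple and robust statistical ensemble."""
    return np.median(np.stack(forecasts, axis=0), axis=0)

preds = [np.array([10.2, 11.0, 12.1]),
         np.array([ 9.8, 11.4, 11.7]),
         np.array([10.5, 10.9, 12.4])]
print(median_ensemble(preds))  # [10.2 11.  12.1]
```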
arXiv Detail & Related papers (2025-08-18T04:06:26Z)
- Confidence Intervals for Evaluation of Data Mining [3.8485822412233452]
We consider statistical inference about general performance measures used in data mining. We study the finite sample coverage probabilities for confidence intervals. We also propose a "blurring correction" on the variance to improve the finite sample performance.
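The "blurring correction" itself is not described in this summary. The sketch below only illustrates what a finite-sample coverage probability is, by checking how often a textbook normal-approximation interval actually covers the true mean of skewed data:

```python
import numpy as np

def coverage_probability(n=30, trials=2000, seed=0):
    """Monte Carlo estimate of the coverage of a nominal 95%
    normal-approximation CI for the mean, at small sample size n."""
    rng = np.random.default_rng(seed)
    z, hits = 1.96, 0
    for _ in range(trials):
        x = rng.exponential(1.0, size=n)          # skewed data, true mean 1.0
        m, s = x.mean(), x.std(ddof=1) / np.sqrt(n)
        hits += (m - z * s) <= 1.0 <= (m + z * s)
    return hits / trials  # typically lands below the nominal 0.95 here

print(coverage_probability())
```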
arXiv Detail & Related papers (2025-02-10T20:22:02Z)
- FamiCom: Further Demystifying Prompts for Language Models with Task-Agnostic Performance Estimation [73.454943870226]
Language models have shown impressive in-context learning capabilities.
We propose FamiCom, a measure that provides more comprehensive task-agnostic performance estimation.
arXiv Detail & Related papers (2024-06-17T06:14:55Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [49.15931834209624]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world. We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique. Under the elaborated robustness metric, a model is judged robust only if its performance is consistently accurate across each entire clique.
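The exact metric is only paraphrased above. One strict reading of "consistently accurate across each entire clique" is an all-or-nothing score per clique, sketched here on toy data:

```python
def clique_robustness(correct, cliques):
    """Fraction of knowledge-invariant cliques on which the model is
    correct for *every* member example (an all-or-nothing reading)."""
    return sum(all(correct[ex] for ex in c) for c in cliques) / len(cliques)

# correct: {example_id: bool}; cliques: lists of paraphrase example ids.
correct = {"a1": True, "a2": True, "b1": True, "b2": False}
print(clique_robustness(correct, [["a1", "a2"], ["b1", "b2"]]))  # 0.5
```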
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to counteract the inflated evaluation caused by biased datasets.
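The precise discount in dR@n,IoU@m is not reproduced here. A plausible sketch computes temporal IoU and then scales a top-n hit by how far the predicted boundaries drift from the annotation; the boundary-distance discount form below is an assumption, not necessarily the paper's exact formula.

```python
def t_iou(pred, gt):
    """Temporal IoU between (start, end) moments."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def discounted_recall_hit(preds, gt, duration, n=1, m=0.5):
    """Return the discounted hit for one query: 0 if no top-n prediction
    reaches tIoU >= m, else a hit scaled by start/end boundary distances
    (this discount form is an illustrative assumption)."""
    for p in preds[:n]:
        if t_iou(p, gt) >= m:
            a_s = 1.0 - abs(p[0] - gt[0]) / duration
            a_e = 1.0 - abs(p[1] - gt[1]) / duration
            return a_s * a_e
    return 0.0

print(discounted_recall_hit([(2.0, 7.5)], (2.5, 8.0), duration=30.0))  # ~0.967
```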
arXiv Detail & Related papers (2022-03-10T08:58:18Z)
- Towards More Fine-grained and Reliable NLP Performance Prediction [85.78131503006193]
We make two contributions to improving performance prediction for NLP tasks.
First, we examine performance predictors for holistic measures of accuracy like F1 or BLEU.
Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration.
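The paper's own reliability analysis is not reproduced in this summary. A generic, expected-calibration-error-style check for a performance predictor, binning predictions and comparing each bin's mean prediction to the mean observed score (both assumed to lie in [0, 1]), might look like:

```python
import numpy as np

def calibration_error(pred, actual, n_bins=10):
    """Weighted gap between predicted and observed performance per bin."""
    pred, actual = np.asarray(pred), np.asarray(actual)
    bins = np.clip((pred * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(pred[mask].mean() - actual[mask].mean())
    return err

# Toy data: predicted vs. observed scores for four tasks.
print(calibration_error([0.9, 0.8, 0.3, 0.2], [0.7, 0.8, 0.3, 0.4]))  # ~0.1
```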
arXiv Detail & Related papers (2021-02-10T15:23:20Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
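The meta-learning of the confidence scores is the paper's contribution and is not shown here. Given some per-query confidences, a common transductive prototype refinement (this particular update rule is an assumption) weights each unlabeled query's soft class assignment by its confidence:

```python
import numpy as np

def refine_prototypes(protos, queries, conf, n_steps=1):
    """Fold confidence-weighted, soft-assigned queries back into the
    class prototypes; protos: (C, D), queries: (Q, D), conf: (Q,)."""
    for _ in range(n_steps):
        logits = -((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        logits -= logits.max(axis=1, keepdims=True)       # numerical stability
        soft = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        w = soft * conf[:, None]              # confidence-weighted assignment
        protos = (protos + w.T @ queries) / (1.0 + w.sum(axis=0)[:, None])
    return protos

# Toy 2-class, 2-D example with two unlabeled queries.
protos = np.array([[0.0, 0.0], [4.0, 4.0]])
queries = np.array([[0.5, 0.0], [3.5, 4.0]])
conf = np.array([0.9, 0.2])
print(refine_prototypes(protos, queries, conf))
```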
arXiv Detail & Related papers (2020-02-27T10:22:17Z)