SEMF: Supervised Expectation-Maximization Framework for Predicting Intervals
- URL: http://arxiv.org/abs/2405.18176v4
- Date: Fri, 25 Apr 2025 20:49:18 GMT
- Title: SEMF: Supervised Expectation-Maximization Framework for Predicting Intervals
- Authors: Ilia Azizi, Marc-Olivier Boldi, Valérie Chavez-Demoulin
- Abstract summary: Supervised Expectation-Maximization Framework (SEMF) is a versatile and model-agnostic approach for generating prediction intervals with any ML model. SEMF consistently produces narrower prediction intervals while maintaining the desired coverage probability. Without using the quantile (pinball) loss, SEMF allows point predictors, including gradient-boosted trees and neural networks, to be calibrated with conformal quantile regression.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work introduces the Supervised Expectation-Maximization Framework (SEMF), a versatile and model-agnostic approach for generating prediction intervals with any ML model. SEMF extends the Expectation-Maximization algorithm, traditionally used in unsupervised learning, to a supervised context, leveraging latent variable modeling for uncertainty estimation. Through extensive empirical evaluation of diverse simulated distributions and 11 real-world tabular datasets, SEMF consistently produces narrower prediction intervals while maintaining the desired coverage probability, outperforming traditional quantile regression methods. Furthermore, without using the quantile (pinball) loss, SEMF allows point predictors, including gradient-boosted trees and neural networks, to be calibrated with conformal quantile regression. The results indicate that SEMF enhances uncertainty quantification under diverse data distributions and is particularly effective for models that otherwise struggle with inherent uncertainty representation.
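As a concrete illustration of the interval-construction and evaluation setup the abstract describes, the sketch below implements a conformalized quantile regression (CQR) baseline of the kind SEMF is benchmarked against and calibrated with. It is not the authors' SEMF implementation: the quantile learner (scikit-learn's GradientBoostingRegressor with the pinball loss), the synthetic heteroscedastic data, and the split proportions are illustrative assumptions.

```python
# Illustrative sketch only: a conformalized quantile regression (CQR) baseline
# of the kind the abstract benchmarks SEMF against. This is NOT the authors'
# SEMF implementation; the learner, data, and splits are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic heteroscedastic regression data (assumed, for illustration).
X = rng.uniform(-3, 3, size=(3000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.2 * np.abs(X[:, 0]))

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

alpha = 0.1  # target miscoverage, i.e. 90% prediction intervals

# Lower/upper quantile regressors trained with the quantile (pinball) loss.
q_lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# CQR conformity scores on the held-out calibration split.
scores = np.maximum(q_lo.predict(X_cal) - y_cal, y_cal - q_hi.predict(X_cal))
n = len(scores)
q_hat = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n), method="higher")

def predict_interval(X_new):
    """Calibrated interval with approximately (1 - alpha) marginal coverage."""
    return q_lo.predict(X_new) - q_hat, q_hi.predict(X_new) + q_hat

# The two criteria the abstract refers to: coverage (PICP) and interval width.
lo, hi = predict_interval(X_test)
print("coverage:", np.mean((y_test >= lo) & (y_test <= hi)))
print("mean interval width:", np.mean(hi - lo))
```

SEMF itself replaces the pinball-loss training step with its supervised EM procedure over latent variables; the coverage and average-width metrics printed above are the criteria under which the abstract reports narrower intervals at the desired coverage.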
Related papers
- MVG-CRPS: A Robust Loss Function for Multivariate Probabilistic Forecasting [17.212396544233307]
We propose MVG-CRPS, a strictly proper scoring rule for multivariate Gaussian (MVG) distributions.
MVG-CRPS admits a closed-form expression in terms of neural network outputs, thereby integrating seamlessly into deep learning frameworks; a generic univariate analogue of such a closed-form CRPS is sketched after this list.
arXiv Detail & Related papers (2024-10-11T15:10:20Z) - Ranking and Combining Latent Structured Predictive Scores without Labeled Data [2.5064967708371553]
This paper introduces a novel structured unsupervised ensemble learning model (SUEL)
It exploits the dependency between a set of predictors with continuous predictive scores, ranks the predictors without labeled data, and combines them into a weighted ensemble score.
The efficacy of the proposed methods is rigorously assessed through both simulation studies and a real-world application to risk-gene discovery.
arXiv Detail & Related papers (2024-08-14T20:14:42Z) - MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention-based predictor and a conditional discriminator to optimize prediction results at the coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z) - MCDFN: Supply Chain Demand Forecasting via an Explainable Multi-Channel Data Fusion Network Model [0.0]
We introduce the Multi-Channel Data Fusion Network (MCDFN), a hybrid architecture that integrates convolutional neural networks (CNN), Long Short-Term Memory networks (LSTM), and Gated Recurrent Units (GRU).
Our comparative benchmarking demonstrates that MCDFN outperforms seven other deep-learning models.
This research advances demand forecasting methodologies and offers practical guidelines for integrating MCDFN into supply chain systems.
arXiv Detail & Related papers (2024-05-24T14:30:00Z) - Adapting to Length Shift: FlexiLength Network for Trajectory Prediction [53.637837706712794]
Trajectory prediction plays an important role in various applications, including autonomous driving, robotics, and scene understanding.
Existing approaches mainly focus on developing compact neural networks to increase prediction precision on public datasets, typically employing a standardized input duration.
We introduce a general and effective framework, the FlexiLength Network (FLN), to enhance the robustness of existing trajectory prediction models against varying observation periods.
arXiv Detail & Related papers (2024-03-31T17:18:57Z) - Posterior Uncertainty Quantification in Neural Networks using Data Augmentation [3.9860047080844807]
We show that deep ensembling is a fundamentally mis-specified model class, since it assumes that future data are supported on existing observations only.
We propose MixupMP, a method that constructs a more realistic predictive distribution using popular data augmentation techniques.
Our empirical analysis showcases that MixupMP achieves superior predictive performance and uncertainty quantification on various image classification datasets.
arXiv Detail & Related papers (2024-03-18T17:46:07Z) - When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
arXiv Detail & Related papers (2023-10-17T20:30:16Z) - XRMDN: An Extended Recurrent Mixture Density Network for Short-Term Probabilistic Rider Demand Forecasting with High Volatility [16.047461063459846]
We propose an Extended Recurrent Mixture Density Network (XRMDN) to process demand residuals and variance through correlated modules.
XRMDN adeptly captures demand trends, thus significantly enhancing forecasting precision, particularly in high-volatility scenarios.
arXiv Detail & Related papers (2023-10-15T14:18:42Z) - Boosted Control Functions [10.503777692702952]
This work aims to bridge the gap between causal effect estimation and prediction tasks.
We establish a novel connection between distribution generalization from machine learning and simultaneous equation models and control functions from econometrics.
Within this framework, we propose a strong notion of invariance for a predictive model and compare it with existing (weaker) versions.
arXiv Detail & Related papers (2023-10-09T15:43:46Z) - Ensemble Multi-Quantiles: Adaptively Flexible Distribution Prediction for Uncertainty Quantification [4.728311759896569]
We propose a novel, succinct, and effective approach for distribution prediction to quantify uncertainty in machine learning.
It incorporates adaptively flexible distribution prediction of $\mathbb{P}(\mathbf{y}|\mathbf{X}=x)$ in regression tasks.
On extensive regression tasks from UCI datasets, we show that the proposed Ensemble Multi-Quantiles (EMQ) approach achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-26T11:45:32Z) - Bayesian Sparse Regression for Mixed Multi-Responses with Application to Runtime Metrics Prediction in Fog Manufacturing [6.288767115532775]
Fog manufacturing can greatly enhance traditional manufacturing systems through distributed computation in Fog units.
Predictive offloading methods are known to depend heavily on accurate prediction and uncertainty quantification of runtime performance metrics.
We propose a Bayesian sparse regression for multivariate mixed responses to enhance the prediction of runtime performance metrics.
arXiv Detail & Related papers (2022-10-10T16:14:08Z) - Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z) - Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z) - A Hierarchical Variational Neural Uncertainty Model for Stochastic Video Prediction [45.6432265855424]
We introduce Neural Uncertainty Quantifier (NUQ) - a principled quantification of the model's predictive uncertainty.
Our proposed framework trains more effectively compared to state-of-the-art models.
arXiv Detail & Related papers (2021-10-06T00:25:22Z) - When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z) - Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z) - Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z) - Probabilistic electric load forecasting through Bayesian Mixture Density Networks [70.50488907591463]
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
arXiv Detail & Related papers (2020-12-23T16:21:34Z) - An autoencoder wavelet based deep neural network with attention mechanism for multistep prediction of plant growth [4.077787659104315]
This paper presents a novel approach for predicting plant growth in agriculture, focusing on the prediction of plant Stem Diameter Variations (SDV).
Wavelet decomposition is applied to the original data to facilitate model fitting and reduce noise.
An encoder-decoder framework is developed using Long Short Term Memory (LSTM) and used for appropriate feature extraction from the data.
A recurrent neural network including LSTM and an attention mechanism is proposed for modelling long-term dependencies in the time series data.
arXiv Detail & Related papers (2020-12-07T20:30:39Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
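For context on the MVG-CRPS entry above (referenced there as the sketch after this list), the snippet below shows the standard closed-form CRPS for a univariate Gaussian forecast. It is a generic textbook formula, not the multivariate MVG-CRPS loss proposed in that paper.

```python
# Generic illustration, not the MVG-CRPS loss itself: closed-form CRPS for a
# univariate Gaussian forecast N(mu, sigma^2), evaluated at an observation y.
import numpy as np
from scipy.stats import norm

def crps_gaussian(y, mu, sigma):
    """CRPS(N(mu, sigma^2), y) = sigma * (z * (2 * Phi(z) - 1) + 2 * phi(z) - 1 / sqrt(pi))."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

print(crps_gaussian(y=1.3, mu=0.0, sigma=1.0))  # lower is better
```

A closed-form expression of this kind is cheap to evaluate and differentiable with respect to the network outputs, which is the property the MVG-CRPS summary highlights for integration into deep learning frameworks.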
This list is automatically generated from the titles and abstracts of the papers on this site.