On Volume Minimization in Conformal Regression
- URL: http://arxiv.org/abs/2502.09985v1
- Date: Fri, 14 Feb 2025 08:14:22 GMT
- Title: On Volume Minimization in Conformal Regression
- Authors: Batiste Le Bars, Pierre Humbert
- Abstract summary: We study the question of volume optimality in split conformal regression. We first derive a finite-sample upper-bound on the excess volume loss of the interval returned by the classical split method. We introduce EffOrt, a methodology that modifies the learning step so that the base prediction function is selected in order to minimize the length of the returned intervals.
- Score: 8.673942897414934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the question of volume optimality in split conformal regression, a topic still poorly understood in comparison to coverage control. Using the fact that the calibration step can be seen as an empirical volume minimization problem, we first derive a finite-sample upper-bound on the excess volume loss of the interval returned by the classical split method. This important quantity measures the difference in length between the interval obtained with the split method and the shortest oracle prediction interval. Then, we introduce EffOrt, a methodology that modifies the learning step so that the base prediction function is selected in order to minimize the length of the returned intervals. In particular, our theoretical analysis of the excess volume loss of the prediction sets produced by EffOrt reveals the links between the learning and calibration steps, and notably the impact of the choice of the function class of the base predictor. We also introduce Ad-EffOrt, an extension of the previous method, which produces intervals whose size adapts to the value of the covariate. Finally, we evaluate the empirical performance and the robustness of our methodologies.
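To make the calibration-as-volume-minimization view concrete, below is a minimal sketch of standard split conformal regression with absolute-residual scores (names and the base predictor are illustrative, not from the paper). Every returned interval has length 2·q̂, which is exactly the quantity an excess volume loss compares against the shortest oracle interval.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Standard split conformal regression with absolute-residual scores."""
    # Learning step: fit any base predictor on the proper training split.
    model = LinearRegression().fit(X_train, y_train)

    # Calibration step: conformity scores are absolute residuals.
    scores = np.abs(y_cal - model.predict(X_cal))

    # Finite-sample-corrected (1 - alpha) empirical quantile of the scores.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level)

    # Every interval has the same length 2 * q_hat; EffOrt-style methods
    # instead pick the base predictor with this final length in mind.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```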
Related papers
- CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace [0.0]
We present CLAPS, a posterior-aware conformal regression method that pairs a Last-Layer Laplace Approximation with split-conformal calibration. From the resulting Gaussian posterior, CLAPS defines a simple two-sided posterior CDF score that aligns the conformity score with the full shape of the predictive distribution, not just a point estimate. This alignment yields narrower prediction intervals at the same target coverage, especially on small to medium datasets where data are scarce and uncertainty modeling matters.
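As a hedged illustration of such a score, the sketch below assumes the last-layer Laplace step yields a Gaussian predictive N(mu(x), sigma(x)^2) and uses the distance of the posterior CDF value from 1/2; the exact score used by CLAPS may differ.

```python
import numpy as np
from scipy.stats import norm

def two_sided_cdf_score(y, mu, sigma):
    """Hypothetical two-sided posterior CDF score: 0 at the predictive
    median, approaching 1 in either tail, so conformalizing it produces
    intervals shaped by the posterior rather than by a point estimate."""
    u = norm.cdf(y, loc=mu, scale=sigma)  # posterior CDF at the label
    return np.abs(2.0 * u - 1.0)
```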
arXiv Detail & Related papers (2025-12-01T07:58:21Z) - In-Context Learning Is Provably Bayesian Inference: A Generalization Theory for Meta-Learning [51.56484100374058]
We introduce a principled risk decomposition that separates the total ICL risk into two components: Bayes Gap and Posterior Variance. For a uniform-attention Transformer, we derive a non-asymptotic upper bound on this gap, which explicitly clarifies the dependence on the number of pretraining prompts. The Posterior Variance is a model-independent risk representing the intrinsic task uncertainty.
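Schematically, such a decomposition can be written as follows (notation ours, not necessarily the paper's):

```latex
\[
\underbrace{\mathcal{R}(\hat{f})}_{\text{total ICL risk}}
\;=\;
\underbrace{\mathcal{R}(\hat{f}) - \mathcal{R}(f_{\mathrm{Bayes}})}_{\text{Bayes Gap (model-dependent)}}
\;+\;
\underbrace{\mathcal{R}(f_{\mathrm{Bayes}})}_{\text{Posterior Variance (intrinsic)}}
\]
```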
arXiv Detail & Related papers (2025-10-13T03:42:31Z) - Error Propagation in Dynamic Programming: From Stochastic Control to Option Pricing [0.12891210250935145]
This paper investigates theoretical and methodological foundations for stochastic optimal control (SOC) in discrete time. We start by formulating the control problem in a general dynamic programming framework, introducing the mathematical structure needed for a detailed convergence analysis. We illustrate how our analysis naturally applies to a key financial application: the pricing of American options.
arXiv Detail & Related papers (2025-09-24T15:30:19Z) - Discretization-free Multicalibration through Loss Minimization over Tree Ensembles [22.276913140687725]
We propose a discretization-free multicalibration method over an ensemble of depth-two decision trees. Our algorithm provably achieves multicalibration, provided that the data distribution satisfies a technical condition we term loss saturation.
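A minimal boosting-style sketch of loss minimization over depth-two trees, with a stopping rule in the spirit of loss saturation (stop once no depth-two tree reduces the squared loss by more than a tolerance); the names, loss, and update are illustrative, not the paper's algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def calibrate_with_trees(preds, X, y, lr=0.5, tol=1e-4, max_rounds=200):
    """Iteratively correct predictions with depth-two regression trees."""
    preds = preds.astype(float).copy()
    for _ in range(max_rounds):
        residuals = y - preds
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
        correction = tree.predict(X)
        # Improvement in mean squared error from taking this step.
        old_loss = np.mean(residuals ** 2)
        new_loss = np.mean((residuals - lr * correction) ** 2)
        if old_loss - new_loss <= tol:
            break  # loss has saturated: no depth-two tree helps anymore
        preds += lr * correction
    return preds
```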
arXiv Detail & Related papers (2025-05-23T03:29:58Z) - Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks [23.33263252557512]
We address the problem of variance reduction in gradient-based meta-learning.
We propose a novel approach that reduces the variance of the gradient estimate by weighing each support point individually.
arXiv Detail & Related papers (2024-10-02T12:30:05Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
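To make the contrast concrete, the sketch below shows the standard pinball loss, which pins each endpoint to a fixed quantile level, next to an illustrative relaxed objective that scores the interval as a whole (width plus a coverage hinge) without fixing which quantiles the endpoints track; the latter is a stand-in for the idea, not the paper's exact loss.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Standard quantile (pinball) loss: ties q_pred to the tau-quantile."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def relaxed_interval_loss(y, lo, hi, alpha=0.1, lam=10.0):
    """Illustrative relaxed objective: penalize average width plus a
    hinge on miscoverage, leaving the endpoint quantiles unconstrained."""
    width = np.mean(hi - lo)
    coverage = np.mean((y >= lo) & (y <= hi))
    return width + lam * max(0.0, (1.0 - alpha) - coverage)
```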
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Semiparametric Efficient Inference in Adaptive Experiments [29.43493007296859]
We consider the problem of efficient inference of the Average Treatment Effect in a sequential experiment where the policy governing the assignment of subjects to treatment or control can change over time.
We first provide a central limit theorem for the Adaptive Augmented Inverse-Probability Weighted estimator, which is semiparametrically efficient, under weaker assumptions than those previously made in the literature.
We then consider the sequential inference setting, deriving asymptotic and nonasymptotic confidence sequences that are considerably tighter than previous methods.
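For reference, a minimal sketch of the (non-adaptive) AIPW point estimate of the average treatment effect; the adaptive variant in the paper additionally accounts for time-varying assignment policies, which this illustration omits.

```python
import numpy as np

def aipw_ate(y, a, e_hat, m1_hat, m0_hat):
    """Augmented inverse-probability-weighted ATE estimate.

    y: outcomes; a: binary treatments; e_hat: estimated propensities;
    m1_hat, m0_hat: outcome-regression estimates under treatment/control."""
    return np.mean(
        m1_hat - m0_hat
        + a * (y - m1_hat) / e_hat
        - (1 - a) * (y - m0_hat) / (1 - e_hat)
    )
```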
arXiv Detail & Related papers (2023-11-30T06:25:06Z) - Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
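Schematically, an uncertainty Bellman equation propagates local uncertainty through the dynamics the way the ordinary Bellman equation propagates reward (illustrative notation, not the paper's exact equation):

```latex
\[
u(s, a)
\;=\;
\underbrace{\sigma^{2}_{\mathrm{local}}(s, a)}_{\text{uncertainty at }(s,a)}
\;+\;
\gamma^{2}\, \mathbb{E}_{s', a'}\!\left[\, u(s', a') \,\right]
\]
```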
arXiv Detail & Related papers (2023-02-24T09:18:27Z) - Vector-Valued Least-Squares Regression under Output Regularity Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method, and study under which settings statistical performance is improved in comparison to the full-rank method.
arXiv Detail & Related papers (2022-11-16T15:07:00Z) - Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over a restricted range of false-positive (and, for TPAUC, true-positive) rates.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC optimization problem via distributionally robust optimization (DRO).
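For reference, the one-way partial AUC restricts the usual AUC integral to false-positive rates below a threshold (up to the choice of normalization; the two-way version also restricts the true-positive rate):

```latex
\[
\mathrm{OPAUC}(\beta)
\;=\;
\frac{1}{\beta} \int_{0}^{\beta} \mathrm{TPR}\!\left(\mathrm{FPR}^{-1}(t)\right) dt
\]
```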
arXiv Detail & Related papers (2022-10-08T08:26:22Z) - Causal Inference with Treatment Measurement Error: A Nonparametric Instrumental Variable Approach [24.52459180982653]
We propose a kernel-based nonparametric estimator for the causal effect when the cause is corrupted by error.
We empirically show that our proposed method, MEKIV, improves over baselines and is robust under changes in the strength of measurement error.
arXiv Detail & Related papers (2022-06-18T11:47:25Z) - Towards Data-Algorithm Dependent Generalization: a Case Study on Overparameterized Linear Regression [19.047997113063147]
We introduce a notion called data-algorithm compatibility, which considers the generalization behavior of the entire data-dependent training trajectory.
We perform a data-dependent trajectory analysis and derive a sufficient condition for compatibility in such a setting.
arXiv Detail & Related papers (2022-02-12T12:42:36Z) - Conservative Distributional Reinforcement Learning with Safety Constraints [22.49025480735792]
Safe exploration can be regarded as a constrained Markov decision problem where the expected long-term cost is constrained.
Previous off-policy algorithms convert the constrained optimization problem into the corresponding unconstrained dual problem by introducing the Lagrangian relaxation technique.
We present a novel off-policy reinforcement learning algorithm called Conservative Distributional Maximum a Posteriori Policy Optimization.
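The Lagrangian relaxation mentioned above turns the constrained problem into an unconstrained saddle-point problem (here d is the cost budget):

```latex
\[
\max_{\pi} \; \min_{\lambda \ge 0} \;
\mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t} r_t\right]
\;-\;
\lambda \left( \mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t} c_t\right] - d \right)
\]
```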
arXiv Detail & Related papers (2022-01-18T19:45:43Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
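For context, the standard AIS estimator (not specific to this paper) averages importance weights accumulated along an annealing path of unnormalized densities \(\gamma_0, \dots, \gamma_K\):

```latex
\[
\hat{Z} \;=\; \frac{1}{S} \sum_{s=1}^{S} w^{(s)},
\qquad
w \;=\; \prod_{k=1}^{K} \frac{\gamma_k(x_{k-1})}{\gamma_{k-1}(x_{k-1})}
\]
```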
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
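A minimal sketch of stochastic extragradient with iterate averaging on the bilinear game min_x max_y x^T A y; the scheduled restarting from the paper is omitted, and all names are illustrative.

```python
import numpy as np

def seg_with_averaging(A, x0, y0, eta=0.05, steps=2000, noise=0.01, seed=0):
    """SEG on min_x max_y x^T A y, returning the averaged iterates.

    Each step extrapolates with the current gradients, then updates using
    gradients at the extrapolated point; averaging the iterates is what
    yields convergence to the Nash equilibrium."""
    rng = np.random.default_rng(seed)
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
    for t in range(1, steps + 1):
        gx = A @ y + noise * rng.standard_normal(x.shape)
        gy = A.T @ x + noise * rng.standard_normal(y.shape)
        x_half, y_half = x - eta * gx, y + eta * gy   # extrapolation step
        gx2 = A @ y_half + noise * rng.standard_normal(x.shape)
        gy2 = A.T @ x_half + noise * rng.standard_normal(y.shape)
        x, y = x - eta * gx2, y + eta * gy2           # extragradient update
        x_avg += (x - x_avg) / t                      # running averages
        y_avg += (y - y_avg) / t
    return x_avg, y_avg
```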
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - Learning Prediction Intervals for Regression: Generalization and Calibration [12.576284277353606]
We study the generation of prediction intervals in regression for uncertainty quantification.
We use a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes.
We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performance compared to existing benchmarks.
arXiv Detail & Related papers (2021-02-26T17:55:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.