CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace
- URL: http://arxiv.org/abs/2512.01384v2
- Date: Tue, 09 Dec 2025 14:56:02 GMT
- Title: CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace
- Authors: Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh,
- Abstract summary: We present CLAPS, a posterior-aware conformal regression method that pairs a Last-Layer Laplace Approximation with split-conformal calibration. From the resulting Gaussian posterior, CLAPS defines a simple two-sided posterior CDF score that aligns the conformity metric with the full predictive shape, not just a point estimate. This alignment yields narrower prediction intervals at the same target coverage, especially on small to medium datasets where data are scarce and uncertainty modeling matters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present CLAPS, a posterior-aware conformal regression method that pairs a Last-Layer Laplace Approximation with split-conformal calibration. From the resulting Gaussian posterior, CLAPS defines a simple two-sided posterior CDF score that aligns the conformity metric with the full predictive shape, not just a point estimate. This alignment yields narrower prediction intervals at the same target coverage, especially on small to medium tabular datasets where data are scarce and uncertainty modeling matters. We also provide a lightweight diagnostic suite that separates aleatoric and epistemic components and visualizes posterior behavior, helping practitioners understand why intervals shrink when they do. Across multiple benchmarks using the same MLP backbone, CLAPS consistently attains nominal coverage with improved efficiency and minimal overhead, offering a clear, practical upgrade to residual-based conformal baselines.
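The two-sided posterior CDF score described in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes the score takes the form |2·F(y) − 1| under the Gaussian posterior predictive F, and calibrates it with a standard split-conformal quantile on synthetic data.

```python
import math
import random
from statistics import NormalDist

STD = NormalDist()  # standard normal

def claps_score(y, mu, sigma):
    # Two-sided posterior-CDF conformity score: 0 at the predictive
    # median, approaching 1 in either tail (assumed form for illustration).
    return abs(2.0 * STD.cdf((y - mu) / sigma) - 1.0)

def claps_interval(mu, sigma, q_hat):
    # Invert the score: the set of y with score <= q_hat is a central
    # interval of the Gaussian posterior predictive.
    half = 0.5 * q_hat
    return (mu + sigma * STD.inv_cdf(0.5 - half),
            mu + sigma * STD.inv_cdf(0.5 + half))

# Split-conformal calibration at target coverage 1 - alpha = 0.9,
# using synthetic (y, mu, sigma) triples as the calibration set.
rng = random.Random(0)
alpha, n = 0.1, 500
cal = [(rng.gauss(0.0, 1.0), 0.0, 1.0) for _ in range(n)]
scores = sorted(claps_score(y, m, s) for y, m, s in cal)
k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)  # conformal rank
q_hat = scores[k]
lo, hi = claps_interval(0.0, 1.0, q_hat)
```

With a well-specified posterior, the score is approximately uniform on the calibration set, so q_hat lands near 1 − alpha and the interval approaches the central 90% region of the predictive Gaussian.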
Related papers
- Co-optimization for Adaptive Conformal Prediction [9.881784717196675]
We propose a framework that learns prediction intervals by jointly optimizing a center $m(x)$ and a radius $h(x)$. Experiments on synthetic and real benchmarks demonstrate that CoCP yields consistently shorter intervals and achieves state-of-the-art conditional-coverage diagnostics.
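As a concrete reading of the center/radius construction, the calibrated interval takes the familiar normalized-residual form sketched below. This is an illustration under assumed notation, not CoCP itself, which additionally co-optimizes m and h during training.

```python
import math

def cocp_q_hat(residual_scores, alpha):
    # Split-conformal quantile of the scaled residuals
    # s_i = |y_i - m(x_i)| / h(x_i) on a calibration set.
    s = sorted(residual_scores)
    n = len(s)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return s[k]

def cocp_interval(m, h, q_hat):
    # Interval from a learned center m(x) and radius h(x).
    return m - q_hat * h, m + q_hat * h

scores = [0.1, 0.4, 0.2, 0.9, 0.3, 0.5, 0.7, 0.6, 0.8, 1.0]
q = cocp_q_hat(scores, alpha=0.2)  # 80% target coverage
lo, hi = cocp_interval(m=2.0, h=0.5, q_hat=q)
```

A well-trained radius h(x) concentrates the scaled residuals, which pushes q toward 1 and yields intervals whose width tracks the local difficulty of x.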
arXiv Detail & Related papers (2026-03-02T10:43:19Z)
- Learnable Chernoff Baselines for Inference-Time Alignment [64.81256817158851]
We introduce Learnable Chernoff Baselines (LCB), a method for efficiently and approximately sampling from exponentially tilted kernels. We establish total-variation guarantees with respect to the ideal aligned model, and demonstrate in both continuous and discrete diffusion settings that LCB sampling closely matches ideal rejection sampling.
arXiv Detail & Related papers (2026-02-08T00:09:40Z)
- Fast Conformal Prediction using Conditional Interquantile Intervals [9.881784717196675]
We introduce Conformal Interquantile Regression (CIR), a conformal regression method that constructs near-minimal prediction intervals with guaranteed coverage. We also propose CIR+, which enhances CIR by incorporating a width-based selection rule for interquantile intervals.
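The width-based selection idea can be illustrated as follows: among all interquantile intervals [Q(t), Q(t + mass)] carrying the same probability mass, keep the narrowest. This is a sketch under assumed notation, not the CIR+ procedure itself.

```python
from statistics import NormalDist

def narrowest_interquantile(quantile_fn, levels, mass):
    # Scan candidate lower levels t and keep the narrowest interval
    # [Q(t), Q(t + mass)]; every candidate carries probability `mass`.
    best = None
    for t in levels:
        if t + mass >= 1.0:
            break
        lo, hi = quantile_fn(t), quantile_fn(t + mass)
        if best is None or hi - lo < best[1] - best[0]:
            best = (lo, hi)
    return best

# For a symmetric (standard normal) distribution the narrowest 90%
# interquantile interval is the central one, [Q(0.05), Q(0.95)].
Q = NormalDist().inv_cdf
levels = [i / 100 for i in range(1, 100)]
lo, hi = narrowest_interquantile(Q, levels, mass=0.90)
```

For skewed conditional distributions the selected interval shifts away from the central one, which is exactly where a width-based rule can beat fixed symmetric quantiles.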
arXiv Detail & Related papers (2026-01-06T07:01:08Z)
- Concept Regions Matter: Benchmarking CLIP with a New Cluster-Importance Approach [20.898059440239603]
Cluster-based Concept Importance (CCI) is a novel interpretability method that sets a new state of the art on faithfulness benchmarks. We also present a comprehensive evaluation of eighteen CLIP variants.
arXiv Detail & Related papers (2025-11-17T05:01:24Z)
- Overlap-Adaptive Regularization for Conditional Average Treatment Effect Estimation [59.153491256972806]
State-of-the-art methods for CATE estimation often perform poorly in the presence of low overlap. We introduce Overlap-Adaptive Regularization (OAR), which regularizes target models proportionally to overlap weights. OAR significantly improves CATE estimation in low-overlap settings compared to constant regularization.
arXiv Detail & Related papers (2025-09-29T15:56:24Z)
- Post-Hoc Split-Point Self-Consistency Verification for Efficient, Unified Quantification of Aleatoric and Epistemic Uncertainty in Deep Learning [5.996056764788456]
Uncertainty quantification (UQ) is vital for trustworthy deep learning, yet existing methods are either computationally intensive or provide only partial, task-specific estimates. We propose a post-hoc single-forward-pass framework that jointly captures aleatoric and epistemic uncertainty without modifying or retraining pretrained models. Our method applies Split-Point Analysis (SPA) to decompose predictive residuals into upper and lower subsets, computing Mean Absolute Residuals (MARs) on each side.
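A minimal sketch of the split-point decomposition described above. This is an assumed reading of the abstract: the split point (here zero) and the aggregation are illustrative, not the paper's exact procedure.

```python
def split_point_mars(residuals, split=0.0):
    # Partition residuals at `split` and compute the Mean Absolute
    # Residual (MAR) on each side, giving asymmetric half-widths:
    # one for under-prediction, one for over-prediction.
    lower = [abs(r) for r in residuals if r < split]
    upper = [abs(r) for r in residuals if r >= split]
    mar_lo = sum(lower) / len(lower) if lower else 0.0
    mar_up = sum(upper) / len(upper) if upper else 0.0
    return mar_lo, mar_up

residuals = [-2.0, -0.5, -0.1, 0.2, 0.4, 3.0]
mar_lo, mar_up = split_point_mars(residuals)
```

The two MARs differ whenever the residual distribution is skewed, which is what lets a single forward pass report asymmetric uncertainty.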
arXiv Detail & Related papers (2025-09-16T17:16:01Z)
- Approximating Full Conformal Prediction for Neural Network Regression with Gauss-Newton Influence [8.952347049759094]
We construct prediction intervals for neural network regressors post-hoc without held-out data. We train just once and locally perturb model parameters using Gauss-Newton influence.
arXiv Detail & Related papers (2025-07-27T13:34:32Z)
- On Volume Minimization in Conformal Regression [8.673942897414934]
We study the question of volume optimality in split conformal regression. We first derive a finite-sample upper bound on the excess volume loss of the interval returned by the classical split method. We then introduce EffOrt, a methodology that modifies the learning step so that the base prediction function is selected to minimize the length of the returned intervals.
arXiv Detail & Related papers (2025-02-14T08:14:22Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization [59.038366742773164]
Linear classifiers and leaky ReLU networks trained by gradient flow on the logistic loss have an implicit bias towards satisfying the Karush-Kuhn-Tucker (KKT) conditions.
In this work we establish a number of settings where the satisfaction of these conditions implies benign overfitting in linear classifiers and in two-layer leaky ReLU networks.
arXiv Detail & Related papers (2023-03-02T18:24:26Z)
- Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning [78.30395044401321]
We develop a novel model-based approach to reinforcement learning (MBRL). It relaxes the assumptions on the target transition model, requiring only that it belong to a generic family of mixture models. It achieves up to a 50% reduction in wall-clock time in some continuous control environments.
arXiv Detail & Related papers (2022-06-02T17:27:49Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
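A binning-free KS-style measure can be sketched by comparing cumulative predicted confidence against cumulative accuracy over samples sorted by confidence. This is an illustrative construction in the spirit of the abstract, not the paper's exact estimator.

```python
def ks_calibration_error(confidences, correct):
    # Max gap between the cumulative predicted confidence and the
    # cumulative empirical accuracy, evaluated over samples sorted by
    # confidence -- no binning required, unlike ECE-style measures.
    n = len(confidences)
    pairs = sorted(zip(confidences, correct))
    cum_conf = cum_acc = 0.0
    gap = 0.0
    for conf, hit in pairs:
        cum_conf += conf / n
        cum_acc += hit / n
        gap = max(gap, abs(cum_conf - cum_acc))
    return gap

# Roughly calibrated predictions vs. uniformly overconfident ones
# (50% empirical accuracy in both cases):
well = ks_calibration_error([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])
over = ks_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0])
```

Because the gap is taken over the cumulative curves rather than fixed bins, the measure avoids the bin-count sensitivity of standard calibration error estimates.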
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.