Dimension-Free Decision Calibration for Nonlinear Loss Functions
- URL: http://arxiv.org/abs/2504.15615v1
- Date: Tue, 22 Apr 2025 06:14:23 GMT
- Title: Dimension-Free Decision Calibration for Nonlinear Loss Functions
- Authors: Jingwu Tang, Jiayun Wu, Zhiwei Steven Wu, Jiahao Zhang
- Abstract summary: Calibration for high-dimensional prediction outcome spaces requires exponential computational and statistical complexity. We introduce algorithms that, given $\mathrm{poly}(|A|,1/\epsilon)$ samples, can efficiently post-process any initial predictor to satisfy decision calibration without worsening accuracy.
- Score: 27.879384242436835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When model predictions inform downstream decision making, a natural question is under what conditions can the decision-makers simply respond to the predictions as if they were the true outcomes. Calibration suffices to guarantee that simple best-response to predictions is optimal. However, calibration for high-dimensional prediction outcome spaces requires exponential computational and statistical complexity. The recent relaxation known as decision calibration ensures the optimality of the simple best-response rule while requiring only polynomial sample complexity in the dimension of outcomes. However, known results on calibration and decision calibration crucially rely on linear loss functions for establishing best-response optimality. A natural approach to handle nonlinear losses is to map outcomes $y$ into a feature space $\phi(y)$ of dimension $m$, then approximate losses with linear functions of $\phi(y)$. Unfortunately, even simple classes of nonlinear functions can demand exponentially large or infinite feature dimensions $m$. A key open problem is whether it is possible to achieve decision calibration with sample complexity independent of~$m$. We begin with a negative result: even verifying decision calibration under standard deterministic best response inherently requires sample complexity polynomial in~$m$. Motivated by this lower bound, we investigate a smooth version of decision calibration in which decision-makers follow a smooth best-response. This smooth relaxation enables dimension-free decision calibration algorithms. We introduce algorithms that, given $\mathrm{poly}(|A|,1/\epsilon)$ samples and any initial predictor~$p$, can efficiently post-process it to satisfy decision calibration without worsening accuracy. Our algorithms apply broadly to function classes that can be well-approximated by bounded-norm functions in (possibly infinite-dimensional) separable RKHS.
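To make the abstract's pipeline concrete (map outcomes into features $\phi(y)$, treat losses as linear in $\phi(y)$, and replace the deterministic best response with a smooth one), here is a minimal sketch. It is not the authors' post-processing algorithm: it only estimates, for a single fixed linearized loss, how far a predictor is from decision calibration when the decision-maker plays a softmax ("smooth") best response. All names (`smooth_best_response`, `decision_calibration_gap`, the temperature `eta`, the toy data) are hypothetical.

```python
import numpy as np

def smooth_best_response(pred_features, loss_matrix, eta=10.0):
    """Softmax ("smooth") best response to predicted outcome features.

    pred_features: predicted E[phi(y)], shape (m,)
    loss_matrix:   shape (|A|, m); expected loss of action a under the
                   prediction is loss_matrix[a] @ pred_features
    Returns a probability distribution over the |A| actions.
    """
    expected_losses = loss_matrix @ pred_features
    logits = -eta * expected_losses        # lower loss -> higher weight
    logits -= logits.max()                 # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def decision_calibration_gap(pred_feats, true_feats, loss_matrix, eta=10.0):
    """Plug-in gap for ONE loss: realized loss of the smooth best
    response minus the loss the decision-maker anticipated under the
    predictions. (Decision calibration quantifies over a class of
    losses; this sketch checks a single one.)"""
    gaps = []
    for p_feat, y_feat in zip(pred_feats, true_feats):
        pi = smooth_best_response(p_feat, loss_matrix, eta)
        anticipated = pi @ (loss_matrix @ p_feat)
        realized = pi @ (loss_matrix @ y_feat)
        gaps.append(realized - anticipated)
    return float(np.mean(gaps))

# Toy usage: 3 actions, 4-dimensional feature space, 500 samples.
rng = np.random.default_rng(0)
L = rng.uniform(0.0, 1.0, size=(3, 4))       # hypothetical linearized losses
preds = rng.dirichlet(np.ones(4), size=500)  # predicted E[phi(y)]
truth = rng.dirichlet(np.ones(4), size=500)  # realized phi(y)
print(decision_calibration_gap(preds, truth, L))
```

The temperature `eta` interpolates between uniform play (`eta -> 0`) and the deterministic argmin best response (`eta -> infinity`): the paper's lower bound concerns the deterministic end, while its dimension-free guarantees hold in the smooth regime.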
Related papers
- Efficient Calibration for Decision Making [26.81026842833163]
Hu and Wu (FOCS'24) define an approximate calibration measure called the calibration decision loss ($\mathsf{CDL}$). We develop a comprehensive theory of when $\mathsf{CDL}_K$ is information-theoretically and computationally tractable.
arXiv Detail & Related papers (2025-11-17T18:52:00Z) - Robust Decision Making with Partially Calibrated Forecasts [32.96842403002962]
We study how a conservative decision maker should map predictions endowed with weaker ("partial") calibration guarantees to actions. We characterize their minimax optimal decision rule via a duality argument, and show that, surprisingly, "trusting the predictions and acting accordingly" is recovered in this minimax sense.
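A schematic rendering of the minimax rule described above (only the generic shape; the paper's guarantee classes and exact uncertainty sets differ): if $Q(p)$ denotes the set of outcome distributions consistent with the partial calibration guarantee at prediction $p$, the conservative decision-maker plays

$$a^*(p) \in \arg\min_{a \in A} \max_{q \in Q(p)} \mathbb{E}_{y \sim q}\big[\ell(a, y)\big],$$

and the duality argument shows that this robust rule can coincide with the simple best response $\arg\min_{a \in A} \mathbb{E}_{y \sim p}[\ell(a, y)]$.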
arXiv Detail & Related papers (2025-10-27T16:09:07Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision
Processes [80.89852729380425]
We propose the first computationally efficient algorithm that achieves the nearly minimax optimal regret $\tilde{O}(d\sqrt{H^3K})$.
Our work provides a complete answer to optimal RL with linear MDPs, and the developed algorithm and theoretical tools may be of independent interest.
arXiv Detail & Related papers (2022-12-12T18:58:59Z) - Calibrating Predictions to Decisions: A Novel Approach to Multi-Class
Calibration [118.26862029820447]
We introduce a new notion, *decision calibration*, which requires the predicted distribution and the true distribution to be "indistinguishable" to a set of downstream decision-makers.
Decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural networks.
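A schematic way to write this condition (in the notation of the main abstract above, eliding the exact quantifiers and approximation parameters): a predictor $p$ is decision calibrated for a loss class $\mathcal{L}$ if, for every $\ell \in \mathcal{L}$ with induced best response $\delta_\ell(p(x)) \in \arg\min_{a \in A} \mathbb{E}_{\hat{y} \sim p(x)}[\ell(a, \hat{y})]$,

$$\mathbb{E}_{x,y}\big[\ell(\delta_\ell(p(x)), y)\big] = \mathbb{E}_{x}\,\mathbb{E}_{\hat{y} \sim p(x)}\big[\ell(\delta_\ell(p(x)), \hat{y})\big],$$

so each decision-maker's anticipated loss under the predictions matches the loss actually realized.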
arXiv Detail & Related papers (2021-07-12T20:17:28Z) - Adiabatic Quantum Feature Selection for Sparse Linear Regression [0.17499351967216337]
We formulate the problem as a QUBO and compare solution quality on synthetic and real-world datasets.
The results demonstrate the effectiveness of the proposed adiabatic quantum computing approach in finding the optimal solution.
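One common way to cast sparse feature selection as a QUBO is the correlation-based objective sketched below; this is a standard formulation, not necessarily the paper's exact one, and all names are hypothetical.

```python
import numpy as np

def feature_selection_qubo(X, y, alpha=0.5):
    """Build a QUBO matrix Q for feature selection.

    Binary z_i = 1 selects feature i. The objective z^T Q z rewards
    features correlated with the target (diagonal) and penalizes
    redundant, mutually correlated pairs (off-diagonal):
        Q[i, i] = -|corr(x_i, y)|
        Q[i, j] = alpha * |corr(x_i, x_j)|  for i < j
    """
    d = X.shape[1]
    Q = np.zeros((d, d))
    for i in range(d):
        Q[i, i] = -abs(np.corrcoef(X[:, i], y)[0, 1])
        for j in range(i + 1, d):
            Q[i, j] = alpha * abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
    return Q

# Toy usage: brute-force the minimizer on 6 features (an annealer
# would replace this exhaustive search).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
Q = feature_selection_qubo(X, y)
best = min((np.array([(b >> i) & 1 for i in range(6)]) for b in range(2 ** 6)),
           key=lambda z: z @ Q @ z)
print("selected features:", np.flatnonzero(best))
```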
arXiv Detail & Related papers (2021-06-04T09:14:01Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Extracting Optimal Solution Manifolds using Constrained Neural
Optimization [6.800113407368289]
Constrained optimization solution algorithms are restricted to point-based solutions.
We present an approach for extracting optimal solution sets as approximate manifolds, where unmodified, non-informed constraints are defined.
arXiv Detail & Related papers (2020-09-13T15:37:44Z) - Tightly Robust Optimization via Empirical Domain Reduction [22.63829081634384]
We propose an algorithm to determine the scale such that the solution has a good objective value.
Under some regularity conditions, the scale obtained by our algorithm is $O(\sqrt{n})$, whereas the scale obtained by a standard approach is $O(\sqrt{d/n})$.
arXiv Detail & Related papers (2020-02-29T12:24:56Z)