Efficient Swap Multicalibration of Elicitable Properties
- URL: http://arxiv.org/abs/2511.04907v1
- Date: Fri, 07 Nov 2025 01:14:39 GMT
- Title: Efficient Swap Multicalibration of Elicitable Properties
- Authors: Lunjia Hu, Haipeng Luo, Spandan Senapati, Vatsal Sharan
- Abstract summary: The work of [NR23] established a surprising connection between multicalibration for an arbitrary property $\Gamma$ and property elicitation. In this paper, we generalize multicalibration for an elicitable property $\Gamma$ from group membership functions to arbitrary bounded hypothesis classes.
- Score: 41.64272548564929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multicalibration [HJKRR18] is an algorithmic fairness perspective that demands that the predictions of a predictor are correct conditional on themselves and membership in a collection of potentially overlapping subgroups of a population. The work of [NR23] established a surprising connection between multicalibration for an arbitrary property $\Gamma$ (e.g., mean or median) and property elicitation: a property $\Gamma$ can be multicalibrated if and only if it is elicitable, where elicitability is the notion that the true property value of a distribution can be obtained by solving a regression problem over the distribution. In the online setting, [NR23] proposed an inefficient algorithm that achieves $\sqrt T$ $\ell_2$-multicalibration error for a hypothesis class of group membership functions and an elicitable property $\Gamma$, after $T$ rounds of interaction between a forecaster and adversary. In this paper, we generalize multicalibration for an elicitable property $\Gamma$ from group membership functions to arbitrary bounded hypothesis classes and introduce a stronger notion -- swap multicalibration, following [GKR23]. Subsequently, we propose an oracle-efficient algorithm which, when given access to an online agnostic learner, achieves $T^{1/(r+1)}$ $\ell_r$-swap multicalibration error with high probability (for $r\ge2$) for a hypothesis class with bounded sequential Rademacher complexity and an elicitable property $\Gamma$. For the special case of $r=2$, this implies an oracle-efficient algorithm that achieves $T^{1/3}$ $\ell_2$-swap multicalibration error, which significantly improves on the previously established bounds for the problem [NR23, GMS25, LSS25a], and completely resolves an open question raised in [GJRR24] on the possibility of an oracle-efficient algorithm that achieves $\sqrt{T}$ $\ell_2$-mean multicalibration error by answering it in a strongly affirmative sense.
Related papers
- Towards Fundamental Limits for Active Multi-distribution Learning [16.639855803241524]
We develop new algorithms for active multi-distribution learning and establish improved label complexity upper and lower bounds. We show that the bound in the realizable setting is information-theoretically optimal and that the $k\nu/\varepsilon^2$ term in the agnostic setting is fundamental for proper learners.
arXiv Detail & Related papers (2025-06-21T06:08:58Z)
- Improved Bounds for Swap Multicalibration and Swap Omniprediction [31.959887895880765]
We show that it is possible to efficiently achieve $O(\sqrt{T})$ $\ell_2$-swap multicalibration error against bounded linear functions. We also establish an $O(\varepsilon^{-3})$ sample complexity of efficiently learning an $\varepsilon$-swap omnipredictor for the class of convex and Lipschitz functions.
arXiv Detail & Related papers (2025-05-27T08:29:35Z)
- Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias [21.64772960240025]
We show that as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error increases. A key technical challenge we address is the lack of a one-step contraction property in the $W_{2,\ell_\infty}$ metric used to measure convergence.
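The algorithm studied here, the unadjusted Langevin algorithm (ULA), is simple to state; the sketch below shows one iteration for sampling from a target density proportional to $e^{-U(x)}$. The step size and the toy Gaussian target are illustrative choices, not the paper's setting.

```python
import math
import random

def ula_step(x, grad_U, h, rng=random):
    """One ULA iteration: x' = x - h * grad U(x) + sqrt(2h) * xi, xi ~ N(0, I)."""
    g = grad_U(x)
    return [xi - h * gi + math.sqrt(2.0 * h) * rng.gauss(0.0, 1.0)
            for xi, gi in zip(x, g)]

# toy run: standard Gaussian target, U(x) = ||x||^2 / 2, so grad U(x) = x
random.seed(0)
x = [0.0]
samples = []
for k in range(20000):
    x = ula_step(x, lambda v: v, 0.1)
    if k >= 1000:  # discard burn-in
        samples.append(x[0])
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean is near 0; var is near 1, up to the O(h) discretization bias
# whose dimension dependence is the subject of the paper above
```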
arXiv Detail & Related papers (2024-08-20T01:24:54Z) - Oblivious Stochastic Composite Optimization [47.48197617884748]
We show that our algorithms converge without any prior knowledge of the problem parameters: all three work without knowing the diameter of the feasible set, the Lipschitz constant, or the smoothness of the objective function. We extend our framework to the relative scale setting and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Contextual Combinatorial Bandits with Probabilistically Triggered Arms [55.9237004478033]
We study contextual combinatorial bandits with probabilistically triggered arms (C$^2$MAB-T) under a variety of smoothness conditions.
Under the triggering probability modulated (TPM) condition, we devise the C$^2$-UCB-T algorithm and derive a regret bound of $\tilde{O}(d\sqrt{T})$.
arXiv Detail & Related papers (2023-03-30T02:51:00Z) - Variance-aware robust reinforcement learning with linear function
approximation with heavy-tailed rewards [6.932056534450556]
We present two algorithms, AdaOFUL and VARA, for online sequential decision-making in the presence of heavy-tailed rewards.
AdaOFUL achieves a state-of-the-art regret bound, and VARA achieves a tighter variance-aware regret bound of $\widetilde{\mathcal{O}}(d\sqrt{H\mathcal{G}^*K})$.
arXiv Detail & Related papers (2023-03-09T22:16:28Z) - The Scope of Multicalibration: Characterizing Multicalibration via
Property Elicitation [12.197909808303411]
We show that it is possible to produce a multicalibrated predictor for a continuous scalar distributional property $\Gamma$ if and only if $\Gamma$ is elicitable.
On the negative side, we show that for non-elicitable continuous properties there exist simple data distributions on which even the true distributional predictor is not calibrated.
arXiv Detail & Related papers (2023-02-16T18:59:16Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound of $\Omega(\sqrt{n/p})$ for the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Fuzzy Clustering with Similarity Queries [56.96625809888241]
The fuzzy (soft) clustering objective is a popular generalization of the well-known $k$-means problem.
We show that by making a small number of similarity queries, the problem becomes easier to solve.
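For reference, the fuzzy objective replaces hard cluster assignments with membership weights; a minimal sketch of evaluating it is below. The fuzzifier exponent $m=2$ and the function names are standard illustrative choices, and the exact variant studied in the paper may differ.

```python
def fuzzy_kmeans_objective(points, centers, memberships, m=2.0):
    """Soft k-means objective: sum_i sum_j u_ij^m * ||x_i - c_j||^2,
    where memberships[i] is a distribution over the centers for point i."""
    total = 0.0
    for i, x in enumerate(points):
        for j, c in enumerate(centers):
            d2 = sum((xa - ca) ** 2 for xa, ca in zip(x, c))
            total += memberships[i][j] ** m * d2
    return total

# with one-hot memberships this reduces to the ordinary k-means cost
points = [[0.0], [2.0]]
centers = [[0.0], [2.0]]
print(fuzzy_kmeans_objective(points, centers, [[1.0, 0.0], [0.0, 1.0]]))  # 0.0
```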
arXiv Detail & Related papers (2021-06-04T02:32:26Z)
- Conditional Uncorrelation and Efficient Non-approximate Subset Selection in Sparse Regression [72.84177488527398]
We consider sparse regression from the viewpoint of correlation, and propose a formula for conditional uncorrelation.
By the proposed method, the computational complexity is reduced from $O(\frac{1}{6}k^3+mk^2+mkd)$ to $O(\frac{1}{6}k^3+\frac{1}{2}mk^2)$ for each candidate subset in sparse regression.
arXiv Detail & Related papers (2020-09-08T20:32:26Z)
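To get a feel for the complexity reduction quoted above, the sketch below plugs illustrative problem sizes into the two leading-order operation counts; the specific values of $k$, $m$, $d$ are arbitrary assumptions for demonstration.

```python
def naive_cost(k, m, d):
    """Leading terms of the original per-subset cost: O(k^3/6 + m k^2 + m k d)."""
    return k**3 / 6 + m * k**2 + m * k * d

def conditional_uncorrelation_cost(k, m):
    """Leading terms of the reduced per-subset cost: O(k^3/6 + m k^2 / 2)."""
    return k**3 / 6 + m * k**2 / 2

# e.g. subset size k=10, m=1000 samples, d=500 features: the m*k*d term
# dominates the naive count, so eliminating it yields a large speedup
k, m, d = 10, 1000, 500
print(naive_cost(k, m, d) / conditional_uncorrelation_cost(k, m))
```

The point of the comparison is that the reduced count no longer depends on the ambient dimension $d$, which is what makes exhaustive subset evaluation tractable.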
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.