Low-Degree Multicalibration
- URL: http://arxiv.org/abs/2203.01255v1
- Date: Wed, 2 Mar 2022 17:24:55 GMT
- Title: Low-Degree Multicalibration
- Authors: Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao
- Abstract summary: Low-Degree Multicalibration defines a hierarchy of increasingly powerful multi-group fairness notions.
We show that low-degree multicalibration can be significantly more efficient than full multicalibration.
Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees.
- Score: 16.99099840073075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introduced as a notion of algorithmic fairness, multicalibration has proved
to be a powerful and versatile concept with implications far beyond its
original intent. This stringent notion -- that predictions be well-calibrated
across a rich class of intersecting subpopulations -- provides its strong
guarantees at a cost: the computational and sample complexities of learning
multicalibrated predictors are high, and grow exponentially with the number of
class labels. In contrast, the relaxed notion of multiaccuracy can be achieved
more efficiently, yet many of the most desirable properties of multicalibration
cannot be guaranteed assuming multiaccuracy alone. This tension raises a key
question: Can we learn predictors with multicalibration-style guarantees at a
cost commensurate with multiaccuracy?
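For concreteness, one standard way to write the two notions in the binary-label case (the notation here is ours, not quoted from the paper) fixes a predictor $f$, a class $\mathcal{C}$ of subpopulation tests $c:\mathcal{X}\to\{0,1\}$, and a tolerance $\alpha$:

```latex
% Multiaccuracy: prediction residuals are uncorrelated with every test in C.
\bigl|\,\mathbb{E}\bigl[c(x)\,(y - f(x))\bigr]\,\bigr| \le \alpha
  \qquad \forall\, c \in \mathcal{C}

% Multicalibration: the same bound holds conditioned on each predicted value,
% so over- and under-estimates cannot cancel across the level sets of f.
\bigl|\,\mathbb{E}\bigl[c(x)\,(y - f(x)) \,\big|\, f(x) = v\bigr]\,\bigr| \le \alpha
  \qquad \forall\, c \in \mathcal{C},\; \forall\, v \in \operatorname{range}(f)
```

Low-degree multicalibration, introduced next, interpolates between these extremes by replacing the hard conditioning on $f(x) = v$ with smooth, low-degree weight functions of the prediction.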
In this work, we define and initiate the study of Low-Degree
Multicalibration. Low-Degree Multicalibration defines a hierarchy of
increasingly powerful multi-group fairness notions that spans multiaccuracy and
the original formulation of multicalibration at the extremes. Our main
technical contribution demonstrates that key properties of multicalibration,
related to fairness and accuracy, actually manifest as low-degree properties.
Importantly, we show that low-degree multicalibration can be significantly more
efficient than full multicalibration. In the multi-class setting, the sample
complexity to achieve low-degree multicalibration improves exponentially (in
the number of classes) over full multicalibration. Our work presents compelling
evidence that low-degree multicalibration represents a sweet spot, pairing
computational and sample efficiency with strong fairness and accuracy
guarantees.
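As a rough empirical illustration of the hierarchy in the binary case, the check below weights the residual $y - f(x)$ by a group indicator times monomials of the prediction. The function name, the monomial weight family, and the degree indexing are illustrative assumptions on our part; the paper's formal definition works with general low-degree weight functions over the probability simplex.

```python
import numpy as np

def degree_k_violation(c_vals, f, y, k):
    """Largest violation of a degree-k multicalibration-style check over
    monomial weight functions w(v) = v**j for j = 0..k-1.

    c_vals : group membership indicator c(x), shape (n,)
    f      : predictions f(x) in [0, 1], shape (n,)
    y      : binary labels, shape (n,)

    j = 0 (constant weight) is a multiaccuracy-style check; larger j weighs
    the residual by finer functions of the predicted value, approaching full
    multicalibration as k grows. (Illustrative indexing only.)
    """
    residual = y - f
    return max(abs(np.mean(c_vals * f**j * residual)) for j in range(k))

# Toy predictor: multiaccurate on a group, yet miscalibrated on it, because
# its over- and under-estimates cancel on average within the group.
rng = np.random.default_rng(0)
n = 200_000
group = (rng.random(n) < 0.5).astype(float)   # c(x)
y = (rng.random(n) < 0.5).astype(float)       # labels with E[y] = 0.5
f = np.where((group == 1) & (rng.random(n) < 0.5), 0.2, 0.8)
for k in (1, 2, 4):
    print(k, degree_k_violation(group, f, y, k))
```

On this toy data the degree-1 (multiaccuracy-style) violation is near zero while the degree-2 violation is about 0.045: constant weights cannot detect miscalibration whose errors cancel across prediction values, but a single extra degree can.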
Related papers
- Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE [15.003006630308517]
Speculative decoding (SD) accelerates large language model inference by using a smaller draft model to predict multiple tokens.
We propose Jakiro, leveraging Mixture of Experts (MoE), where independent experts generate diverse predictions.
Our method significantly boosts prediction accuracy and achieves higher inference speedups.
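For readers unfamiliar with the mechanism this summary leans on, here is a minimal greedy sketch of generic speculative decoding (not Jakiro's MoE draft heads; `target_step` and `draft_step` are hypothetical stand-ins for single-token model calls):

```python
def speculative_decode(target_step, draft_step, prefix, n_new, k=4):
    """Greedy speculative decoding: the small draft model proposes k tokens,
    the large target model verifies them and keeps the agreed prefix."""
    tokens = list(prefix)
    while len(tokens) < len(prefix) + n_new:
        # 1. The cheap draft model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_step(tokens + draft))
        # 2. The target model checks each proposal; in practice all k checks
        #    run in one batched forward pass, which is where the speedup is.
        for t in draft:
            expected = target_step(tokens)
            if t == expected:
                tokens.append(t)          # draft accepted: a "free" token
            else:
                tokens.append(expected)   # diverged: take the target's token
                break
    return tokens[: len(prefix) + n_new]

# Toy stand-ins: the target counts upward; the draft usually agrees with it.
target = lambda toks: (toks[-1] + 1) % 100
draft = lambda toks: (toks[-1] + 1) % 100 if toks[-1] % 7 else 0
print(speculative_decode(target, draft, [1, 2, 3], n_new=10))
```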
arXiv Detail & Related papers (2025-02-10T09:24:06Z)
- Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
arXiv Detail & Related papers (2025-01-15T04:09:21Z)
- Dynamic Correlation Learning and Regularization for Multi-Label Confidence Calibration [60.95748658638956]
This paper introduces the Multi-Label Confidence Calibration task, aiming to provide well-calibrated confidence scores in multi-label scenarios.
Existing single-label calibration methods fail to account for category correlations, which are crucial for addressing semantic confusion.
We propose the Dynamic Correlation Learning and Regularization algorithm, which leverages multi-grained semantic correlations to better model semantic confusion.
arXiv Detail & Related papers (2024-07-09T13:26:21Z)
- When is Multicalibration Post-Processing Necessary? [12.628103786954487]
Multicalibration is a property of predictors which guarantees meaningful uncertainty estimates.
We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing.
We distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing.
arXiv Detail & Related papers (2024-06-10T17:26:39Z)
- Calibrating Multimodal Learning [94.65232214643436]
We propose a novel regularization technique, Calibrating Multimodal Learning (CML) regularization, to calibrate the predictive confidence of previous methods.
This technique can be flexibly incorporated into existing models and improves confidence calibration, classification accuracy, and model robustness.
arXiv Detail & Related papers (2023-06-02T04:29:57Z)
- Multi-Head Multi-Loss Model Calibration [13.841172927454204]
We introduce a form of simplified ensembling that bypasses the costly training and inference of deep ensembles.
Specifically, each head is trained to minimize a weighted cross-entropy loss, with the weights differing across branches.
We show that the resulting averaged predictions can achieve excellent calibration without sacrificing accuracy on two challenging datasets.
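A sketch of how such a multi-head, multi-loss setup could look (the head count, the random class weights, and the architecture are our illustrative assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadClassifier(nn.Module):
    """Shared backbone with several lightweight heads; each head minimizes
    its own class-weighted cross-entropy, and predictions are averaged."""

    def __init__(self, backbone, feat_dim, n_classes, n_heads=3):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in range(n_heads)
        )
        # A different per-class weighting for every head (here: random emphasis).
        self.register_buffer("class_weights", torch.rand(n_heads, n_classes) + 0.5)

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]

    def loss(self, logits_per_head, target):
        # Each branch minimizes a differently weighted cross-entropy loss.
        return sum(
            F.cross_entropy(logits, target, weight=w)
            for logits, w in zip(logits_per_head, self.class_weights)
        )

    def predict_proba(self, x):
        # Averaging the heads' probabilities gives the (better calibrated)
        # ensemble-style prediction from a single backbone pass.
        probs = [F.softmax(logits, dim=-1) for logits in self(x)]
        return torch.stack(probs).mean(dim=0)

# Usage sketch: model = MultiHeadClassifier(backbone, feat_dim=512, n_classes=10)
# train with model.loss(model(x), y); predict with model.predict_proba(x).
```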
arXiv Detail & Related papers (2023-03-02T09:32:32Z)
- A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning [63.20009081099896]
We provide a unifying framework for the design and analysis of multicalibrated predictors.
We exploit connections to game dynamics to achieve state-of-the-art guarantees for a diverse set of multicalibration learning problems.
arXiv Detail & Related papers (2023-02-21T18:24:17Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
- Sample Complexity of Uniform Convergence for Multicalibration [43.10452387619829]
We address the multicalibration error and decouple it from the prediction error.
Our work gives sample complexity bounds for uniform convergence guarantees of multicalibration error.
arXiv Detail & Related papers (2020-05-04T18:01:38Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables, and thereby obtain the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.