Low-Degree Multicalibration
- URL: http://arxiv.org/abs/2203.01255v1
- Date: Wed, 2 Mar 2022 17:24:55 GMT
- Title: Low-Degree Multicalibration
- Authors: Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao
- Abstract summary: Low-Degree Multicalibration defines a hierarchy of increasingly-powerful multi-group fairness notions.
We show that low-degree multicalibration can be significantly more efficient than full multicalibration.
Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees.
- Score: 16.99099840073075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introduced as a notion of algorithmic fairness, multicalibration has proved
to be a powerful and versatile concept with implications far beyond its
original intent. This stringent notion -- that predictions be well-calibrated
across a rich class of intersecting subpopulations -- provides its strong
guarantees at a cost: the computational and sample complexity of learning
multicalibrated predictors are high, and grow exponentially with the number of
class labels. In contrast, the relaxed notion of multiaccuracy can be achieved
more efficiently, yet many of the most desirable properties of multicalibration
cannot be guaranteed assuming multiaccuracy alone. This tension raises a key
question: Can we learn predictors with multicalibration-style guarantees at a
cost commensurate with multiaccuracy?
In this work, we define and initiate the study of Low-Degree
Multicalibration. Low-Degree Multicalibration defines a hierarchy of
increasingly-powerful multi-group fairness notions that spans multiaccuracy and
the original formulation of multicalibration at the extremes. Our main
technical contribution demonstrates that key properties of multicalibration,
related to fairness and accuracy, actually manifest as low-degree properties.
Importantly, we show that low-degree multicalibration can be significantly more
efficient than full multicalibration. In the multi-class setting, the sample
complexity to achieve low-degree multicalibration improves exponentially (in
the number of classes) over full multicalibration. Our work presents compelling
evidence that low-degree multicalibration represents a sweet spot, pairing
computational and sample efficiency with strong fairness and accuracy
guarantees.
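For intuition, the contrast the abstract draws can be phrased as three increasingly demanding audits of a binary predictor p on a single subpopulation: multiaccuracy correlates the residual y - p(x) with group membership alone, degree-k multicalibration additionally weights the residual by low-degree powers of the prediction, and full multicalibration conditions on (a discretization of) the predicted value itself. The following numpy sketch is purely illustrative: the function names, the monomial weights, and the binning are our own assumptions, not the paper's construction.

    # Hedged, illustrative sketch (not the paper's construction): three audits of a
    # binary predictor p on one subpopulation, ordered by strength.
    import numpy as np

    def multiaccuracy_violation(p, y, group):
        # Weight the residual y - p(x) by group membership only.
        g = group.astype(float)
        return abs(np.mean(g * (y - p)))

    def low_degree_violation(p, y, group, degree):
        # Additionally weight the residual by p(x)**j for j = 0..degree
        # (monomials stand in for the low-degree weight functions).
        g = group.astype(float)
        return max(abs(np.mean(g * p**j * (y - p))) for j in range(degree + 1))

    def full_multicalibration_violation(p, y, group, bins=10):
        # Condition on a discretization of the predicted value within the group.
        idx = np.minimum((p * bins).astype(int), bins - 1)
        worst = 0.0
        for b in range(bins):
            mask = group & (idx == b)
            if mask.any():
                worst = max(worst, abs(np.mean(y[mask] - p[mask])) * mask.mean())
        return worst

    # Toy usage on synthetic data: p is miscalibrated relative to the true outcome.
    rng = np.random.default_rng(0)
    x = rng.uniform(size=5000)
    p, y = x, rng.binomial(1, x**2)
    group = x > 0.5
    print(multiaccuracy_violation(p, y, group))
    print(low_degree_violation(p, y, group, degree=2))
    print(full_multicalibration_violation(p, y, group))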
Related papers
- Dynamic Correlation Learning and Regularization for Multi-Label Confidence Calibration [60.95748658638956]
This paper introduces the Multi-Label Confidence task, aiming to provide well-calibrated confidence scores in multi-label scenarios.
Existing single-label calibration methods fail to account for category correlations, which are crucial for addressing semantic confusion.
We propose the Dynamic Correlation Learning and Regularization algorithm, which leverages multi-grained semantic correlations to better model semantic confusion.
arXiv Detail & Related papers (2024-07-09T13:26:21Z)
- When is Multicalibration Post-Processing Necessary? [12.628103786954487]
Multicalibration is a property of predictors which guarantees meaningful uncertainty estimates.
We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing.
We distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing.
arXiv Detail & Related papers (2024-06-10T17:26:39Z)
- On Computationally Efficient Multi-Class Calibration [9.032290717007065]
Projected smooth calibration gives strong guarantees for all downstream decision makers.
It ensures that, for any set of labels $T$, the probability obtained by summing the predicted probabilities of the labels in $T$ is close to that of some perfectly calibrated binary predictor for the event that the label lies in $T$.
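As a rough illustration of this projection idea (a minimal sketch under our own assumptions, not the paper's algorithm): summing the predicted class probabilities over a label subset $T$ yields a binary predictor for the event that the label lies in $T$, whose calibration can then be measured directly.

    # Hedged sketch of the projection idea: names and the binning are assumptions.
    import numpy as np

    def projected_calibration_error(probs, labels, T, bins=10):
        # probs: (n, k) predicted class probabilities; labels: (n,) true class indices.
        p_T = probs[:, T].sum(axis=1)            # predicted Pr[label in T]
        y_T = np.isin(labels, T).astype(float)   # realized indicator of the event
        idx = np.minimum((p_T * bins).astype(int), bins - 1)
        err = 0.0
        for b in range(bins):
            mask = idx == b
            if mask.any():
                err += abs(np.mean(y_T[mask] - p_T[mask])) * mask.mean()
        return err

    # e.g. projected_calibration_error(probs, labels, T=[0, 2])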
arXiv Detail & Related papers (2024-02-12T17:25:23Z)
- Calibrating Multimodal Learning [94.65232214643436]
We propose a novel regularization technique, i.e., Calibrating Multimodal Learning (CML) regularization, to calibrate the predictive confidence of previous methods.
This technique can be flexibly incorporated into existing models and improves performance in terms of confidence calibration, classification accuracy, and model robustness.
arXiv Detail & Related papers (2023-06-02T04:29:57Z)
- Certifying Ensembles: A General Certification Theory with S-Lipschitzness [128.2881318211724]
Ensembling has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift.
In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles.
arXiv Detail & Related papers (2023-04-25T17:50:45Z)
- Multi-Head Multi-Loss Model Calibration [13.841172927454204]
We introduce a form of simplified ensembling that bypasses the costly training and inference of deep ensembles.
Specifically, each head is trained to minimize a weighted Cross-Entropy loss, but the weights are different among the different branches.
We show that the resulting averaged predictions can achieve excellent calibration without sacrificing accuracy on two challenging datasets.
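As a toy illustration of the per-head weighting idea (our own numpy sketch; the number of heads, the weight vectors, and all names are illustrative assumptions rather than the paper's settings), each head is scored with a differently class-weighted cross-entropy and the final prediction averages the heads' softmax outputs:

    # Hedged numpy sketch: several heads on shared features, each scored with a
    # differently class-weighted cross-entropy; predictions are averaged.
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def weighted_ce(probs, labels, class_weights):
        # Each example's log-loss is scaled by the weight of its true class.
        n = len(labels)
        return np.mean(class_weights[labels] * -np.log(probs[np.arange(n), labels] + 1e-12))

    rng = np.random.default_rng(0)
    features = rng.normal(size=(32, 8))                   # shared representation (illustrative)
    labels = rng.integers(0, 4, size=32)
    heads = [rng.normal(size=(8, 4)) for _ in range(3)]   # three linear heads
    head_weights = [np.array([1.0, 1.0, 1.0, 1.0]),       # per-head class weights
                    np.array([2.0, 1.0, 1.0, 0.5]),
                    np.array([0.5, 1.0, 1.0, 2.0])]

    per_head_probs = [softmax(features @ W) for W in heads]
    losses = [weighted_ce(q, labels, w) for q, w in zip(per_head_probs, head_weights)]
    averaged_prediction = np.mean(per_head_probs, axis=0)  # the ensembled, averaged output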
arXiv Detail & Related papers (2023-03-02T09:32:32Z)
- A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning [63.20009081099896]
We provide a unifying framework for the design and analysis of multicalibrated predictors.
We exploit connections to game dynamics to achieve state-of-the-art guarantees for a diverse set of multicalibration learning problems.
arXiv Detail & Related papers (2023-02-21T18:24:17Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
- Sample Complexity of Uniform Convergence for Multicalibration [43.10452387619829]
We address the multicalibration error and decouple it from the prediction error.
Our work gives sample complexity bounds for uniform convergence guarantees of multicalibration error.
arXiv Detail & Related papers (2020-05-04T18:01:38Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)