PAC-Bayesian Generalization Guarantees for Fairness on Stochastic and Deterministic Classifiers
- URL: http://arxiv.org/abs/2602.11722v1
- Date: Thu, 12 Feb 2026 08:49:34 GMT
- Title: PAC-Bayesian Generalization Guarantees for Fairness on Stochastic and Deterministic Classifiers
- Authors: Julien Bastian, Benjamin Leblanc, Pascal Germain, Amaury Habrard, Christine Largeron, Guillaume Metzler, Emilie Morvant, Paul Viallard
- Abstract summary: We propose a PAC-Bayesian framework for deriving generalization bounds for fairness. Our framework has two advantages: (i) it applies to a broad class of fairness measures that can be expressed as a risk discrepancy, and (ii) it leads to a self-bounding algorithm.
- Score: 8.438034474012044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classical PAC generalization bounds on the prediction risk of a classifier are insufficient to provide theoretical guarantees on fairness when the goal is to learn models balancing predictive risk and fairness constraints. We propose a PAC-Bayesian framework for deriving generalization bounds for fairness, covering both stochastic and deterministic classifiers. For stochastic classifiers, we derive a fairness bound using standard PAC-Bayes techniques. For deterministic classifiers, where usual PAC-Bayes arguments do not apply directly, we leverage a recent advance in PAC-Bayes theory to extend the fairness bound beyond the stochastic setting. Our framework has two advantages: (i) it applies to a broad class of fairness measures that can be expressed as a risk discrepancy, and (ii) it leads to a self-bounding algorithm in which the learning procedure directly optimizes a trade-off between generalization bounds on the prediction risk and on the fairness measure. We empirically evaluate our framework with three classical fairness measures, demonstrating not only its usefulness but also the tightness of our bounds.
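To make the "fairness as a risk discrepancy" view concrete, the sketch below computes a demographic parity gap and a classical McAllester-style PAC-Bayes bound. The function names are hypothetical, and the McAllester bound stands in for the paper's actual bounds, whose exact form differs; a self-bounding learner would jointly minimize one such bound on the risk and one on the fairness gap.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Fairness as a risk discrepancy: absolute gap between the
    positive-prediction rates of the two sensitive groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def pac_bayes_bound(empirical_value, kl, n, delta=0.05):
    """Classical McAllester-style PAC-Bayes bound: with probability at
    least 1 - delta over an n-sample, the true value of the bounded
    quantity lies below this; kl is KL(posterior || prior)."""
    return empirical_value + np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))
```

A self-bounding objective in this spirit would be a weighted sum of `pac_bayes_bound` applied to the empirical risk and to `demographic_parity_gap`.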
Related papers
- Conformal Bandits: Bringing statistical validity and reward efficiency to the small-gap regime [0.39082875522676397]
We introduce Conformal Bandits, a novel framework integrating Conformal Prediction into bandit problems. We bridge the regret-minimising potential of a decision-making bandit policy with statistical guarantees in the form of finite-time prediction coverage. Motivated by this, we showcase our framework's practical advantage in terms of regret in small-gap settings.
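For readers unfamiliar with conformal prediction, the following minimal sketch shows the vanilla split conformal construction that frameworks like this build on. It is a generic illustration with hypothetical names, not the Conformal Bandits procedure itself.

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_hat_new, alpha=0.1):
    """Vanilla split conformal prediction: calibrate on held-out absolute
    residuals |y - y_hat|, then return a (1 - alpha) prediction interval
    around a new point estimate."""
    r = np.sort(np.abs(np.asarray(cal_residuals, dtype=float)))
    n = len(r)
    # finite-sample corrected rank: ceil((n + 1) * (1 - alpha))
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = r[k - 1]
    return y_hat_new - q, y_hat_new + q
```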
arXiv Detail & Related papers (2025-12-10T17:34:55Z)
- A Framework for Bounding Deterministic Risk with PAC-Bayes: Applications to Majority Votes [4.664367264604233]
PAC-Bayes is a popular framework for obtaining generalization guarantees in uncountable hypothesis spaces. We propose a unified framework to extract guarantees holding for a single hypothesis from PAC-Bayesian guarantees.
arXiv Detail & Related papers (2025-10-29T14:38:35Z)
- Set to Be Fair: Demographic Parity Constraints for Set-Valued Classification [5.085064777896467]
We address the problem of set-valued classification under demographic parity and expected size constraints. We propose two complementary strategies: an oracle-based method that minimizes classification risk while satisfying both constraints, and a computationally efficient proxy that prioritizes constraint satisfaction.
arXiv Detail & Related papers (2025-10-06T15:36:45Z)
- Optimal Conformal Prediction under Epistemic Uncertainty [61.46247583794497]
Conformal prediction (CP) is a popular framework for representing uncertainty. We introduce Bernoulli prediction sets (BPS), which produce the smallest prediction sets that ensure conditional coverage. When given first-order predictions, BPS reduces to the well-known adaptive prediction sets (APS).
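The APS construction that BPS reduces to can be sketched as follows. This is a simplified illustration (it omits the randomization at the boundary class used in the full APS procedure) with hypothetical names.

```python
import numpy as np

def adaptive_prediction_set(probs, tau):
    """APS-style prediction set: add classes in decreasing probability
    order until their cumulative mass reaches the target level tau."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]          # classes, most probable first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, tau)) + 1   # smallest prefix covering tau
    return set(order[:k].tolist())
```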
arXiv Detail & Related papers (2025-05-25T08:32:44Z)
- A Generic Framework for Conformal Fairness [19.694445748424346]
We formalize Conformal Fairness, a notion of fairness using conformal predictors. We provide a theoretically well-founded algorithm and associated framework to control for the gaps in coverage between different sensitive groups.
arXiv Detail & Related papers (2025-05-22T01:41:12Z)
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
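To illustrate the principle behind likelihood-ratio confidence sets, here is a fixed-sample toy version for a Bernoulli mean: keep every candidate parameter whose likelihood ratio against the maximizer stays above the level alpha. The paper builds anytime-valid sequences of such sets, which this simplified sketch does not attempt.

```python
import numpy as np

def bernoulli_lr_confidence_set(xs, alpha=0.05):
    """Fixed-sample likelihood-ratio confidence set for a Bernoulli mean:
    retain every p on a grid whose log-likelihood is within log(alpha) of
    the best grid point, and return the interval's endpoints."""
    xs = np.asarray(xs, dtype=float)
    n, s = len(xs), xs.sum()
    grid = np.linspace(1e-3, 1 - 1e-3, 999)
    loglik = s * np.log(grid) + (n - s) * np.log(1 - grid)
    keep = grid[loglik - loglik.max() >= np.log(alpha)]
    return keep.min(), keep.max()
```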
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
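The two EOC properties above can be checked empirically by comparing per-group and overall interval coverage, as in this hypothetical sketch (plain NumPy, illustrative names).

```python
import numpy as np

def coverage_by_group(y, lo, hi, group):
    """Empirical interval coverage, overall and per group: the two
    quantities Equal Opportunity of Coverage asks to balance."""
    y, lo, hi, group = map(np.asarray, (y, lo, hi, group))
    covered = (lo <= y) & (y <= hi)
    per_group = {g: float(covered[group == g].mean()) for g in np.unique(group)}
    return float(covered.mean()), per_group
```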
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- Auditing Predictive Models for Intersectional Biases [1.9346186297861747]
Conditional Bias Scan (CBS) is a flexible auditing framework for detecting intersectional biases in classification models.
CBS identifies the subgroup for which there is the most significant bias against the protected class, as compared to the equivalent subgroup in the non-protected class.
We show that this methodology can detect previously unidentified intersectional and contextual biases in the COMPAS pre-trial risk assessment tool.
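A toy, exhaustive analogue of the subgroup comparison CBS performs: for each value of a single categorical feature, compare positive-prediction rates between protected and non-protected members and report the largest gap. The actual method scans a much richer family of intersectional subgroups with a significance test; this sketch (hypothetical names) assumes both classes appear in every subgroup.

```python
import numpy as np

def worst_subgroup_gap(y_pred, protected, feature):
    """For each subgroup defined by one categorical feature, compute the
    gap in positive-prediction rates (protected minus non-protected) and
    return the subgroup with the largest absolute gap."""
    y_pred = np.asarray(y_pred, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    feature = np.asarray(feature)
    gaps = {}
    for v in np.unique(feature):
        m = feature == v
        gaps[v] = y_pred[m & protected].mean() - y_pred[m & ~protected].mean()
    worst = max(gaps, key=lambda v: abs(gaps[v]))
    return worst, gaps[worst]
```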
arXiv Detail & Related papers (2023-06-22T17:32:12Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Selective Classification via One-Sided Prediction [54.05407231648068]
A one-sided prediction (OSP) based relaxation yields a selective classification (SC) scheme that attains near-optimal coverage in the practically relevant high target accuracy regime.
We theoretically derive generalization bounds for SC and OSP, and empirically show that our scheme strongly outperforms state-of-the-art methods in coverage at small error levels.
arXiv Detail & Related papers (2020-10-15T16:14:27Z)
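A minimal illustration of selective classification and its coverage/selective-risk trade-off: predict only when the top class probability clears a confidence threshold, abstain otherwise. This is a generic thresholding baseline with hypothetical names, not the OSP scheme itself.

```python
import numpy as np

def selective_predict(probs, threshold):
    """Predict argmax only when the top class probability clears the
    threshold; otherwise abstain (encoded as -1)."""
    probs = np.asarray(probs, dtype=float)
    preds = probs.argmax(axis=1)
    preds[probs.max(axis=1) < threshold] = -1
    return preds

def coverage_and_risk(preds, y):
    """Coverage = fraction of accepted points; selective risk = error rate
    among the accepted points (0 if nothing is accepted)."""
    accept = preds != -1
    coverage = float(accept.mean())
    risk = float((preds[accept] != np.asarray(y)[accept]).mean()) if accept.any() else 0.0
    return coverage, risk
```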
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.