Certifying Fairness of Probabilistic Circuits
- URL: http://arxiv.org/abs/2212.02474v1
- Date: Mon, 5 Dec 2022 18:36:45 GMT
- Title: Certifying Fairness of Probabilistic Circuits
- Authors: Nikil Roashan Selvam, Guy Van den Broeck, YooJung Choi
- Abstract summary: We propose an algorithm to search for discrimination patterns in a general class of probabilistic models, namely probabilistic circuits.
We also introduce new classes of patterns such as minimal, maximal, and Pareto optimal patterns that can effectively summarize exponentially many discrimination patterns.
- Score: 33.1089249944851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increased use of machine learning systems for decision making,
questions about the fairness properties of such systems start to take center
stage. Most existing work on algorithmic fairness assumes complete observation
of features at prediction time, as is the case for popular notions like
statistical parity and equal opportunity. However, this is not sufficient for
models that can make predictions with partial observations, as we could miss
patterns of bias and incorrectly certify a model to be fair. To address this, a
recently introduced notion of fairness asks whether the model exhibits any
discrimination pattern, in which an individual characterized by (partial)
feature observations receives vastly different decisions merely by disclosing
one or more sensitive attributes such as gender and race. By explicitly
accounting for partial observations, this provides a much more fine-grained
notion of fairness.
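To make the notion concrete, here is a minimal Python sketch of the discrimination-pattern check described above. The `p_decision(evidence)` interface is an assumption for illustration: it stands for any routine returning P(decision | evidence), a marginal query that probabilistic circuits answer tractably.

```python
def discrimination_score(p_decision, x, y):
    """Degree of discrimination of a partial observation (x, y): the
    shift in decision probability caused by disclosing the sensitive
    attributes x on top of the other observed features y.
    `p_decision(evidence)` is an assumed helper returning
    P(decision | evidence) for a dict of observed feature values."""
    return p_decision({**x, **y}) - p_decision(y)


def is_discrimination_pattern(p_decision, x, y, delta=0.1):
    # (x, y) is a discrimination pattern if disclosing x shifts the
    # decision probability by more than the fairness threshold delta.
    return abs(discrimination_score(p_decision, x, y)) > delta
```

For example, `is_discrimination_pattern(p, {"gender": "F"}, {"education": "PhD"})` would flag an individual whose predicted decision changes substantially once gender is disclosed on top of the observed education level.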
In this paper, we propose an algorithm to search for discrimination patterns
in a general class of probabilistic models, namely probabilistic circuits.
Previously, such algorithms were limited to naive Bayes classifiers, which make
strong independence assumptions; by contrast, probabilistic circuits provide a
unifying framework for a wide range of tractable probabilistic models and can
even be compiled from certain classes of Bayesian networks and probabilistic
programs, making our method much more broadly applicable. Furthermore, for an
unfair model, it may be useful to quickly find discrimination patterns and
distill them for better interpretability. As such, we also propose a
sampling-based approach to more efficiently mine discrimination patterns, and
introduce new classes of patterns such as minimal, maximal, and Pareto optimal
patterns that can effectively summarize exponentially many discrimination
patterns.
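The space of candidate patterns grows exponentially with the number of features, which a brute-force search makes explicit. The sketch below, using the same assumed `p_decision` interface as above, is only an illustration of the problem size, not the paper's pruned or sampling-based algorithm; for simplicity it instantiates all sensitive attributes at once, whereas a pattern may disclose any subset of them.

```python
from itertools import combinations, product


def find_patterns_naive(p_decision, sensitive, features, values, delta=0.1):
    """Exhaustive search for discrimination patterns: enumerate every
    partial assignment y to the non-sensitive features and every
    assignment x to the sensitive attributes, keeping the pairs whose
    probability shift exceeds delta. `values` maps each feature name
    to its possible values. The loops are exponential in the number of
    features, which is exactly what motivates a smarter search."""
    patterns = []
    for r in range(len(features) + 1):
        for subset in combinations(features, r):
            for y_vals in product(*(values[f] for f in subset)):
                y = dict(zip(subset, y_vals))
                for x_vals in product(*(values[s] for s in sensitive)):
                    x = dict(zip(sensitive, x_vals))
                    gap = p_decision({**x, **y}) - p_decision(y)
                    if abs(gap) > delta:
                        patterns.append((x, y, gap))
    return patterns
```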
Related papers
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z) - Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can be readily found in real life and that it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z) - Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, in which some prediction tasks yield identifiability while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie on the same subspace with small dimension.
Empirical evidence shows that the proposed algorithm clearly surpasses the state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph, with a trade-off between group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z) - Addressing Fairness in Classification with a Model-Agnostic
Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this approach to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations; a generic sketch of this regularization pattern follows this list.
arXiv Detail & Related papers (2020-09-09T17:40:24Z)