Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set
- URL: http://arxiv.org/abs/2501.15634v1
- Date: Sun, 26 Jan 2025 18:39:54 GMT
- Title: Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set
- Authors: Gordon Dai, Pavan Ravishankar, Rachel Yuan, Daniel B. Neill, Emily Black
- Abstract summary: We study the properties of sets of equally accurate models, or Rashomon sets, in general.
Our contributions include methods for efficiently sampling models from this set.
We also derive the probability that an individual's prediction will be flipped within the Rashomon set.
- Score: 9.440660971920648
- Abstract: When selecting a model from a set of equally performant models, how much unfairness can you really reduce? Is it important to be intentional about fairness when choosing among this set, or is arbitrarily choosing among the set of "good" models good enough? Recent work has highlighted that the phenomenon of model multiplicity, where multiple models with nearly identical predictive accuracy exist for the same task, has both positive and negative implications for fairness, from strengthening the enforcement of civil rights law in AI systems to showcasing arbitrariness in AI decision-making. Despite the enormous implications of model multiplicity, there is little work that explores the properties of sets of equally accurate models, or Rashomon sets, in general. In this paper, we present five main theoretical and methodological contributions that help us understand the relatively unexplored properties of the Rashomon set, in particular with regard to fairness. Our contributions include methods for efficiently sampling models from this set and techniques for identifying the fairest models according to key fairness metrics such as statistical parity. We also derive the probability that an individual's prediction will be flipped within the Rashomon set, as well as expressions for the set's size and the distribution of the error tolerance used across models. These results lead to policy-relevant takeaways, such as the importance of intentionally searching for fair models within the Rashomon set, and of understanding which individuals or groups may be more susceptible to arbitrary decisions.
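The contributions described in the abstract can be made concrete with a toy sketch. The following is a minimal, hypothetical illustration, not the paper's actual method: using made-up scores, labels, and group memberships, it enumerates an empirical Rashomon set of simple threshold classifiers, measures the range of statistical parity gaps within it, and estimates each individual's flip probability as disagreement across the set.

```python
# A minimal, self-contained sketch of the Rashomon-set ideas above, using
# made-up data and simple threshold classifiers (NOT the paper's method).

def accuracy(preds, y):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def stat_parity_gap(preds, group):
    # |P(pred = 1 | group = 0) - P(pred = 1 | group = 1)|
    g0 = [p for p, g in zip(preds, group) if g == 0]
    g1 = [p for p, g in zip(preds, group) if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

# Toy data: score x, binary label y, protected group g (all hypothetical).
X = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95]
y = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
g = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

# Candidate models: predict 1 when the score clears a threshold.
models = {i / 20: [1 if x >= i / 20 else 0 for x in X] for i in range(21)}
best = max(accuracy(p, y) for p in models.values())

# Empirical Rashomon set: models within error tolerance eps of the best.
eps = 0.1
rashomon = {t: p for t, p in models.items() if accuracy(p, y) >= best - eps}

# Fairness varies across these equally good models.
gaps = {t: stat_parity_gap(p, g) for t, p in rashomon.items()}

# Flip probability: fraction of Rashomon models that disagree with the
# majority vote over the set, for each individual.
n = len(rashomon)
flip = [min(sum(p[i] for p in rashomon.values()),
            n - sum(p[i] for p in rashomon.values())) / n
        for i in range(len(X))]
```

On this toy data the Rashomon set contains ten thresholds; the statistical parity gap ranges from 0.0 to 0.2 across them, and the two most accurate models both sit at 0.2, so choosing arbitrarily can mean choosing unfairly. Individuals with scores near the decision boundary have flip probabilities up to 0.4, while individuals far from it are never flipped.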
Related papers
- Fairness and Sparsity within Rashomon sets: Enumeration-Free Exploration and Characterization [4.554831326324025]
We introduce an enumeration-free method based on mathematical programming to characterize various properties such as fairness or sparsity.
We apply our approach to two hypothesis classes: scoring systems and decision diagrams.
arXiv Detail & Related papers (2025-02-07T19:43:34Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models being deployed in high-stakes applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z)
- Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set [50.67431815647126]
Post-hoc global/local feature attribution methods are increasingly employed to understand machine learning models.
We show that partial orders of local/global feature importance arise from this methodology.
We show that every relation among features present in these partial orders also holds in the rankings provided by existing approaches.
arXiv Detail & Related papers (2021-10-26T02:53:14Z)
- fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation [3.548416925804316]
This article introduces an R package fairmodels that helps to validate fairness and eliminate bias in classification models.
The implemented set of functions and fairness metrics enables model fairness validation from different perspectives.
The package includes a series of methods for bias mitigation that aim to diminish the discrimination in the model.
arXiv Detail & Related papers (2021-04-01T15:06:13Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Towards Threshold Invariant Fair Classification [10.317169065327546]
This paper introduces the notion of threshold invariant fairness, which enforces equitable performance across different groups independent of the decision threshold.
Experimental results demonstrate that the proposed methodology is effective in alleviating the threshold sensitivity of machine learning models designed to achieve fairness.
arXiv Detail & Related papers (2020-06-18T16:49:46Z)
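To make the threshold-sensitivity issue behind that last entry concrete, here is a small hypothetical sketch (the scores and groups are made up, not taken from the paper): a fixed scoring model can satisfy demographic parity at one decision threshold yet violate it at another, which is the behavior threshold invariant fairness is meant to rule out.

```python
# Hypothetical illustration of threshold sensitivity: the same scores can
# look fair at one decision threshold and unfair at another.

scores_a = [0.2, 0.4, 0.5, 0.7, 0.9]   # group A scores (made up)
scores_b = [0.1, 0.3, 0.6, 0.8, 0.95]  # group B scores (made up)

def positive_rate(scores, t):
    # Fraction of the group receiving a positive decision at threshold t.
    return sum(s >= t for s in scores) / len(scores)

# Demographic parity gap at several thresholds.
gaps = {t: abs(positive_rate(scores_a, t) - positive_rate(scores_b, t))
        for t in (0.25, 0.5, 0.75)}
# Equal positive rates at thresholds 0.25 and 0.5, but a 0.2 gap at 0.75.
```

Here fairness audited at thresholds 0.25 or 0.5 would pass, while deploying the same scores at threshold 0.75 produces a 20-point disparity in positive rates between the groups.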
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.