Diversity and Inclusion Metrics in Subset Selection
- URL: http://arxiv.org/abs/2002.03256v1
- Date: Sun, 9 Feb 2020 00:29:40 GMT
- Title: Diversity and Inclusion Metrics in Subset Selection
- Authors: Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben
Hutchinson, Alex Hanna, Timnit Gebru, Jamie Morgenstern
- Abstract summary: The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives.
We introduce metrics based on these concepts, which can be applied together, separately, and in tandem with additional fairness constraints.
Social choice methods can additionally be leveraged to aggregate and choose preferable sets, and we detail how these may be applied.
- Score: 17.79121536725958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ethical concept of fairness has recently been applied in machine learning
(ML) settings to describe a wide range of constraints and objectives. When
considering the relevance of ethical concepts to subset selection problems, the
concepts of diversity and inclusion are additionally applicable in order to
create outputs that account for social power and access differentials. We
introduce metrics based on these concepts, which can be applied together,
separately, and in tandem with additional fairness constraints. Results from
human subject experiments lend support to the proposed criteria. Social choice
methods can additionally be leveraged to aggregate and choose preferable sets,
and we detail how these may be applied.
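The diversity and inclusion notions named in the abstract can be illustrated with a minimal sketch. The definitions below are hypothetical simplifications for illustration only (diversity as closeness of the subset's attribute mix to a target distribution, inclusion as an individual's representation within the selected subset), not the metrics proposed in the paper:

```python
from collections import Counter

def diversity(subset, attribute, target_dist):
    # Illustrative diversity score: 1 minus the total variation distance
    # between the subset's attribute distribution and a target distribution.
    # (A hypothetical simplification, not the paper's exact metric.)
    n = len(subset)
    counts = Counter(item[attribute] for item in subset)
    tv = 0.5 * sum(abs(counts.get(g, 0) / n - p) for g, p in target_dist.items())
    return 1.0 - tv

def inclusion(individual, subset, attribute):
    # Illustrative inclusion score for one individual: the share of the
    # selected subset that shares the individual's attribute value.
    return sum(1 for item in subset if item[attribute] == individual[attribute]) / len(subset)

# Example: a selected subset of four candidates from two groups.
subset = [{"group": "A"}, {"group": "A"}, {"group": "B"}, {"group": "A"}]
target = {"A": 0.5, "B": 0.5}
print(diversity(subset, "group", target))          # 0.75
print(inclusion({"group": "B"}, subset, "group"))  # 0.25
```

Scores near 1 indicate a subset close to the target mix; a social choice rule could then aggregate such per-subset scores to choose among candidate subsets, as the abstract suggests.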
Related papers
- Social Choice for Heterogeneous Fairness in Recommendation [9.753088666705985]
Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders.
Previous work has often been limited by fixed, single-objective definitions of fairness.
Our work approaches recommendation fairness from the standpoint of computational social choice.
arXiv Detail & Related papers (2024-10-06T17:01:18Z)
- Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints [76.84999501420938]
We introduce a conceptual and computational framework for assessing how the choice of target affects individuals' outcomes.
We show that the level of multiplicity that stems from target variable choice can be greater than that stemming from nearly-optimal models of a single target.
arXiv Detail & Related papers (2023-06-23T18:57:14Z)
- Achieving Diversity in Counterfactual Explanations: a Review and Discussion [3.6066164404432883]
In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model.
This paper proposes a review of the numerous, sometimes conflicting, definitions that have been proposed for this notion of diversity.
arXiv Detail & Related papers (2023-05-10T02:09:19Z)
- Group Fairness in Prediction-Based Decision Making: From Moral Assessment to Implementation [0.0]
We introduce a framework for the moral assessment of what fairness means in a given context.
We map the assessment's results to established statistical group fairness criteria.
We extend the FEC principle to cover all types of group fairness criteria.
arXiv Detail & Related papers (2022-10-19T10:44:21Z)
- The Minority Matters: A Diversity-Promoting Collaborative Metric Learning Algorithm [154.47590401735323]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2022-09-30T08:02:18Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, seemingly objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
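The perturbation idea behind the prediction-sensitivity metric can be sketched with a toy finite-difference version. The function below is a hypothetical simplification (the gradient norm of a model's score with respect to its input features), not the accumulated metric defined in that paper:

```python
import numpy as np

def prediction_sensitivity(predict, x, eps=1e-4):
    # Illustrative sensitivity: norm of the finite-difference gradient of
    # the model's score with respect to the input features. A model whose
    # output changes sharply under small input perturbations scores higher.
    # (A hypothetical sketch of the idea, not the paper's exact metric.)
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (predict(x + step) - predict(x - step)) / (2 * eps)
    return float(np.linalg.norm(grad))

# Toy linear model: its sensitivity is exactly the weight-vector norm.
w = np.array([0.5, -2.0, 1.0])
model = lambda x: float(w @ x)
print(prediction_sensitivity(model, [1.0, 0.0, 3.0]))  # ≈ 2.2913 (= ||w||)
```

Restricting the perturbed coordinates to protected attributes would connect such a score to group-fairness notions like statistical parity, in the spirit of the linkage the summary describes.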
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Diversity in Sociotechnical Machine Learning Systems [2.9973947110286163]
There has been a surge of recent interest in sociocultural diversity in machine learning (ML) research.
We present a taxonomy of different diversity concepts from philosophy of science, and explicate the distinct rationales underlying these concepts.
We provide an overview of mechanisms by which diversity can benefit group performance.
arXiv Detail & Related papers (2021-07-19T21:26:38Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
arXiv Detail & Related papers (2020-09-30T16:24:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.