Fairness Preferences, Actual and Hypothetical: A Study of Crowdworker
Incentives
- URL: http://arxiv.org/abs/2012.04216v1
- Date: Tue, 8 Dec 2020 05:00:57 GMT
- Title: Fairness Preferences, Actual and Hypothetical: A Study of Crowdworker
Incentives
- Authors: Angie Peng and Jeff Naecker and Ben Hutchinson and Andrew Smart and
Nyalleng Moorosi
- Abstract summary: This paper outlines a research program and experimental designs for investigating these questions.
The voting is hypothetical (not tied to an outcome) for half the group and actual (tied to the actual payment outcome) for the other half, so that we can understand the relation between a group's actual preferences and hypothetical (stated) preferences.
- Score: 1.854931308524932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How should we decide which fairness criteria or definitions to adopt in
machine learning systems? To answer this question, we must study the fairness
preferences of actual users of machine learning systems. Stringent parity
constraints on treatment or impact can come with trade-offs, and may not even
be preferred by the social groups in question (Zafar et al., 2017). Thus it
might be beneficial to elicit what the group's preferences are, rather than
rely on a priori defined mathematical fairness constraints. Simply asking for
self-reported rankings of users is challenging because research has shown that
there are often gaps between people's stated and actual preferences (Bernheim et
al., 2013).
This paper outlines a research program and experimental designs for
investigating these questions. Participants in the experiments are invited to
perform a set of tasks in exchange for a base payment--they are told upfront
that they may receive a bonus later on, and the bonus could depend on some
combination of output quantity and quality. The same group of workers then
votes on a bonus payment structure, to elicit preferences. The voting is
hypothetical (not tied to an outcome) for half the group and actual (tied to
the actual payment outcome) for the other half, so that we can understand the
relation between a group's actual preferences and hypothetical (stated)
preferences. Connections and lessons from fairness in machine learning are
explored.
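To make the experimental design above concrete, the following is a minimal illustrative sketch (not the authors' code): workers are randomly split into a hypothetical-vote arm and an actual-vote arm, each worker casts a vote for a bonus payment structure, and the vote shares of the two arms can then be compared. The bonus-structure options and all names in the snippet are assumptions made for illustration.

```python
# Illustrative sketch of the two-arm voting design (not the authors' code).
import random
from collections import Counter

# Hypothetical bonus payment structures offered to voters (illustrative labels only).
BONUS_STRUCTURES = ["equal_split", "quantity_based", "quality_based"]

def assign_arms(worker_ids, seed=0):
    """Randomly split workers into a 'hypothetical' and an 'actual' voting arm."""
    rng = random.Random(seed)
    shuffled = list(worker_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"hypothetical": shuffled[:half], "actual": shuffled[half:]}

def vote_shares(votes):
    """Fraction of votes cast for each bonus structure within one arm."""
    counts = Counter(votes)
    total = len(votes)
    return {option: counts[option] / total for option in BONUS_STRUCTURES}

if __name__ == "__main__":
    workers = [f"w{i}" for i in range(100)]
    arms = assign_arms(workers)
    # In the real study the votes come from participants; random placeholders here.
    rng = random.Random(1)
    votes = {arm: [rng.choice(BONUS_STRUCTURES) for _ in members]
             for arm, members in arms.items()}
    for arm, arm_votes in votes.items():
        print(arm, vote_shares(arm_votes))
```

Any systematic difference between the two arms' vote shares is the stated-versus-actual preference gap the study is designed to measure.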
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome
Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems.
arXiv Detail & Related papers (2022-11-25T09:33:11Z)
- Equal Experience in Recommender Systems [21.298427869586686]
We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term, as well as introduce computationally-efficient algorithms that solve the optimization.
arXiv Detail & Related papers (2022-10-12T05:53:05Z)
- Group Meritocratic Fairness in Linear Contextual Bandits [32.15680917495674]
We study the linear contextual bandit problem where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group.
We propose a notion of fairness that states that the agent's policy is fair when it selects a candidate with the highest relative rank.
arXiv Detail & Related papers (2022-06-07T09:54:38Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Bayes-Optimal Classifiers under Group Fairness [32.52143951145071]
This paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness.
We propose a group-based thresholding method, FairBayes, that can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff.
arXiv Detail & Related papers (2022-02-20T03:35:44Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Intersectional Affirmative Action Policies for Top-k Candidates
Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
arXiv Detail & Related papers (2020-07-29T12:27:18Z)
- A survey of bias in Machine Learning through the prism of Statistical
Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
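The last entry above relies on the standard Disparate Impact index, which is the ratio of positive-outcome rates between an unprivileged and a privileged group: DI = P(positive prediction | unprivileged) / P(positive prediction | privileged). Below is a minimal sketch of that index on toy data; the group labels and predictions are illustrative and not taken from the cited paper.

```python
# Minimal sketch (not the cited paper's code) of the standard Disparate Impact index:
#   DI = P(Y_hat = 1 | A = unprivileged) / P(Y_hat = 1 | A = privileged)
def disparate_impact(predictions, groups, unprivileged, privileged):
    """Ratio of positive-prediction rates between two groups (1.0 means parity)."""
    def positive_rate(group):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        if not preds:
            raise ValueError(f"no samples for group {group!r}")
        return sum(preds) / len(preds)
    return positive_rate(unprivileged) / positive_rate(privileged)

# Toy usage: 1 = predicted income above 50K (as in the Adult data set), 0 otherwise.
predictions = [1, 0, 0, 0, 1, 1, 1, 0]
sex         = ["F", "F", "F", "F", "M", "M", "M", "M"]
di = disparate_impact(predictions, sex, unprivileged="F", privileged="M")
print(di)  # 0.25 / 0.75 = 0.333..., well below the conventional 0.8 four-fifths threshold
```

Values close to 1.0 indicate parity between the two groups; the 0.8 figure in the comment is the conventional four-fifths rule of thumb for flagging disparate impact.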