Moral Machine or Tyranny of the Majority?
- URL: http://arxiv.org/abs/2305.17319v1
- Date: Sat, 27 May 2023 00:49:11 GMT
- Title: Moral Machine or Tyranny of the Majority?
- Authors: Michael Feffer, Hoda Heidari, and Zachary C. Lipton
- Abstract summary: In the Moral Machine project, researchers crowdsourced answers to "Trolley Problems" concerning autonomous vehicles.
Noothigattu et al. proposed inferring linear functions that approximate each individual's preferences and aggregating these linear models by averaging parameters across the population.
In this paper, we examine this averaging mechanism, focusing on fairness concerns in the presence of strategic effects.
- Score: 25.342029326793917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With Artificial Intelligence systems increasingly applied in consequential
domains, researchers have begun to ask how these systems ought to act in
ethically charged situations where even humans lack consensus. In the Moral
Machine project, researchers crowdsourced answers to "Trolley Problems"
concerning autonomous vehicles. Subsequently, Noothigattu et al. (2018)
proposed inferring linear functions that approximate each individual's
preferences and aggregating these linear models by averaging parameters across
the population. In this paper, we examine this averaging mechanism, focusing on
fairness concerns in the presence of strategic effects. We investigate a simple
setting where the population consists of two groups, with the minority
constituting an α < 0.5 share of the population. To simplify the
analysis, we consider the extreme case in which within-group preferences are
homogeneous. Focusing on the fraction of contested cases where the minority
group prevails, we make the following observations: (a) even when all parties
report their preferences truthfully, the fraction of disputes where the
minority prevails is less than proportionate in α; (b) the degree of
sub-proportionality grows more severe as the level of disagreement between the
groups increases; (c) when parties report preferences strategically, pure
strategy equilibria do not always exist; and (d) whenever a pure strategy
equilibrium exists, the majority group prevails 100% of the time. These
findings raise concerns about the stability and fairness of preference-vector
averaging as a mechanism for aggregating diverging voices. Finally, we discuss
alternatives, including randomized dictatorship and median-based mechanisms.
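To make the averaging mechanism concrete, below is a minimal simulation sketch, not the authors' code: the Gaussian case features, dimension, and α value are invented for illustration. Two homogeneous groups each hold one linear preference vector, the population averages the parameters, and we count how often the averaged model sides with the minority on contested cases.

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha, n_cases = 10, 0.3, 100_000    # feature dim, minority share, cases

# Homogeneous within-group preferences: one linear utility vector per group.
w_min = rng.normal(size=d)              # minority preference vector
w_maj = rng.normal(size=d)              # majority preference vector

# Averaging mechanism: population-weighted mean of individual parameters.
w_avg = alpha * w_min + (1 - alpha) * w_maj

# Each case is a feature-difference vector x; a linear model with weights w
# picks the first option iff w @ x > 0.
X = rng.normal(size=(n_cases, d))
min_choice, maj_choice = X @ w_min > 0, X @ w_maj > 0
avg_choice = X @ w_avg > 0

# Contested cases are those where the two groups disagree.
contested = min_choice != maj_choice
minority_wins = contested & (avg_choice == min_choice)
print(f"contested fraction of cases: {contested.mean():.3f}")
print(f"minority prevails among contested: "
      f"{minority_wins.sum() / contested.sum():.3f}  (alpha = {alpha})")
```

On typical runs of this toy, the minority's share of contested wins comes out below α, in line with observation (a); varying α and the angle between the two preference vectors is a way to probe observation (b) as well.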
Related papers
- Long-Term Fairness in Sequential Multi-Agent Selection with Positive Reinforcement [21.44063458579184]
In selection processes such as college admissions or hiring, slightly biasing selection towards applicants from under-represented groups is hypothesized to provide positive feedback.
We propose the Multi-agent Fair-Greedy policy, which balances greedy score and fairness.
Our results indicate that, while positive reinforcement is a promising mechanism for long-term fairness, policies must be designed carefully to be robust to variations in the evolution model.
arXiv Detail & Related papers (2024-07-10T04:03:23Z)
- How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance [64.1656365676171]
Group imbalance has been a known problem in empirical risk minimization.
This paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance.
arXiv Detail & Related papers (2024-03-12T04:38:05Z)
- Average-Case Analysis of Iterative Voting [33.68929251752289]
Iterative voting is a natural model of repeated strategic decision-making in social choice.
Prior work has analyzed the efficacy of iterative plurality on the welfare of the chosen outcome at equilibrium.
This work extends average-case analysis to a wider class of distributions and distinguishes when iterative plurality improves or degrades welfare (a toy sketch of the dynamics follows this entry).
arXiv Detail & Related papers (2024-02-13T00:46:46Z)
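As a companion to the iterative-voting entry above, here is a toy sketch of iterative plurality under best-response dynamics. The voter counts and random cardinal utilities are invented for illustration; the paper's average-case welfare analysis is far more general than this single run.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voters, n_cands = 15, 4
util = rng.random((n_voters, n_cands))   # random cardinal utilities

def winner(votes):
    # Plurality winner; ties broken toward the lowest candidate index.
    return np.bincount(votes, minlength=n_cands).argmax()

votes = util.argmax(axis=1)              # round 0: everyone votes truthfully
truthful_winner = winner(votes)

# Iterative plurality: one voter at a time switches their vote whenever some
# deviation yields a winner they strictly prefer.  The dynamics need not
# converge in general, so we cap the number of sweeps.
for _ in range(100):
    deviated = False
    for i in range(n_voters):
        for c in range(n_cands):
            trial = votes.copy()
            trial[i] = c
            if util[i, winner(trial)] > util[i, winner(votes)]:
                votes, deviated = trial, True
                break
    if not deviated:                     # pure-strategy equilibrium reached
        break

print("utilitarian welfare, truthful winner:   ", util[:, truthful_winner].sum())
print("utilitarian welfare, equilibrium winner:", util[:, winner(votes)].sum())
```

Comparing the two welfare numbers over many random utility draws is the flavor of question the average-case analysis addresses.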
- Adversarial Reweighting Guided by Wasserstein Distance for Bias Mitigation [24.160692009892088]
Under-representation of minorities in the data makes disparate treatment of subpopulations difficult to address during learning.
We propose a novel adversarial reweighting method to address such representation bias.
arXiv Detail & Related papers (2023-11-21T15:46:11Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity opens up novel mechanisms for institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Improving Fairness of AI Systems with Lossless De-biasing [15.039284892391565]
Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge.
We present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group.
arXiv Detail & Related papers (2021-05-10T17:38:38Z)
- Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long-tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z)
- Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves accuracy on each group (a toy illustration of the magnification effect follows this entry).
arXiv Detail & Related papers (2020-10-27T08:51:30Z)
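The magnification effect in the selective-classification entry above can be reproduced with a deliberately constructed toy in which one group's confidence scores are uninformative. The 80/20 group split, 85% base accuracy, and 0.7 threshold are all invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

group_a = rng.random(n) < 0.8            # True = group A (majority)
correct = rng.random(n) < 0.85           # both groups: 85% base accuracy

# Group A's confidence tracks correctness; group B's is pure noise.
conf = np.where(group_a,
                0.5 + 0.5 * correct * rng.random(n),
                rng.random(n))

def report(mask, label):
    print(label)
    for name, g in (("A", group_a), ("B", ~group_a)):
        sel = mask & g
        print(f"  group {name}: coverage {sel.mean() / g.mean():.2f}, "
              f"accuracy {correct[sel].mean():.3f}")

report(np.ones(n, dtype=bool), "full coverage:")
report(conf > 0.7, "selective (abstain when conf <= 0.7):")
```

Both groups start at the same accuracy, yet abstaining on low-confidence inputs widens the gap: group A's retained predictions become near-perfect while group B's accuracy is unchanged. That is the magnification effect the paper documents in real models.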
- On Fair Selection in the Presence of Implicit Variance [17.517529275692322]
We argue that even in the absence of implicit bias, the estimates of candidates' quality from different groups may differ in another fundamental way, namely, in their variance.
We propose a simple model in which candidates have a true latent quality that is drawn from a group-independent normal distribution.
We show that the demographic parity mechanism always increases the selection utility, while any γ-rule weakly increases it (a quick simulation of this model follows this entry).
arXiv Detail & Related papers (2020-06-24T13:08:31Z)
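The implicit-variance model above is easy to simulate. The sketch below invents the group sizes, noise scales, and selection budget, and compares group-oblivious top-k selection against a demographic-parity rule on the expected true quality of the selected candidates.

```python
import numpy as np

rng = np.random.default_rng(3)
n, budget = 100_000, 0.05                # candidates and selection rate

group_b = rng.random(n) < 0.5            # two equal-size groups
quality = rng.normal(size=n)             # group-independent latent quality
noise_sd = np.where(group_b, 1.0, 0.25)  # group B is estimated more noisily
estimate = quality + rng.normal(size=n) * noise_sd

# Group-oblivious selection: top k by noisy estimate.
top = np.argsort(estimate)[-int(budget * n):]

# Demographic parity: top budget-fraction within each group separately.
dp = np.concatenate([
    np.flatnonzero(g)[np.argsort(estimate[g])[-int(budget * g.sum()):]]
    for g in (group_b, ~group_b)
])

print(f"selection utility, group-oblivious:     {quality[top].mean():.3f}")
print(f"selection utility, demographic parity:  {quality[dp].mean():.3f}")
print(f"group-B share: oblivious {group_b[top].mean():.2f}, "
      f"DP {group_b[dp].mean():.2f}")
```

In this toy run the noisier group is over-represented under oblivious selection even though its selected members have lower expected true quality, so the parity rule comes out ahead, matching the paper's qualitative claim.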
- Public Bayesian Persuasion: Being Almost Optimal and Almost Persuasive [57.47546090379434]
We study the public persuasion problem in the general setting with: (i) arbitrary state spaces; (ii) arbitrary action spaces; (iii) arbitrary sender's utility functions.
We provide a quasi-polynomial time bi-criteria approximation algorithm for arbitrary public persuasion problems that, in specific settings, yields a QPTAS.
arXiv Detail & Related papers (2020-02-12T18:59:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.