False consensus biases AI against vulnerable stakeholders
- URL: http://arxiv.org/abs/2407.12143v1
- Date: Fri, 17 May 2024 14:33:47 GMT
- Title: False consensus biases AI against vulnerable stakeholders
- Authors: Mengchen Dong, Jean-François Bonnefon, Iyad Rahwan
- Abstract summary: We explore the public acceptability of speed-accuracy trade-offs in populations of claimants and non-claimants.
We observe a general willingness to trade off speed gains for modest accuracy losses, but this aggregate view masks notable divergences between claimants and non-claimants.
- Score: 0.8754707197790144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The deployment of AI systems for welfare benefit allocation allows for accelerated decision-making and faster provision of critical help, but has already led to an increase in unfair benefit denials and false fraud accusations. Collecting data in the US and the UK (N = 2449), we explore the public acceptability of such speed-accuracy trade-offs in populations of claimants and non-claimants. We observe a general willingness to trade off speed gains for modest accuracy losses, but this aggregate view masks notable divergences between claimants and non-claimants. Although welfare claimants comprise a relatively small proportion of the general population (e.g., 20% in the US representative sample), this vulnerable group is much less willing to accept AI deployed in welfare systems, raising concerns that solely using aggregate data for calibration could lead to policies misaligned with stakeholder preferences. Our study further uncovers asymmetric insights between claimants and non-claimants. The latter consistently overestimate claimant willingness to accept speed-accuracy trade-offs, even when financially incentivized for accurate perspective-taking. This suggests that policy decisions influenced by the dominant voice of non-claimants, however well-intentioned, may neglect the actual preferences of those directly affected by welfare AI systems. Our findings underline the need for stakeholder engagement and transparent communication in the design and deployment of these systems, particularly in contexts marked by power imbalances.
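To make the calibration concern concrete, here is a minimal arithmetic sketch. The acceptance rates are hypothetical assumptions; only the 20% claimant share comes from the paper's US representative sample:

```python
# Hypothetical acceptance rates illustrating how an aggregate statistic
# can mask the preferences of a minority subgroup. Only the 20% claimant
# share is taken from the paper; the other numbers are assumed.
claimant_share = 0.20
claimant_accept = 0.35      # assumed: claimants' acceptance of the trade-off
nonclaimant_accept = 0.70   # assumed: non-claimants' acceptance

aggregate = (claimant_share * claimant_accept
             + (1 - claimant_share) * nonclaimant_accept)
print(f"aggregate acceptance: {aggregate:.2f}")        # 0.63 -> looks broadly acceptable
print(f"claimant acceptance:  {claimant_accept:.2f}")  # 0.35 -> the affected group objects
```

Calibrating a policy to the 0.63 aggregate would track the non-claimant majority while overriding the preferences of the group the system actually affects.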
Related papers
- Trading off performance and human oversight in algorithmic policy: evidence from Danish college admissions [11.378331161188022]
Student dropout is a significant concern for educational institutions.
We show that sequential AI models offer more precise and fairer predictions.
We estimate that even the use of simple AI models to guide admissions decisions could yield significant economic benefits.
arXiv Detail & Related papers (2024-11-22T21:12:54Z)
- Long-Term Fairness in Sequential Multi-Agent Selection with Positive Reinforcement [21.44063458579184]
In selection processes such as college admissions or hiring, biasing slightly towards applicants from under-represented groups is hypothesized to provide positive feedback.
We propose the Multi-agent Fair-Greedy policy, which balances greedy score and fairness.
Our results indicate that, while positive reinforcement is a promising mechanism for long-term fairness, policies must be designed carefully to be robust to variations in the evolution model.
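A minimal single-round sketch of the score-plus-fairness idea follows; the class names, weighting scheme, and lambda value are illustrative assumptions, not the paper's Multi-agent Fair-Greedy policy:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    score: float
    group: str  # "majority" or "underrepresented"

def fair_greedy_select(applicants, k, lam=0.05):
    """Greedy selection on score plus a small bonus for the
    under-represented group (the hypothesized positive reinforcement)."""
    adjusted = lambda a: a.score + (lam if a.group == "underrepresented" else 0.0)
    return sorted(applicants, key=adjusted, reverse=True)[:k]

pool = [Applicant(0.90, "majority"), Applicant(0.88, "majority"),
        Applicant(0.86, "underrepresented"), Applicant(0.84, "underrepresented")]
# The bonus lifts the 0.86 applicant above the 0.88 one (0.91 > 0.88).
print(fair_greedy_select(pool, k=2))
```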
arXiv Detail & Related papers (2024-07-10T04:03:23Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Assessing Group Fairness with Social Welfare Optimization [0.9217021281095907]
This paper explores whether a broader conception of social justice, based on optimizing a social welfare function, can be useful for assessing various definitions of parity.
We show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity.
In addition, we find that predictive rate parity is of limited usefulness.
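As a hedged illustration of assessing parity through a welfare lens, the sketch below scores two hypothetical policies under an alpha-fair social welfare function; the utilities and the choice of alpha are assumptions, not the paper's model:

```python
import math

def social_welfare(utilities, alpha=1.0):
    """Alpha-fair social welfare; alpha=1 gives log (Nash-style) welfare."""
    if alpha == 1.0:
        return sum(math.log(u) for u in utilities)
    return sum(u ** (1 - alpha) / (1 - alpha) for u in utilities)

# Hypothetical per-person utilities induced by two selection policies.
parity_policy  = [0.50, 0.50, 0.50, 0.50]  # equal treatment across groups
welfare_policy = [0.70, 0.60, 0.45, 0.40]  # unequal, but higher welfare here

print(social_welfare(parity_policy))   # -2.77
print(social_welfare(welfare_policy))  # -2.58: the welfare optimum departs from parity
```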
arXiv Detail & Related papers (2024-05-19T01:41:04Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Incentivized Communication for Federated Bandits [67.4682056391551]
We introduce an incentivized communication problem for federated bandits, where the server must motivate clients to share data by providing incentives.
We propose the first incentivized communication protocol, namely, Inc-FedUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
arXiv Detail & Related papers (2023-09-21T00:59:20Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment.
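One plausible formalization of such a check (the paper's exact statistic may differ) is to compare normalized per-group confusion matrices and report their total absolute divergence:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def confusion_parity_error(y_true, y_pred, groups):
    """Total absolute gap between the two groups' normalized confusion matrices."""
    mats = []
    for g in np.unique(groups):
        m = groups == g
        cm = confusion_matrix(y_true[m], y_pred[m], labels=[0, 1]).astype(float)
        mats.append(cm / cm.sum())  # normalize to a joint distribution
    return np.abs(mats[0] - mats[1]).sum()

# Toy data: identical error patterns across groups would give 0.0.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(confusion_parity_error(y_true, y_pred, groups))  # 1.0
```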
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
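A toy sketch of the benefit notion: benefit is the gap between predicted outcomes under decision d=1 versus d=0. The structural model and coefficients below are invented for illustration, not the paper's formulation:

```python
def predict_outcome(x, d):
    # Toy structural model with a feature-decision interaction, so the
    # benefit of d=1 differs across individuals (coefficients invented).
    return 0.3 * x + (0.2 + 0.5 * x) * d

def benefit(x):
    return predict_outcome(x, d=1) - predict_outcome(x, d=0)

# If x correlates with the protected attribute, so does the benefit.
print(benefit(0.2), benefit(0.9))  # 0.30 vs 0.65
```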
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
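For intuition, a standard quantification method such as Adjusted Classify-and-Count can estimate the prevalence of an unobserved sensitive attribute from a noisy proxy classifier. This is a generic sketch with hypothetical numbers, not the paper's pipeline:

```python
def adjusted_classify_and_count(pred_pos_rate, tpr, fpr):
    """Invert p_obs = tpr * p + fpr * (1 - p) for the true prevalence p."""
    p = (pred_pos_rate - fpr) / (tpr - fpr)
    return min(max(p, 0.0), 1.0)  # clip to a valid proportion

# Hypothetical: a proxy classifier flags 32% of records as group members,
# with TPR 0.85 and FPR 0.10 measured on an auxiliary labeled sample.
print(adjusted_classify_and_count(0.32, tpr=0.85, fpr=0.10))  # ~0.293
```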
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Improving Fairness of AI Systems with Lossless De-biasing [15.039284892391565]
Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge.
We present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group.
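An illustrative sketch of one "lossless" rebalancing scheme, which keeps every original record and only appends resampled copies from the scarce group; the paper's actual technique may differ (e.g., generating synthetic points):

```python
import numpy as np

def lossless_oversample(X, y, group, disadvantaged, seed=0):
    """Keep all original records; append resampled copies from the
    disadvantaged group until group sizes match."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(group == disadvantaged)
    need = max(int((group != disadvantaged).sum() - idx.size), 0)
    extra = rng.choice(idx, size=need, replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])  # nothing is dropped
    return X[keep], y[keep], group[keep]

X = np.arange(10).reshape(-1, 1); y = np.zeros(10)
g = np.array([0] * 8 + [1] * 2)
Xb, yb, gb = lossless_oversample(X, y, g, disadvantaged=1)
print(len(yb), (gb == 1).sum())  # 16 records, 8 from the scarce group
```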
arXiv Detail & Related papers (2021-05-10T17:38:38Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)