Joint Optimization of AI Fairness and Utility: A Human-Centered Approach
- URL: http://arxiv.org/abs/2002.01621v1
- Date: Wed, 5 Feb 2020 03:31:48 GMT
- Title: Joint Optimization of AI Fairness and Utility: A Human-Centered Approach
- Authors: Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney
- Abstract summary: We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, it is key to acquire and adhere to human policy makers' preferences on how to make the tradeoff among these objectives.
We propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.
- Score: 45.04980664450894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. The AI research community has proposed many methods to measure and mitigate unwanted biases, but few of them involve inputs from human policy makers. We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, and because achieving fairness often requires sacrificing other objectives such as model accuracy, it is key to acquire and adhere to human policy makers' preferences on how to make the tradeoff among these objectives. In this paper, we propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.
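The paper itself does not ship code, but the kind of joint optimization it describes can be sketched as a scalarized objective: a utility loss plus a fairness penalty, weighted by a tradeoff parameter standing in for an elicited policy-maker preference. The snippet below is a minimal illustration under assumptions not taken from the paper: a logistic model, a soft demographic-parity penalty, synthetic data, and a hand-set weight `lam`.

```python
# Illustrative sketch (not the authors' method): jointly optimize utility
# (cross-entropy) and fairness (soft demographic-parity gap), with the
# tradeoff weight lam standing in for an elicited policy-maker preference.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, binary labels y, binary protected attribute a.
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)                 # protected group membership
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    """Cross-entropy utility loss plus lam * squared demographic-parity gap."""
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[a == 1].mean() - p[a == 0].mean()  # difference in mean predicted rates
    loss = ce + lam * gap ** 2
    # Gradient of the cross-entropy term.
    g_ce = X.T @ (p - y) / n
    # Gradient of the parity gap via the chain rule (sigmoid derivative p(1-p)).
    s = p * (1 - p)
    g_gap = (X[a == 1].T @ s[a == 1]) / max((a == 1).sum(), 1) \
          - (X[a == 0].T @ s[a == 0]) / max((a == 0).sum(), 1)
    grad = g_ce + lam * 2 * gap * g_gap
    return loss, grad

# lam encodes the tradeoff preference: larger values sacrifice accuracy
# for a smaller between-group gap. The value here is purely illustrative.
lam = 5.0
w = np.zeros(d)
for _ in range(500):                           # plain gradient descent
    _, g = loss_and_grad(w, lam)
    w -= 0.1 * g

p = sigmoid(X @ w)
print("accuracy:", ((p > 0.5) == y).mean())
print("parity gap:", abs(p[a == 1].mean() - p[a == 0].mean()))
```

Sweeping `lam` traces out a fairness-utility frontier; eliciting where on that frontier a policy maker wants to sit is the kind of preference acquisition the paper advocates.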
Related papers
- Beyond Preferences in AI Alignment [15.878773061188516]
We characterize and challenge the preferentist approach to AI alignment.
We show how preferences fail to capture the thick semantic content of human values.
We argue that AI systems should be aligned with normative standards appropriate to their social roles.
arXiv Detail & Related papers (2024-08-30T03:14:20Z)
- Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
arXiv Detail & Related papers (2024-02-29T12:12:30Z)
- The Fairness Fair: Bringing Human Perception into Collective Decision-Making [16.300744216179545]
We argue that fair solutions should not only be deemed desirable by social planners (designers) but should also be governed by human and societal cognition.
We discuss how achieving this goal requires a broad transdisciplinary approach ranging from computing and AI to behavioral economics and human-AI interaction.
arXiv Detail & Related papers (2023-12-22T03:06:24Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society, in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- AI Fairness: from Principles to Practice [0.0]
This paper summarizes and evaluates various approaches, methods, and techniques for pursuing fairness in AI systems.
It proposes practical guidelines for defining, measuring, and preventing bias in AI.
arXiv Detail & Related papers (2022-07-20T11:37:46Z)
- Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This [0.8889304968879161]
We argue that, akin to human decisions, the judgments of artificial agents should be grounded in moral principles.
Yet a decision-maker can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making.
arXiv Detail & Related papers (2021-11-15T05:39:02Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes a toolbox to help practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)