Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This
- URL: http://arxiv.org/abs/2111.07545v1
- Date: Mon, 15 Nov 2021 05:39:02 GMT
- Title: Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This
- Authors: Gábor Erdélyi, Olivia J. Erdélyi, and Vladimir Estivill-Castro
- Abstract summary: We feel that, akin to human decisions, judgments of artificial agents should necessarily be grounded in some moral principles.
Yet a decision-maker can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making.
- Score: 0.8889304968879161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As artificial intelligence (AI) systems are increasingly involved in
decisions affecting our lives, ensuring that automated decision-making is fair
and ethical has become a top priority. Intuitively, we feel that, akin to human
decisions, judgments of artificial agents should necessarily be grounded in
some moral principles. Yet a decision-maker (whether human or artificial) can
only make truly ethical (based on any ethical theory) and fair (according to
any notion of fairness) decisions if full information on all the relevant
factors on which the decision is based is available at the time of
decision-making. This raises two problems: (1) In settings where we rely on AI
systems that use classifiers obtained with supervised learning, some
induction/generalization is inherent, and some relevant attributes may be
missing even during learning. (2) Modeling such decisions as games reveals that
any pure strategy, however ethical, is inevitably susceptible to exploitation.
Moreover, in many games a Nash equilibrium can only be obtained by using
mixed strategies, i.e., to achieve mathematically optimal outcomes, decisions
must be randomized. In this paper, we argue that in supervised learning
settings there exist random classifiers that perform at least as well as
deterministic classifiers, and that may hence be the optimal choice in many
circumstances. We support our theoretical results with an empirical study
indicating a positive societal attitude towards randomized artificial
decision-makers, and we discuss policy and implementation issues around the
use of random classifiers that are relevant to current AI policy and
standardization initiatives.
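To make the abstract's two formal points concrete, here is a minimal sketch. It is not taken from the paper: the game, the toy data, and the names h0, h1, and randomized_classifier are hypothetical. Part 1 uses matching pennies, a game in which every pure strategy is exploitable and only the 50/50 mixed strategy attains the equilibrium value; Part 2 builds a randomized classifier as a probabilistic mixture of two deterministic classifiers, whose expected accuracy is the corresponding mixture of theirs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Part 1: matching pennies. Row player's payoff: +1 on a match, -1 otherwise.
payoff = np.array([[1, -1],
                   [-1, 1]])

def guaranteed_payoff(p):
    """Row player's worst-case expected payoff when playing action 0 with
    probability p, against an opponent who best-responds."""
    expected_per_column = p * payoff[0] + (1 - p) * payoff[1]
    return expected_per_column.min()

for p in (0.0, 0.25, 0.5, 1.0):
    print(f"P(action 0) = {p:.2f} -> worst-case payoff {guaranteed_payoff(p):+.2f}")
# Only the mixed strategy p = 0.5 guarantees 0; both pure strategies lose -1.

# Part 2: a randomized classifier as a mixture of two deterministic rules
# on invented toy data with binary labels.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

def h0(X):
    return (X[:, 0] > 0).astype(int)  # informative deterministic rule

def h1(X):
    return (X[:, 1] > 0).astype(int)  # uninformative deterministic rule

def randomized_classifier(X, alpha):
    """Predict with h0 with probability alpha, otherwise with h1."""
    use_h0 = rng.random(len(X)) < alpha
    return np.where(use_h0, h0(X), h1(X))

for alpha in (0.0, 0.5, 1.0):
    acc = (randomized_classifier(X, alpha) == y).mean()
    print(f"alpha = {alpha:.1f} -> accuracy {acc:.3f}")
# Expected accuracy is alpha * acc(h0) + (1 - alpha) * acc(h1), so the mixture
# can always match the better deterministic rule (alpha = 1) while, unlike a
# fixed pure strategy, keeping individual predictions unpredictable.
```

The sketch only illustrates the "at least as well in expectation" direction and the exploitability of pure strategies; the paper's theoretical results and its empirical study of societal attitudes go beyond this toy setup.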
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems: human-alone, human-with-AI, and AI-alone.
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
arXiv Detail & Related papers (2024-03-18T01:04:52Z)
- Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions [80.34972679938483]
We introduce Conformal Decision Theory, a framework for producing safe autonomous decisions despite imperfect machine learning predictions.
Decisions produced by our algorithms are safe in the sense that they come with provable statistical guarantees of having low risk.
Experiments demonstrate the utility of our approach in robot motion planning around humans, automated stock trading, and robot manufacturing.
arXiv Detail & Related papers (2023-10-09T17:59:30Z)
- Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making [0.0]
We argue that standard AI Fairness approaches for predictive machine learning are not suitable for all causal machine learning applications.
We argue that policy-making is best seen as a joint decision where the causal machine learning model usually only has indirect power.
arXiv Detail & Related papers (2023-09-02T03:06:14Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Why we need biased AI -- How including cognitive and ethical machine biases can enhance AI systems [0.0]
We argue for the structurewise implementation of human cognitive biases in learning algorithms.
In order to achieve ethical machine behavior, filter mechanisms have to be applied.
This paper is the first tentative step to explicitly pursue the idea of a re-evaluation of the ethical significance of machine biases.
arXiv Detail & Related papers (2022-03-18T12:39:35Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes a toolbox to help practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)