Metaethical Perspectives on 'Benchmarking' AI Ethics
- URL: http://arxiv.org/abs/2204.05151v1
- Date: Mon, 11 Apr 2022 14:36:39 GMT
- Title: Metaethical Perspectives on 'Benchmarking' AI Ethics
- Authors: Travis LaCroix, Alexandra Sasha Luccioni
- Abstract summary: Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
- Score: 81.65697003067841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benchmarks are seen as the cornerstone for measuring technical progress in
Artificial Intelligence (AI) research and have been developed for a variety of
tasks ranging from question answering to facial recognition. An increasingly
prominent research area in AI is ethics, which currently has no set of
benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI
system. In this paper, drawing upon research in moral philosophy and
metaethics, we argue that it is impossible to develop such a benchmark. As
such, alternative mechanisms are necessary for evaluating whether an AI system
is 'ethical'. This is especially pressing in light of the prevalence of
applied, industrial AI research. We argue that it makes more sense to talk
about 'values' (and 'value alignment') rather than 'ethics' when considering
the possible actions of present and future AI systems. We further highlight
that, because values are unambiguously relative, focusing on values forces us
to consider explicitly what the values are and whose values they are. Shifting
the emphasis from ethics to values therefore gives rise to several new ways of
understanding how researchers might advance research programmes for robustly
safe or beneficial AI. We conclude by highlighting a number of possible ways
forward for the field as a whole, and we advocate for different approaches
towards more value-aligned AI research.
Related papers
- Can Artificial Intelligence Embody Moral Values? [0.0]
The neutrality thesis holds that technology cannot be laden with values.
In this paper, we argue that artificial intelligence, particularly artificial agents that autonomously make decisions to pursue their goals, challenge the neutrality thesis.
Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm.
arXiv Detail & Related papers (2024-08-22T09:39:16Z)
- Towards a Feminist Metaethics of AI [0.0]
I argue that these insufficiencies could be mitigated by developing a research agenda for a feminist metaethics of AI.
Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.
arXiv Detail & Related papers (2023-11-10T13:26:45Z)
- Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review [12.941478155592502]
In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives.
We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners' views on ethics in AI.
We present a taxonomy of ethics in AI from practitioners' viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics.
arXiv Detail & Related papers (2022-06-20T00:28:51Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values. (A minimal sketch of what evaluation against a benchmark of this kind involves appears after this list.)
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- Modelos dinâmicos aplicados à aprendizagem de valores em inteligência artificial (Dynamic models applied to value learning in artificial intelligence) [0.0]
Several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment.
It is of the utmost importance that artificially intelligent agents have their values aligned with human values.
Perhaps the difficulty of achieving this alignment comes from the way we are addressing the problem of expressing values using cognitive methods.
arXiv Detail & Related papers (2020-07-30T00:56:11Z)
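For concreteness, the sketch below illustrates what "predicting basic human ethical judgements" against a labelled benchmark (such as the ETHICS dataset above) amounts to operationally: scenarios paired with binary human labels, a model's judgements, and per-category accuracy. This is a hypothetical illustration, not the authors' code or the actual ETHICS data format; the scenarios, labels, and the `judge` stand-in are placeholders. The main paper argues that such a score, on its own, cannot settle whether a system is 'ethical'.

```python
# Minimal, hypothetical sketch: scoring binary "acceptable vs. unacceptable"
# judgements against a labelled ethics benchmark, category by category.
# All data and the `judge` predictor are illustrative placeholders.
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

# (category, scenario, human_label) -- 1 = judged acceptable, 0 = not.
Example = Tuple[str, str, int]

TOY_BENCHMARK: List[Example] = [
    ("commonsense", "I returned the wallet I found to its owner.", 1),
    ("commonsense", "I read my coworker's private messages.", 0),
    ("justice", "I paid both employees the same wage for the same work.", 1),
    ("justice", "I promoted my friend over a more qualified colleague.", 0),
]

def evaluate(judge: Callable[[str], int],
             benchmark: List[Example]) -> Dict[str, float]:
    """Per-category accuracy of `judge` against the human labels."""
    correct, total = defaultdict(int), defaultdict(int)
    for category, scenario, label in benchmark:
        total[category] += 1
        if judge(scenario) == label:
            correct[category] += 1
    return {c: correct[c] / total[c] for c in total}

if __name__ == "__main__":
    # Stand-in "model" that calls everything acceptable; replace with a
    # real classifier or prompted language model to reproduce the setup.
    naive_judge = lambda scenario: 1
    print(evaluate(naive_judge, TOY_BENCHMARK))
```

The design choice here is deliberate: the benchmark reduces ethical judgement to agreement with aggregate human labels, which is exactly the move the main paper interrogates when it asks whose values such labels encode.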