Beyond Bias and Compliance: Towards Individual Agency and Plurality of
Ethics in AI
- URL: http://arxiv.org/abs/2302.12149v1
- Date: Thu, 23 Feb 2023 16:33:40 GMT
- Title: Beyond Bias and Compliance: Towards Individual Agency and Plurality of
Ethics in AI
- Authors: Thomas Krendl Gilbert, Megan Welle Brozek, Andrew Brozek
- Abstract summary: We argue that the way data is labeled plays an essential role in the way AI behaves.
We propose an alternative path that allows for the plurality of values and the freedom of individual expression.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI ethics is an emerging field with multiple, competing narratives about how
to best solve the problem of building human values into machines. Two major
approaches are focused on bias and compliance, respectively. But neither of
these ideas fully encompasses ethics: using moral principles to decide how to
act in a particular situation. Our method posits that the way data is labeled
plays an essential role in the way AI behaves, and therefore in the ethics of
machines themselves. The argument combines a fundamental insight from ethics
(i.e. that ethics is about values) with our practical experience building and
scaling machine learning systems. We want to build AI that is actually ethical
by first addressing foundational concerns: how to build good systems, how to
define what is good in relation to system architecture, and who should provide
that definition.
Building ethical AI creates a foundation of trust between a company and the
users of its platform. But this trust is unjustified unless users experience
the direct value of ethical AI. Until users have real control over how
algorithms behave, something is missing in current AI solutions. This causes
massive distrust in AI, and apathy towards AI ethics solutions. The scope of
this paper is to propose an alternative path that allows for the plurality of
values and the freedom of individual expression. Both are essential for
realizing true moral character.
Related papers
- Why should we ever automate moral decision making? [30.428729272730727]
Concerns arise when AI is involved in decisions with significant moral implications.
Moral reasoning lacks a broadly accepted framework.
An alternative approach involves AI learning from human moral decisions.
arXiv Detail & Related papers (2024-07-10T13:59:22Z)
- Towards a Feminist Metaethics of AI [0.0]
I argue that these insufficiencies could be mitigated by developing a research agenda for a feminist metaethics of AI.
Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.
arXiv Detail & Related papers (2023-11-10T13:26:45Z)
- If our aim is to build morality into an artificial agent, how might we begin to go about doing so? [0.0]
We discuss the different aspects that should be considered when building moral agents, including the most relevant moral paradigms and challenges.
We propose solutions including a hybrid approach to design and a hierarchical approach to combining moral paradigms.
arXiv Detail & Related papers (2023-10-12T12:56:12Z)
- AI Ethics Issues in Real World: Evidence from AI Incident Database [0.6091702876917279]
We identify 13 application areas which often see unethical use of AI, with intelligent service robots, language/vision models and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms.
arXiv Detail & Related papers (2022-06-15T16:25:57Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- From the Ground Truth Up: Doing AI Ethics from Practice to Principles [0.0]
Recent AI ethics has focused on applying abstract principles downward to practice.
This paper moves in the other direction.
Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems.
arXiv Detail & Related papers (2022-01-05T15:33:33Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.