Ethics of Technology needs more Political Philosophy
- URL: http://arxiv.org/abs/2001.03511v1
- Date: Fri, 10 Jan 2020 15:27:02 GMT
- Title: Ethics of Technology needs more Political Philosophy
- Authors: Johannes Himmelreich
- Abstract summary: The ongoing debate on the ethics of self-driving cars typically focuses on two approaches to answering ethical questions: moral philosophy and social science.
Political philosophy adds three basic concerns to our conceptual toolkit: reasonable pluralism, human agency, and legitimacy.
These three concerns have so far been largely overlooked in the debate on the ethics of self-driving cars.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ongoing debate on the ethics of self-driving cars typically focuses on
two approaches to answering ethical questions: moral philosophy and social
science. I argue that these two approaches are both lacking. We should neither
deduce answers from individual moral theories nor should we expect social
science to give us complete answers. To supplement these approaches, we should
turn to political philosophy. The issues we face are collective decisions that
we make together rather than individual decisions we make in light of what we
each have reason to value. Political philosophy adds three basic concerns to
our conceptual toolkit: reasonable pluralism, human agency, and legitimacy.
These three concerns have so far been largely overlooked in the debate on the
ethics of self-driving cars.
Related papers
- Quelle éthique pour quelle IA ? (What Ethics for What AI?) [0.0]
This study analyzes the different types of ethical approaches involved in the ethics of AI.
The author introduces the contemporary need for and meaning of ethics, distinguishes it from other registers of normativity, and underlines its resistance to formalization.
The study concludes with a reflection on why a human ethics of AI, grounded in a pragmatic practice of contextual ethics, remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.
arXiv Detail & Related papers (2024-05-21T08:13:02Z) - Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis [0.0]
We give two competing visions of what it means to be an (ethical) agent.
We argue that in the context of ethically-significant behavior, AI should be viewed not as an agent but as the outcome of political processes.
arXiv Detail & Related papers (2024-04-22T04:19:24Z) - What do we teach to engineering students: embedded ethics, morality, and politics [0.0]
We propose a framework for integrating ethics modules in engineering curricula.
Our framework analytically decomposes an ethics module into three dimensions.
It provides analytic clarity, i.e., it enables course instructors to locate ethical dilemmas in either the moral or the political realm.
arXiv Detail & Related papers (2024-02-05T09:37:52Z) - If our aim is to build morality into an artificial agent, how might we begin to go about doing so? [0.0]
We discuss the different aspects that should be considered when building moral agents, including the most relevant moral paradigms and challenges.
We propose solutions including a hybrid approach to design and a hierarchical approach to combining moral paradigms.
arXiv Detail & Related papers (2023-10-12T12:56:12Z) - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly bottom-up: they train models on large sets of annotated data reflecting crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z) - Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations [14.120888473204907]
We make and explore connections between moral questions in computer security research and ethics / moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
arXiv Detail & Related papers (2023-02-28T05:39:17Z) - ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that incorporates human values explicitly through prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.