Ethical Frameworks and Computer Security Trolley Problems: Foundations
for Conversations
- URL: http://arxiv.org/abs/2302.14326v2
- Date: Fri, 4 Aug 2023 19:11:15 GMT
- Title: Ethical Frameworks and Computer Security Trolley Problems: Foundations
for Conversations
- Authors: Tadayoshi Kohno, Yasemin Acar, Wulf Loh
- Abstract summary: We make and explore connections between moral questions in computer security research and ethics / moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
- Score: 14.120888473204907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The computer security research community regularly tackles ethical questions.
The field of ethics / moral philosophy has for centuries considered what it
means to be "morally good" or at least "morally allowed / acceptable". Among
philosophy's contributions are (1) frameworks for evaluating the morality of
actions -- including the well-established consequentialist and deontological
frameworks -- and (2) scenarios (like trolley problems) featuring moral
dilemmas that can facilitate discussion about and intellectual inquiry into
different perspectives on moral reasoning and decision-making. In a classic
trolley problem, consequentialist and deontological analyses may render
different opinions. In this research, we explicitly make and explore
connections between moral questions in computer security research and ethics /
moral philosophy through the creation and analysis of trolley problem-like
computer security-themed moral dilemmas and, in doing so, we seek to contribute
to conversations among security researchers about the morality of security
research-related decisions. We explicitly do not seek to define what is morally
right or wrong, nor do we argue for one framework over another. Indeed, the
consequentialist and deontological frameworks that we center, in addition to
coming to different conclusions for our scenarios, have significant
limitations. Instead, by offering our scenarios and by comparing two different
approaches to ethics, we strive to contribute to how the computer security
research field considers and converses about ethical questions, especially when
there are different perspectives on what is morally right or acceptable.
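To make the framework divergence concrete, here is a minimal, hypothetical sketch (not from the paper; the class, function names, and scoring rules are illustrative simplifications) of how a consequentialist evaluation, which compares outcomes, and a deontological evaluation, which checks the act itself against duties, can reach opposite verdicts on the classic trolley problem:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    deaths: int           # outcome: lives lost if this action is taken
    violates_duty: bool   # does the act itself break a duty (e.g., "do not kill")?

def consequentialist_permissible(option: Action, alternatives: list[Action]) -> bool:
    """Crude utilitarian rule: an action is permissible iff no alternative
    produces a strictly better outcome (fewer deaths)."""
    return all(option.deaths <= other.deaths for other in alternatives)

def deontological_permissible(option: Action) -> bool:
    """Crude duty-based rule: an action is permissible iff the act itself
    violates no duty, regardless of its outcome."""
    return not option.violates_duty

# Classic trolley problem: pull the lever (actively kill one) or do nothing (five die).
pull = Action("pull lever", deaths=1, violates_duty=True)   # killing, not merely letting die
abstain = Action("do nothing", deaths=5, violates_duty=False)

options = [pull, abstain]
for a in options:
    print(a.name,
          "| consequentialist:", consequentialist_permissible(a, options),
          "| deontological:", deontological_permissible(a))
# pull lever | consequentialist: True | deontological: False
# do nothing | consequentialist: False | deontological: True
```

The point of this toy model is only that the two frameworks can issue opposite verdicts on the same scenario; as the authors note, both frameworks also have significant limitations that such a simplification ignores.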
Related papers
- Exploring and steering the moral compass of Large Language Models [55.2480439325792]
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z)
- What do we teach to engineering students: embedded ethics, morality, and politics [0.0]
We propose a framework for integrating ethics modules in engineering curricula.
Our framework analytically decomposes an ethics module into three dimensions.
It provides analytic clarity, i.e. it enables course instructors to locate ethical dilemmas in either the moral or political realm.
arXiv Detail & Related papers (2024-02-05T09:37:52Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using large sets of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines [0.0]
Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities.
This paper explores what kind of moral machines are possible based on what computational systems can or cannot do.
arXiv Detail & Related papers (2023-02-08T17:39:58Z)
- Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z)
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions [71.25236662907056]
A moral dialogue system aligned with users' values could enhance conversation engagement and user connections.
We propose a framework, MoralDial, to train and evaluate moral dialogue systems.
arXiv Detail & Related papers (2022-12-21T02:21:37Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that incorporates human values explicitly through prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Moral Dilemmas for Moral Machines [0.0]
I argue that the way moral dilemmas are typically deployed in machine ethics is a misapplication of philosophical thought experiments because it fails to appreciate the purpose of moral dilemmas.
There are, however, appropriate uses of moral dilemmas in machine ethics, and the novel situations that arise in a machine-learning context can shed light on philosophical work in ethics.
arXiv Detail & Related papers (2022-03-11T18:24:39Z)