What do we teach to engineering students: embedded ethics, morality, and politics
- URL: http://arxiv.org/abs/2402.02831v1
- Date: Mon, 5 Feb 2024 09:37:52 GMT
- Title: What do we teach to engineering students: embedded ethics, morality, and politics
- Authors: Avigail Ferdman and Emanuele Ratti
- Abstract summary: We propose a framework for integrating ethics modules in engineering curricula.
Our framework analytically decomposes an ethics module into three dimensions.
It provides analytic clarity, i.e. it enables course instructors to locate ethical dilemmas in either the moral or political realm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few years, calls for integrating ethics modules in engineering
curricula have multiplied. Despite this positive trend, a number of issues with
these embedded programs remain. First, learning goals are underspecified. A
second limitation is the conflation of different dimensions under the same
banner, in particular confusion between ethics curricula geared towards
addressing the ethics of individual conduct and curricula geared towards
addressing ethics at the societal level. In this article, we propose a
tripartite framework to overcome these difficulties. Our framework analytically
decomposes an ethics module into three dimensions. First, there is the ethical
dimension, which pertains to the learning goals. Second, there is the moral
dimension, which addresses the moral relevance of engineers' conduct. Finally,
there is the political dimension, which scales up issues of moral relevance at
the civic level. All in all, our framework has two advantages. First, it
provides analytic clarity, i.e. it enables course instructors to locate ethical
dilemmas in either the moral or political realm and to make use of the tools
and resources from moral and political philosophy. Second, it depicts a
comprehensive ethical training, which enables students to both reason about
moral issues in the abstract, and to socially contextualize potential
solutions.
Related papers
- Exploring and steering the moral compass of Large Language Models [55.2480439325792]
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z)
- Quelle éthique pour quelle IA ? [0.0]
This study proposes an analysis of the different types of ethical approaches involved in the ethics of AI.
The author introduces the contemporary need for and meaning of ethics, distinguishes it from other registers of normativity, and underlines its resistance to formalization.
The study concludes with a reflection on the reasons why a human ethics of AI based on a pragmatic practice of contextual ethics remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.
arXiv Detail & Related papers (2024-05-21T08:13:02Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Introduction to ethics in the age of digital communication [1.922823221013346]
This article serves as an introduction to ethics in the field of digital communication.
It gives a brief overview of applied ethics as a practical sub-field of ethics.
The article also discusses the ways in which the nature of ethics in the field of communication has been changing, and the impact of emerging technology on these changes.
arXiv Detail & Related papers (2023-08-28T09:03:15Z)
- Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations [14.120888473204907]
We make and explore connections between moral questions in computer security research and ethics / moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
arXiv Detail & Related papers (2023-02-28T05:39:17Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that incorporates human values explicitly via prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- A Word on Machine Ethics: A Response to Jiang et al. (2021) [36.955224006838584]
We focus on a single case study of the recently proposed Delphi model and offer a critique of the project's proposed method of automating morality judgments.
We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology.
arXiv Detail & Related papers (2021-11-07T19:31:51Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- Reinforcement Learning Under Moral Uncertainty [13.761051314923634]
An ambitious goal for machine learning is to create agents that behave ethically.
While ethical agents could be trained by rewarding correct behavior under a specific moral theory, there remains widespread disagreement about the nature of morality.
This paper proposes two training methods that realize different points among competing desiderata, and trains agents in simple environments to act under moral uncertainty.
arXiv Detail & Related papers (2020-06-08T16:40:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.