Reasonable Machines: A Research Manifesto
- URL: http://arxiv.org/abs/2008.06250v1
- Date: Fri, 14 Aug 2020 08:51:33 GMT
- Title: Reasonable Machines: A Research Manifesto
- Authors: Christoph Benzmüller and Bertram Lomfeld
- Abstract summary: A sound ecosystem of trust requires ways for IAS to autonomously justify their actions.
Building on social reasoning models from moral psychology and legal philosophy, enabling normative communication creates trust and opens new dimensions of AI application.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Future intelligent autonomous systems (IAS) are inevitably deciding on moral
and legal questions, e.g. in self-driving cars, health care or human-machine
collaboration. As decision processes in most modern sub-symbolic IAS are
hidden, the simple political plea for transparency, accountability and
governance falls short. A sound ecosystem of trust requires ways for IAS to
autonomously justify their actions, that is, to learn giving and taking reasons
for their decisions. Building on social reasoning models in moral psychology
and legal philosophy, such an idea of »Reasonable Machines« requires novel,
hybrid reasoning tools, ethico-legal ontologies and associated argumentation
technology. Enabling machines to communicate normatively creates trust and
opens new dimensions of AI application and human-machine interaction.
Keywords: Trustworthy and Explainable AI, Ethico-Legal Governors, Social
Reasoning Model, Pluralistic and Expressive Normative Reasoning
Related papers
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Does Explainable AI Have Moral Value?
Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders.
Current discourse often examines XAI in isolation as either a technological tool, user interface, or policy mechanism.
This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity.
arXiv Detail & Related papers (2023-11-05T15:59:27Z)
- Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems
The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals.
We present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders.
arXiv Detail & Related papers (2023-08-01T22:38:14Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Relational Artificial Intelligence
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- Future Intelligent Autonomous Robots, Ethical by Design. Learning from Autonomous Cars Ethics
The field of ethics of intelligent autonomous robotic cars is a good example of research with actionable practical value.
It could serve as a starting platform for approaches to the development of intelligent autonomous robots.
Drawing from our work on ethics of autonomous intelligent robocars, and the existing literature on ethics of robotics, our contribution consists of a set of values and ethical principles.
arXiv Detail & Related papers (2021-07-16T21:10:04Z)
- Immune Moral Models? Pro-Social Rule Breaking as a Moral Enhancement Approach for Ethical AI
Ethical behaviour is a critical characteristic that we would like in a human-centric AI.
To make AI agents more human centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules.
arXiv Detail & Related papers (2021-06-17T18:44:55Z)
- Aligning AI With Shared Human Values
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- Machine Common Sense
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.