Towards AI Logic for Social Reasoning
- URL: http://arxiv.org/abs/2110.04452v1
- Date: Sat, 9 Oct 2021 04:35:23 GMT
- Title: Towards AI Logic for Social Reasoning
- Authors: Huimin Dong, Réka Markovich and Leendert van der Torre
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) logic formalizes the reasoning of intelligent
agents. In this paper, we discuss how an argumentation-based AI logic could
also be used to formalize important aspects of social reasoning. Besides reasoning
about the knowledge and actions of individual agents, social AI logic can
also reason about social dependencies among agents using the rights,
obligations and permissions of the agents. We discuss four aspects of social AI
logic. First, we discuss how rights represent relations between the obligations
and permissions of intelligent agents. Second, we discuss how to argue about
the right-to-know, a central issue in the recent discussion of privacy and
ethics. Third, we discuss how a wide variety of conflicts among intelligent
agents can be identified and (sometimes) resolved by comparing formal
arguments. Importantly, to cover the wide range of arguments occurring in daily
life, fallacious arguments can also be represented and reasoned about. Fourth,
we discuss how to argue about the freedom to act for intelligent agents.
Examples from social, legal and ethical reasoning highlight the challenges in
developing social AI logic. The discussion of the four challenges leads to a
research program for argumentation-based social AI logic, contributing towards
the future development of AI logic.
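The abstract's third aspect, resolving conflicts among agents by comparing formal arguments, can be illustrated with Dung-style abstract argumentation. The sketch below computes the grounded extension of a small argumentation framework; the argument names and the privacy scenario are illustrative assumptions, not the paper's own formalisation.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function from the empty set until a fixed point is reached."""
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if every one of
        # its attackers is itself attacked by some argument in `extension`.
        acceptable = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Hypothetical right-to-know scenario: "share" ("the data may be shared")
# is attacked by "privacy" ("sharing violates the right to privacy"),
# which is in turn attacked by "consent" ("the data subject consented").
arguments = {"share", "privacy", "consent"}
attacks = {("privacy", "share"), ("consent", "privacy")}

print(sorted(grounded_extension(arguments, attacks)))  # → ['consent', 'share']
```

Here "consent" defends "share" against "privacy", so both are accepted in the grounded extension, while the attacked-and-undefended "privacy" argument is rejected; this is the sense in which comparing arguments can (sometimes) resolve a conflict.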
Related papers
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know'
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Le Nozze di Giustizia. Interactions between Artificial Intelligence, Law, Logic, Language and Computation with some case studies in Traffic Regulations and Health Care
An important aim of this paper is to convey some basics of mathematical logic to the legal community working with Artificial Intelligence.
After analysing what AI is, we restrict ourselves to rule-based AI, leaving neural networks and machine learning aside.
We will see how mathematical logic interacts with legal rule-based AI practice.
arXiv Detail & Related papers (2024-02-09T15:43:31Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Relational Artificial Intelligence
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety
Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches.
We have titled the arguments (1) incidental utility, (2) deconfusion, (3) precise specification, and (4) prediction.
We have explained the assumptions and claims based on a review of published and informal literature, along with experts who have stated positions on the topic.
arXiv Detail & Related papers (2022-01-09T07:42:37Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Immune Moral Models? Pro-Social Rule Breaking as a Moral Enhancement Approach for Ethical AI
Ethical behaviour is a critical characteristic that we would like in a human-centric AI.
To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules.
arXiv Detail & Related papers (2021-06-17T18:44:55Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Argumentation-based Agents that Explain their Decisions
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning.
Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: partial and complete.
arXiv Detail & Related papers (2020-09-13T02:08:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.