Philosophical Specification of Empathetic Ethical Artificial
Intelligence
- URL: http://arxiv.org/abs/2107.10715v1
- Date: Thu, 22 Jul 2021 14:37:46 GMT
- Title: Philosophical Specification of Empathetic Ethical Artificial
Intelligence
- Authors: Michael Timothy Bennett, Yoshihiro Maruyama
- Abstract summary: An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, and inferring intent.
We use enactivism, semiotics, perceptual symbol systems and symbol emergence to specify an agent.
It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In order to construct an ethical artificial intelligence (AI), two
complex problems must be overcome. First, humans do not consistently agree on
what is or is not ethical. Second, contemporary AI and machine learning methods
tend to be blunt instruments that either search for solutions within the bounds
of predefined rules or mimic behaviour. An ethical AI must be capable of
inferring unspoken rules and interpreting nuance and context; it must possess
intent, be able to infer the intent of others, and explain not just its actions
but its intent. Using
enactivism, semiotics, perceptual symbol systems and symbol emergence, we
specify an agent that learns not just arbitrary relations between signs but
their meaning in terms of the perceptual states of its sensorimotor system.
Subsequently it can learn what is meant by a sentence and infer the intent of
others in terms of its own experiences. It has malleable intent because the
meaning of symbols changes as it learns, and its intent is represented
symbolically as a goal. As such it may learn a concept of what is most likely
to be considered ethical by the majority within a population of humans, which
may then be used as a goal. The meaning of abstract symbols is expressed using
perceptual symbols of raw sensorimotor stimuli as the weakest (consistent with
Ockham's Razor) necessary and sufficient concept, an intensional definition
learned from an ostensive definition, from which the extensional definition or
category of all ethical decisions may be obtained. Because these abstract
symbols are the same for both situation and response, the same symbol is used
when either performing or observing an action. This is akin to mirror neurons
in the human brain. Mirror symbols may allow the agent to empathise, because
its own experiences are associated with the symbol, which is also associated
with the observation of another agent experiencing something that symbol
represents.
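To make the abstract's intensional/extensional machinery concrete, here is a minimal toy sketch in Python. It assumes a deliberately simplified world in which a perceptual state is just a set of named sensorimotor features; the feature names, the example states, and the weakest_intension helper are hypothetical illustrations of the Ockham's-razor criterion described above, not the authors' formalism.

```python
from itertools import combinations

# Toy world (all features hypothetical): a perceptual state is a frozenset
# of sensorimotor features; an ostensive definition is a set of labelled states.
positives = [  # states ostensively labelled as instances of the concept
    frozenset({"hand_extended", "object_offered", "other_agent_present"}),
    frozenset({"hand_extended", "object_offered", "other_agent_present", "smiling"}),
]
negatives = [
    frozenset({"hand_extended", "other_agent_present"}),  # nothing offered
    frozenset({"object_offered"}),                        # no recipient
]

def weakest_intension(pos, neg):
    """Weakest (fewest-constraints, per Ockham's Razor) conjunction of
    features that is necessary (shared by every positive example) and
    sufficient (matched by no negative example)."""
    necessary = frozenset.intersection(*pos)
    for k in range(len(necessary) + 1):           # smallest first = weakest
        for cand in map(frozenset, combinations(sorted(necessary), k)):
            if all(not cand <= n for n in neg):   # still excludes every negative
                return cand
    return necessary

def extension(universe, intension):
    """Extensional definition: every state the intension admits."""
    return [s for s in universe if intension <= s]

intension = weakest_intension(positives, negatives)
print(sorted(intension))  # -> ['hand_extended', 'object_offered'] in this toy

# "Mirror symbol": the same intension classifies an observation of another
# agent, because the symbol is grounded in shared perceptual features.
observed = frozenset({"hand_extended", "object_offered",
                      "other_agent_present", "viewed_from_outside"})
universe = positives + negatives + [observed]
print(len(extension(universe, intension)))  # -> 3: both positives plus the observation
```

The sketch only captures the shape of the claim: the intension is learned from ostension, the extension falls out of the intension, and nothing about the symbol itself distinguishes performing an action from observing one.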
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation (a toy compression-based illustration appears after this list).
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning [58.5857133154749]
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage recent advances in LLMs to approximate these two ideal properties.
Our method outperforms prior approaches on extensive activity-understanding tasks.
arXiv Detail & Related papers (2023-11-29T05:27:14Z)
- The Roles of Symbols in Neural-based AI: They are Not What You Think! [25.450989579215708]
We present a novel neuro-symbolic hypothesis and a plausible architecture for intelligent agents.
Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems.
arXiv Detail & Related papers (2023-04-26T15:33:41Z)
- Existence and perception as the basis of AGI (Artificial General Intelligence) [0.0]
Unlike AI, AGI should operate with meanings; this is precisely what distinguishes it from AI.
For an AGI that emulates human thinking, this ability is crucial.
Numerous attempts to define the concept of "meaning" share one significant drawback: none of these definitions is strict or formalized, so none can be programmed.
arXiv Detail & Related papers (2022-01-30T14:06:43Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems [21.314210696069495]
We argue that the need for (human-understandable) symbols in human-AI interaction seems quite compelling.
In particular, humans would want to provide explicit (symbolic) knowledge and advice, and would expect machine explanations in kind.
This alone requires AI systems to at least do their I/O in symbolic terms.
arXiv Detail & Related papers (2021-09-21T01:30:06Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle that unifies any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented by an emotion vector, which is defined by three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Intensional Artificial Intelligence: From Symbol Emergence to Explainable and Empathetic AI [0.0]
We argue that an explainable artificial intelligence must possess a rationale for its decisions, be able to infer the purpose of observed behaviour, and be able to explain its decisions in the context of what its audience understands and intends.
To communicate that rationale requires natural language, a means of encoding and decoding perceptual states.
We propose a theory of meaning in which, to acquire language, an agent should model the world a language describes rather than the language itself.
arXiv Detail & Related papers (2021-04-23T13:13:46Z)
- Symbolic Behaviour in Artificial Intelligence [8.849576130278157]
We argue that the path towards symbolically fluent AI begins with a reinterpretation of what symbols are.
We then outline how this interpretation unifies the behavioural traits humans exhibit when they use symbols.
We suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behaviour to emerge.
arXiv Detail & Related papers (2021-02-05T20:07:14Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
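As a rough, unofficial illustration of what a "complexity-based" criterion like the one in "A Complexity-Based Theory of Compositionality" above might look like, the sketch below uses zlib's compressed length as a crude stand-in for Kolmogorov complexity; the strings, the helper names, and the ratio test are assumptions for illustration, not the paper's actual definition.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length as a (very) crude proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def c_given(x: bytes, context: bytes) -> int:
    """K(x | context) approximated as C(context + x) - C(context), the
    standard trick behind normalized compression distance."""
    return max(c(context + x) - c(context), 0)

parts = b"the cat" + b" sat on the mat"    # representations of the parts
compositional = b"the cat sat on the mat"  # whole reuses the parts' structure
idiosyncratic = b"q7#Lz0v@pR!x8mW2kT5c"    # whole shares nothing with the parts

# A representation is compositional to the extent that the whole is cheap to
# describe given its parts; the first ratio should typically be much smaller.
print(c_given(compositional, parts) / c(compositional))
print(c_given(idiosyncratic, parts) / c(idiosyncratic))
```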