Intensional Artificial Intelligence: From Symbol Emergence to
Explainable and Empathetic AI
- URL: http://arxiv.org/abs/2104.11573v1
- Date: Fri, 23 Apr 2021 13:13:46 GMT
- Title: Intensional Artificial Intelligence: From Symbol Emergence to
Explainable and Empathetic AI
- Authors: Michael Timothy Bennett, Yoshihiro Maruyama
- Abstract summary: We argue that an explainable artificial intelligence must possess a rationale for its decisions, be able to infer the purpose of observed behaviour, and be able to explain its decisions in the context of what its audience understands and intends.
To communicate that rationale requires natural language, a means of encoding and decoding perceptual states.
We propose a theory of meaning in which, to acquire language, an agent should model the world a language describes rather than the language itself.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We argue that an explainable artificial intelligence must possess a rationale
for its decisions, be able to infer the purpose of observed behaviour, and be
able to explain its decisions in the context of what its audience understands
and intends. To address these issues we present four novel contributions.
Firstly, we define an arbitrary task in terms of perceptual states, and discuss
two extremes of a domain of possible solutions. Secondly, we define the
intensional solution. Optimal by some definitions of intelligence, it describes
the purpose of a task. An agent possessed of it has a rationale for its
decisions in terms of that purpose, expressed in a perceptual symbol system
grounded in hardware. Thirdly, to communicate that rationale requires natural
language, a means of encoding and decoding perceptual states. We propose a
theory of meaning in which, to acquire language, an agent should model the
world a language describes rather than the language itself. If the utterances
of humans are of predictive value to the agent's goals, then the agent will
imbue those utterances with meaning in terms of its own goals and perceptual
states. In the context of Peircean semiotics, a community of agents must share
rough approximations of signs, referents and interpretants in order to
communicate. Meaning exists only in the context of intent, so to communicate
with humans an agent must have comparable experiences and goals. An agent that
learns intensional solutions, compelled by objective functions somewhat
analogous to human motivators such as hunger and pain, may be capable of
explaining its rationale not just in terms of its own intent, but in terms of
what its audience understands and intends. It forms some approximation of the
perceptual states of humans.
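
The abstract's theory of meaning lends itself to a toy illustration: an utterance acquires meaning for an agent only insofar as it helps predict perceptual states relevant to the agent's goals, echoing the Peircean sign/referent/interpretant triad. The sketch below is not from the paper; the class name, the co-occurrence counting, and the predictive-value score are illustrative assumptions standing in for the formal intensional machinery.

```python
from collections import defaultdict

class ToyAgent:
    """Illustrative sketch (not the paper's formalism): an agent imbues an
    utterance with meaning only to the extent that the utterance helps it
    predict perceptual states relevant to its goals."""

    def __init__(self, goal_states):
        self.goal_states = set(goal_states)  # states the agent intends to reach
        # utterance (sign) -> perceptual state (referent) -> co-occurrence count
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, utterance, next_state):
        """Record that `utterance` was heard just before perceiving `next_state`."""
        self.counts[utterance][next_state] += 1

    def interpretant(self, utterance):
        """The agent's 'meaning' for an utterance: the predictive distribution
        over perceptual states that the utterance induces."""
        states = self.counts[utterance]
        total = sum(states.values())
        return {s: c / total for s, c in states.items()} if total else {}

    def predictive_value(self, utterance):
        """How much the utterance matters to the agent: the probability mass it
        places on goal-relevant states. Zero means the sign stays meaningless."""
        return sum(p for s, p in self.interpretant(utterance).items()
                   if s in self.goal_states)


if __name__ == "__main__":
    agent = ToyAgent(goal_states={"food_nearby"})
    for _ in range(8):
        agent.observe("dinner is ready", "food_nearby")
    agent.observe("dinner is ready", "empty_room")
    agent.observe("blue square", "empty_room")

    print(agent.interpretant("dinner is ready"))      # mostly food_nearby
    print(agent.predictive_value("dinner is ready"))  # high -> imbued with meaning
    print(agent.predictive_value("blue square"))      # zero -> remains noise
```

In this reading, two agents can communicate only if their co-occurrence statistics and goal sets overlap enough that the same sign induces roughly the same interpretant, which is the sharing condition the abstract attributes to a community of agents.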
Related papers
- Situated Instruction Following [87.37244711380411]
We propose situated instruction following, which embraces the inherent underspecification and ambiguity of real-world communication.
The meaning of situated instructions naturally unfolds through the past actions and the expected future behaviors of the human involved.
Our experiments indicate that state-of-the-art Embodied Instruction Following (EIF) models lack holistic understanding of situated human intention.
arXiv Detail & Related papers (2024-07-15T19:32:30Z)
- The Reasons that Agents Act: Intention and Instrumental Goals [24.607124467778036]
There is no universally accepted theory of intention applicable to AI agents.
We operationalise the intention with which an agent acts, relating it to the reasons for which it chooses its decision.
Our definition captures the intuitive notion of intent and satisfies desiderata set-out by past work.
arXiv Detail & Related papers (2024-02-11T14:39:40Z)
- (Ir)rationality in AI: State of the Art, Research Challenges and Open Questions [2.9008806248012333]
The concept of rationality is central to the field of artificial intelligence.
There is no unified definition of what constitutes a rational agent.
We consider irrational behaviours that can prove to be optimal in certain scenarios.
arXiv Detail & Related papers (2023-11-28T19:01:09Z)
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study how language drives and influences social reasoning in a probabilistic goal-inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
arXiv Detail & Related papers (2023-06-25T19:38:01Z)
- On the Computation of Meaning, Language Models and Incomprehensible Horrors [0.0]
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI).
Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
arXiv Detail & Related papers (2023-04-25T09:41:00Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Philosophical Specification of Empathetic Ethical Artificial Intelligence [0.0]
An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, and inferring intent.
We use enactivism, semiotics, perceptual symbol systems and symbol emergence to specify an agent.
It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal.
arXiv Detail & Related papers (2021-07-22T14:37:46Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn the ability to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.