Machine Common Sense
- URL: http://arxiv.org/abs/2006.08409v1
- Date: Mon, 15 Jun 2020 13:59:47 GMT
- Title: Machine Common Sense
- Authors: Alexander Gavrilenko, Katerina Morozova
- Abstract summary: Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine common sense remains a broad, potentially unbounded problem in
artificial intelligence (AI). There is a wide range of strategies that can be
employed to make progress on this challenge. This article addresses aspects of
modeling commonsense reasoning, focusing on the domain of interpersonal
interactions. The basic idea is that there are several types of commonsense
reasoning: one is manifested at the logical level of physical actions, while
the other deals with understanding the essence of human-human interaction.
Existing approaches based on formal logic and artificial neural networks can
model only the first type of common sense. To model the second type, it is
vital to understand the motives and rules of human behavior.
This model is based on real-life heuristics, i.e., rules of thumb developed
through the knowledge and experience of different generations. Such a knowledge
base allows for the development of an expert system with inference and
explanatory mechanisms (commonsense reasoning algorithms and personal models).
The algorithms provide tools for situation analysis, while the personal models
make it possible to identify personality traits. A system designed in this way
should function as amplified intelligence for interactions, including
human-machine interaction.
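The abstract's core mechanism — a knowledge base of interpersonal rules of thumb driving an inference engine, plus a personal model that accumulates evidence about traits — can be sketched minimally as forward-chaining rule application. All rule contents, fact names, and trait labels below are illustrative assumptions, not taken from the paper:

```python
# Sketch of a heuristic expert system: forward-chaining inference over
# interpersonal-interaction facts (situation analysis), with a toy
# "personal model" that tallies personality-trait evidence as rules fire.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Rule:
    conditions: frozenset          # facts that must all hold
    conclusion: str                # fact inferred when they do
    trait_hint: Optional[str] = None  # optional personality evidence

# Hypothetical rules of thumb for interpersonal situations.
RULES = [
    Rule(frozenset({"interrupted_speaker", "raised_voice"}),
         "acted_rudely", trait_hint="impatient"),
    Rule(frozenset({"offered_help", "kept_promise"}),
         "acted_supportively", trait_hint="reliable"),
    Rule(frozenset({"acted_rudely"}), "likely_conflict"),
]

@dataclass
class PersonalModel:
    trait_evidence: dict = field(default_factory=dict)
    def note(self, trait: str) -> None:
        self.trait_evidence[trait] = self.trait_evidence.get(trait, 0) + 1

def infer(facts, rules, model):
    """Forward-chain until no new facts are derived; record trait hints."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conditions <= facts and rule.conclusion not in facts:
                facts.add(rule.conclusion)
                if rule.trait_hint:
                    model.note(rule.trait_hint)
                changed = True
    return facts

model = PersonalModel()
result = infer({"interrupted_speaker", "raised_voice"}, RULES, model)
print(sorted(result))        # derived situation facts
print(model.trait_evidence)  # accumulated trait evidence
```

Here the rule set plays the role of the generational knowledge base, `infer` is the situation-analysis algorithm, and `PersonalModel` stands in for trait identification; an explanatory mechanism could be added by logging which rules fired.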
Related papers
- Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI [20.21053807133341]
We try to provide an account of what constitutes a human-aware AI system.
We see that human-aware AI is a design-oriented paradigm, one that focuses on the need to model the humans it may interact with.
arXiv Detail & Related papers (2024-05-13T14:17:52Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - On Computational Mechanisms for Shared Intentionality, and Speculation on Rationality and Consciousness [0.0]
A singular attribute of humankind is our ability to undertake novel, cooperative behavior, or teamwork.
This requires that we can communicate goals, plans, and ideas between the brains of individuals to create shared intentionality.
I derive necessary characteristics of basic mechanisms to enable shared intentionality between prelinguistic computational agents.
arXiv Detail & Related papers (2023-06-03T21:31:38Z) - Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
Next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
arXiv Detail & Related papers (2022-05-13T07:33:49Z) - Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z) - AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective in clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - How to Answer Why -- Evaluating the Explanations of AI Through Mental Model Analysis [0.0]
A key question for human-centered AI research is how to validly survey users' mental models.
We evaluate whether mental models are suitable as an empirical research method.
We propose an exemplary method to evaluate explainable AI approaches in a human-centered way.
arXiv Detail & Related papers (2020-01-11T17:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.