Existence and perception as the basis of AGI (Artificial General Intelligence)
- URL: http://arxiv.org/abs/2202.03155v1
- Date: Sun, 30 Jan 2022 14:06:43 GMT
- Title: Existence and perception as the basis of AGI (Artificial General Intelligence)
- Authors: Victor V. Senkevich
- Abstract summary: AGI, unlike AI, should operate with meanings; this is what distinguishes it from AI.
For AGI, which emulates human thinking, this ability is crucial.
Numerous attempts to define the concept of "meaning" share one very significant drawback: such definitions are not strict and formalized, so they cannot be programmed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As is known, AGI (Artificial General Intelligence), unlike AI, should
operate with meanings, and that is what distinguishes it from AI. Successful AI
implementations (chess playing, autonomous driving, face recognition, etc.) do
not operate with the meanings of the objects they process in any way and do not
recognize meaning. Nor do they need to. But for AGI, which emulates human
thinking, this ability is crucial. Numerous attempts to define the concept of
"meaning" share one very significant drawback: such definitions are not strict
and formalized, so they cannot be programmed. A procedure for searching for
meaning must rely on a formalized description of the existence of an entity and
the possible forms of its perception. For the practical implementation of AGI,
such "ready-to-code" descriptions must be developed in the context of their use
for processing the related cognitive concepts of "meaning" and "knowledge".
This article attempts to formalize the definitions of these concepts.
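The abstract's call for "ready-to-code" descriptions of "meaning" and "knowledge" can be illustrated with a minimal sketch. The representation below is an assumption of this listing, not the paper's actual formalism: "knowledge" is modeled as a store of concepts with formalized attribute descriptions, and the "meaning" of a perception is the set of known concepts consistent with the perceived attributes. All names (`Concept`, `KnowledgeBase`, `meaning_of`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    """A known concept: a name plus the attributes that formalize its existence."""
    name: str
    attributes: frozenset

@dataclass
class KnowledgeBase:
    """'Knowledge' modeled as a store of formalized concept descriptions."""
    concepts: list = field(default_factory=list)

    def meaning_of(self, perceived: set) -> list:
        """'Meaning' of a perception: the known concepts whose formalized
        descriptions are consistent with the perceived attributes."""
        return [c.name for c in self.concepts if c.attributes <= perceived]

kb = KnowledgeBase([
    Concept("bird", frozenset({"animate", "winged"})),
    Concept("aircraft", frozenset({"inanimate", "winged"})),
])
print(kb.meaning_of({"animate", "winged", "small"}))  # ['bird']
```

The point of the sketch is only that once existence (concept descriptions) and perception (observed attributes) are both formalized, a meaning-search procedure becomes a programmable matching operation.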
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- The Reasons that Agents Act: Intention and Instrumental Goals [24.607124467778036]
There is no universally accepted theory of intention applicable to AI agents.
We operationalise the intention with which an agent acts, relating it to the reasons for which it chooses its decision.
Our definition captures the intuitive notion of intent and satisfies desiderata set out by past work.
arXiv Detail & Related papers (2024-01-23T23:37:51Z)
- Honesty Is the Best Policy: Defining and Mitigating AI Deception [26.267047631872366]
We argue that the meaning of human-level AI or artificial general intelligence (AGI) remains elusive and contested.
We provide a taxonomy of AGI definitions, laying the ground for examining the key social, political, and ethical assumptions they make.
We propose contextual, democratic, and participatory paths to imagining future forms of machine intelligence.
arXiv Detail & Related papers (2023-12-03T11:11:57Z)
- Concepts is All You Need: A More Direct Path to AGI [0.0]
We focus on the problem that agents might deceive in order to achieve their goals.
We introduce a formal definition of deception in structural causal games.
We show, experimentally, that these results can be used to mitigate deception in reinforcement learning agents and language models.
arXiv Detail & Related papers (2023-09-04T14:14:41Z)
- HAKE: A Knowledge Engine Foundation for Human Activity Understanding [65.24064718649046]
Little progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago.
Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/ AGI.
arXiv Detail & Related papers (2022-02-14T16:38:31Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.
We propose a novel paradigm to reformulate this task in two stages: first mapping pixels to an intermediate space spanned by atomic activity primitives, then programming detected primitives with interpretable logic rules to infer semantics.
Our framework, the Human Activity Knowledge Engine (HAKE), exhibits superior generalization ability and performance upon challenging benchmarks.
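The two-stage paradigm described above can be sketched in miniature. This is a hedged illustration of the idea (raw input mapped to atomic activity primitives, then interpretable logic rules programmed over the detected primitives), not the actual HAKE API; all names (`detect_primitives`, `RULES`, `infer_activities`) are assumptions.

```python
def detect_primitives(observations):
    """Stage 1 (stand-in): map raw input to atomic activity primitives.
    A real system would use a learned detector; here the input is assumed
    to already be a list of primitive labels."""
    return set(observations)

# Stage 2 rules: each activity is a conjunction of required primitives.
RULES = {
    "drinking": {"hand-holds-cup", "head-tilted-back"},
    "reading": {"hand-holds-book", "gaze-on-object"},
}

def infer_activities(observations):
    """Program detected primitives with interpretable logic rules
    (here: simple conjunctions) to infer activity semantics."""
    primitives = detect_primitives(observations)
    return sorted(a for a, required in RULES.items() if required <= primitives)

print(infer_activities(["hand-holds-cup", "head-tilted-back", "standing"]))
# ['drinking']
```

The intermediate primitive space is what makes stage 2 interpretable: rules are stated over named primitives rather than pixels.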
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Philosophical Specification of Empathetic Ethical Artificial Intelligence [0.0]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Philosophical Specification of Empathetic Ethical Artificial Intelligence [0.0]
An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, and inferring intent.
We use enactivism, semiotics, perceptual symbol systems and symbol emergence to specify an agent.
It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal.
arXiv Detail & Related papers (2021-07-22T14:37:46Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.