A challenge in A(G)I, cybernetics revived in the Ouroboros Model as one
algorithm for all thinking
- URL: http://arxiv.org/abs/2403.04292v1
- Date: Thu, 7 Mar 2024 07:39:54 GMT
- Title: A challenge in A(G)I, cybernetics revived in the Ouroboros Model as one
algorithm for all thinking
- Authors: Knud Thomsen
- Abstract summary: The aim of the paper is to highlight strengths and deficiencies of current Artificial Intelligence approaches.
It is proposed to take a wide step back and to newly incorporate aspects of cybernetics and analog control processes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A topical challenge for algorithms in general and for automatic image
categorization and generation in particular is presented in the form of a
drawing for AI to understand. In a second vein, AI is challenged to produce
something similar from verbal description. The aim of the paper is to highlight
strengths and deficiencies of current Artificial Intelligence approaches while
coarsely sketching a way forward. A general lack of encompassing
symbol-embedding and (not only) -grounding in some bodily basis is held
responsible for current deficiencies. A concomitant dearth of hierarchical
organization of concepts follows suit. As a remedy for these shortcomings, it
is proposed to take a wide step back and to newly incorporate aspects of
cybernetics and analog control processes. It is claimed that a promising
overarching perspective is provided by the Ouroboros Model with a valid and
versatile algorithmic backbone for general cognition at all accessible levels
of abstraction and capabilities. Reality, rules, truth, and Free Will are all
useful abstractions according to the Ouroboros Model. Logic deduction as well
as intuitive guesses are claimed as produced on the basis of one
compartmentalized memory for schemata and a pattern-matching, i.e., monitoring
process termed consumption analysis. The latter directs attention on short
(attention proper) and also on long time scales (emotional biases). In this
cybernetic approach, discrepancies between expectations and actual activations
(e.g., sensory percepts) drive the general process of cognition and at the same
time steer the storage of new and adapted memory entries. Dedicated structures
in the human brain work in concert according to this scheme.
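The discrepancy-driven loop sketched in the abstract can be illustrated with a toy script. This is a minimal reading of consumption analysis as an expectation-monitoring process, not the paper's implementation: the function names, feature dictionaries, threshold, and adaptation rate are all illustrative assumptions.

```python
# Toy sketch (not from the paper): "consumption analysis" read as a
# discrepancy monitor. A stored schema predicts feature activations;
# incoming activations are compared against that expectation, and the
# mismatch both directs attention and decides whether the schema
# memory is adapted. All names and parameters are assumptions.

def consumption_analysis(expected, actual, threshold=0.2):
    """Return per-feature discrepancies and the salient (attention-grabbing) ones."""
    discrepancies = {
        feature: abs(actual.get(feature, 0.0) - value)
        for feature, value in expected.items()
    }
    salient = {f: d for f, d in discrepancies.items() if d > threshold}
    return discrepancies, salient

def update_schema(schema, actual, salient, rate=0.5):
    """Adapt the stored schema toward the observed activations where the
    mismatch was salient -- a crude stand-in for storing adapted memory entries."""
    return {
        feature: value + rate * (actual.get(feature, value) - value)
        if feature in salient else value
        for feature, value in schema.items()
    }

schema = {"round": 1.0, "red": 0.9, "stem": 0.8}      # expectation: "apple"
percept = {"round": 0.95, "red": 0.1, "stem": 0.75}   # actual activations

disc, salient = consumption_analysis(schema, percept)
# "red" mismatches strongly, so attention is drawn there and only that
# part of the schema is adapted; well-matched features are left alone.
schema = update_schema(schema, percept, salient)
```

On this toy reading, a near-zero overall discrepancy corresponds to a schema being "consumed" without residue, while a large residue redirects attention and triggers memory adaptation.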
Related papers
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is difficult to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z) - Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
A surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models.
We introduce both human and quantitative evaluations to measure algorithm performance.
arXiv Detail & Related papers (2024-03-15T15:49:31Z) - Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential remedy is Neuro-Symbolic Integration (NeSy), in which neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Behave-XAI: Deep Explainable Learning of Behavioral Representational Data [0.0]
We use explainable or human understandable AI for a behavioral mining scenario.
We first formulate the behavioral mining problem in deep convolutional neural network architecture.
Once the model is developed, explanations are presented in front of users.
arXiv Detail & Related papers (2022-12-30T18:08:48Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to encode counterfactual statements.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - AIGenC: An AI generalisation model via creativity [1.933681537640272]
Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC).
It lays down the necessary components to enable artificial agents to learn, use and generate transferable representations.
We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents.
arXiv Detail & Related papers (2022-05-19T17:43:31Z) - Expressive Explanations of DNNs by Combining Concept Analysis with ILP [0.3867363075280543]
We use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN).
We show that our explanation is faithful to the original black-box model.
arXiv Detail & Related papers (2021-05-16T07:00:27Z) - Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and
Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.