AI Centered on Scene Fitting and Dynamic Cognitive Network
- URL: http://arxiv.org/abs/2010.04551v1
- Date: Fri, 2 Oct 2020 06:13:41 GMT
- Title: AI Centered on Scene Fitting and Dynamic Cognitive Network
- Authors: Feng Chen
- Abstract summary: This paper briefly analyzes the advantages and problems of mainstream AI technology and argues that, to achieve stronger Artificial Intelligence, end-to-end function calculation must be changed.
It also discusses a concrete scheme named the Dynamic Cognitive Network model (DC Net).
- Score: 4.228224431041357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper briefly analyzes the advantages and problems of mainstream
AI technology and argues that, to achieve stronger Artificial Intelligence,
end-to-end function calculation must be replaced by a technology system centered
on scene fitting. It also discusses a concrete scheme named the Dynamic
Cognitive Network model (DC Net). Discussions: Knowledge and data across
domains are uniformly represented using a richly connected, heterogeneous
Dynamic Cognitive Network constructed from conceptualized elements;
a two-dimensional, multi-layer network structure is designed to achieve a
unified implementation of core AI processing such as combination and
generalization; the paper analyzes how computer systems differ in their
implementation across scenes such as open domain, closed domain, significant
probability, and non-significant probability, and points out that
implementation in the open-domain, significant-probability scene is the key to
AI; a cognitive probability model combining bidirectional conditional
probability, probability passing and superposition, and probability collapse is
designed; an omnidirectional network matching-growth algorithm system, driven by
target and probability, is designed to integrate parsing,
generating, reasoning, querying, learning, and so on; the principle of cognitive
network optimization is proposed, and the basic framework of the Cognitive Network
Learning algorithm (CNL) is designed, in which structure learning is the primary
method and parameter learning is auxiliary. The logical similarity of
implementation between the DC Net model and human intelligence is analyzed in this
paper.
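The abstract names the ingredients of the cognitive probability model (bidirectional conditional probability, probability passing and superposition, probability collapse) without giving formulas. The following is a minimal illustrative sketch, assuming simple forms for each ingredient: noisy-OR superposition of incoming evidence and a fixed collapse threshold. The class and function names are hypothetical; the paper's actual model is not specified in the abstract.

```python
# Illustrative sketch of the cognitive probability model named in the
# abstract. The noisy-OR superposition rule and the fixed collapse
# threshold are assumptions for illustration, not the paper's formulas.

class ConceptNode:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0  # current belief that this concept applies
        self.edges = []        # (neighbor, conditional probability) pairs

    def connect(self, other, p_fwd, p_bwd):
        # Bidirectional conditional probability: store P(other|self) on
        # this node's edge and P(self|other) on the neighbor's edge.
        self.edges.append((other, p_fwd))
        other.edges.append((self, p_bwd))

def propagate(nodes, steps=3):
    """Probability passing with noisy-OR superposition of incoming evidence."""
    for _ in range(steps):
        incoming = {n: [] for n in nodes}
        for n in nodes:
            for neighbor, p in n.edges:
                incoming[neighbor].append(n.activation * p)
        for n, msgs in incoming.items():
            if msgs:
                miss = 1.0
                for m in msgs:
                    miss *= (1.0 - m)  # superposition: noisy-OR combination
                n.activation = max(n.activation, 1.0 - miss)

def collapse(nodes, threshold=0.9):
    """Probability collapse: commit to concepts whose belief exceeds a threshold."""
    return [n.name for n in nodes if n.activation >= threshold]
```

For example, activating one concept fully and propagating once raises a strongly connected neighbor's belief above the collapse threshold, so both concepts are committed.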
Related papers
- NAR-*ICP: Neural Execution of Classical ICP-based Pointcloud Registration Algorithms [7.542220697870245]
This study explores the intersection of neural networks and classical robotics algorithms through the Neural Algorithmic Reasoning framework.
We propose a Graph Neural Network (GNN)-based learning framework, NAR-*ICP, which learns the intermediate algorithmic steps of classical ICP-based pointcloud registration algorithms.
We evaluate our approach across diverse datasets, from real-world to synthetic, demonstrating its flexibility in handling complex and noisy inputs.
arXiv Detail & Related papers (2024-10-14T19:33:46Z) - Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured, rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z) - Learning and Compositionality: a Unification Attempt via Connectionist
Probabilistic Programming [11.06543250284755]
We consider learning and compositionality as the key mechanisms towards simulating human-like intelligence.
We propose the Connectionist Probabilistic Program (CPP), a framework that connects connectionist structures (for learning) with probabilistic program semantics (for compositionality).
arXiv Detail & Related papers (2022-08-26T17:20:58Z) - The Neural Race Reduction: Dynamics of Abstraction in Gated Networks [12.130628846129973]
We introduce the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics.
We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning.
Our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures.
arXiv Detail & Related papers (2022-07-21T12:01:03Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic
Communication [85.06664206117088]
6G networks must consider the semantics and effectiveness (at the end-user) of data transmission.
NeSy AI is proposed as a pillar for learning causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure which generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z) - Quasi-orthogonality and intrinsic dimensions as measures of learning and
generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature spaces may jointly serve as a network's performance discriminants.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z) - Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z) - Investigating Bi-Level Optimization for Learning and Vision from a
Unified Perspective: A Survey and Beyond [114.39616146985001]
In the machine learning and computer vision fields, despite different motivations and mechanisms, many complex problems contain a series of closely related subproblems.
In this paper, we first uniformly express these complex learning and vision problems from the perspective of Bi-Level Optimization (BLO).
Then we construct a value-function-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies.
arXiv Detail & Related papers (2021-01-27T16:20:23Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
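The idea sketched in this entry, connectivity as learnable, differentiable edge parameters on a complete graph, can be illustrated with a minimal example. The sigmoid gating and mean aggregation below are assumptions for illustration, not the paper's exact formulation, and the class name is hypothetical.

```python
# Sketch of differentiable connectivity learning: a complete graph whose
# edges carry learnable gate parameters. Because the gates are smooth
# functions of the logits, gradients from a task loss can flow into the
# connectivity pattern itself. Gating and aggregation choices here are
# illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedGraphLayer:
    def __init__(self, n_nodes, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # One learnable logit per directed edge of the complete graph.
        self.logits = rng.normal(size=(n_nodes, n_nodes))
        self.w = rng.normal(size=(dim, dim)) / np.sqrt(dim)

    def forward(self, h):
        gates = sigmoid(self.logits)          # soft, differentiable adjacency
        np.fill_diagonal(gates, 0.0)          # no self-loops
        agg = gates @ h / max(1, h.shape[0])  # gated message aggregation
        return np.maximum(0.0, agg @ self.w)  # ReLU feature transform
```

After training, edges whose learned gates are near zero can be pruned, recovering a discrete connectivity structure from the continuous relaxation.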
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the parameters of the learners and of a First-Order-Logic-based reasoner.
A Relational Neural Machine is able to recover both classical learning results, in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.