A Framework of Defining, Modeling, and Analyzing Cognition Mechanisms
- URL: http://arxiv.org/abs/2311.10104v1
- Date: Mon, 13 Nov 2023 12:31:46 GMT
- Title: A Framework of Defining, Modeling, and Analyzing Cognition Mechanisms
- Authors: Amir Fayezioghani
- Abstract summary: I propose a framework of defining, modeling, and analyzing cognition mechanisms.
I argue that the cognition base has the features of the cognition self of humans.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cognition is a core part of and a common topic among philosophy of mind,
psychology, neuroscience, AI, and cognitive science. Through a mechanistic
lens, I propose a framework of defining, modeling, and analyzing cognition
mechanisms. Firstly, appropriate terms are introduced and used in explanations
related to the framework and within the definition of a mechanism. I implicitly
contend that this terminology essentially characterizes a conceptual world
required for discussions in this paper. Secondly, a mathematical model of a
mechanism based on directed graphs is proposed. Thirdly, the definition of a
base necessary for a mechanism to be classified as a cognition mechanism is
proposed. I argue that the cognition base has the features of the cognition
self of humans. Fourthly, three ways to mechanistically look at mechanisms are
defined and specific instances of them are suggested. Fifthly, standards for
visualization and presentation of mechanisms, cognition mechanisms, and the
instances to mechanistically look at them are suggested and used to analyze
cognition mechanisms through appropriate examples. Finally, the features of
this paper are discussed and prospects of further development of the proposed
framework are briefly expressed.
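The abstract mentions, but does not spell out, the directed-graph model of a mechanism (the second step of the framework). As a rough illustration only, the sketch below represents a mechanism as a directed graph whose nodes stand for its parts and whose directed edges stand for interactions between them; the class, method, and node names are hypothetical, and the paper's formal definition is certainly richer than this.

```python
from dataclasses import dataclass, field


@dataclass
class Mechanism:
    """Minimal directed-graph sketch of a mechanism: nodes are the
    mechanism's parts (entities, states, or activities) and directed
    edges are interactions between them. Illustrative only; not the
    paper's formal definition."""
    nodes: set[str] = field(default_factory=set)
    edges: dict[str, set[str]] = field(default_factory=dict)  # adjacency list

    def add_interaction(self, source: str, target: str) -> None:
        # Register both endpoints and a directed edge source -> target.
        self.nodes.update((source, target))
        self.edges.setdefault(source, set()).add(target)

    def successors(self, node: str) -> set[str]:
        # Parts directly influenced by the given node.
        return self.edges.get(node, set())


# Toy example: a perception-to-action pathway as a directed graph.
m = Mechanism()
m.add_interaction("stimulus", "perception")
m.add_interaction("perception", "memory")
m.add_interaction("memory", "decision")
m.add_interaction("decision", "action")
print(m.successors("perception"))  # {'memory'}
```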
Related papers
- Exploring Conceptual Modeling Metaphysics: Existence Containers, Leibniz's Monads and Avicenna's Essence [0.0]
Requirement specifications in software engineering involve developing a conceptual model of a target domain.
Much metaphysical work might best be understood as a model-building process.
The focus is on thimacs (things/machines) as a single category of thinging machine (TM) modeling in the context of a two-phase world of staticity and dynamics.
arXiv Detail & Related papers (2024-02-20T22:25:20Z)
- Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals [82.68757839524677]
Interpretability research aims to bridge the gap between empirical success and our scientific understanding of large language models (LLMs).
We propose a formulation of competition of mechanisms, which focuses on the interplay of multiple mechanisms instead of individual mechanisms.
Our findings show traces of the mechanisms and their competition across various model components and reveal attention positions that effectively control the strength of certain mechanisms.
arXiv Detail & Related papers (2024-02-18T17:26:51Z)
- Binding Dynamics in Rotating Features [72.80071820194273]
We propose an alternative "cosine binding" mechanism, which explicitly computes the alignment between features and adjusts weights accordingly.
This allows us to draw direct connections to self-attention and biological neural processes, and to shed light on the fundamental dynamics for object-centric representations to emerge in Rotating Features.
arXiv Detail & Related papers (2024-02-08T12:31:08Z)
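The "cosine binding" summary in the entry above describes computing the alignment between features and adjusting weights accordingly. The following is only a minimal NumPy sketch of that general idea, with hypothetical function and variable names; it is not the formulation used in the Rotating Features paper.

```python
import numpy as np


def cosine_binding(output_feature: np.ndarray,
                   input_features: np.ndarray,
                   weights: np.ndarray) -> np.ndarray:
    """Gate each weight by the cosine alignment between its input feature
    vector and the output feature vector (illustrative names only)."""
    # Normalize feature vectors to unit length (small epsilon avoids division by zero).
    out_dir = output_feature / (np.linalg.norm(output_feature) + 1e-8)
    in_dirs = input_features / (np.linalg.norm(input_features, axis=-1, keepdims=True) + 1e-8)
    # Cosine alignment in [-1, 1] for each input feature.
    alignment = in_dirs @ out_dir
    # Keep only positive alignment and rescale the weights with it.
    return weights * np.clip(alignment, 0.0, None)


# Toy usage: three 4-dimensional input features, one output feature.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(3, 4))
output = rng.normal(size=4)
w = np.ones(3)
print(cosine_binding(output, inputs, w))
```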
- On Computational Mechanisms for Shared Intentionality, and Speculation on Rationality and Consciousness [0.0]
A singular attribute of humankind is our ability to undertake novel, cooperative behavior, or teamwork.
This requires that we can communicate goals, plans, and ideas between the brains of individuals to create shared intentionality.
I derive necessary characteristics of basic mechanisms to enable shared intentionality between prelinguistic computational agents.
arXiv Detail & Related papers (2023-06-03T21:31:38Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability [30.76910454663951]
Causal abstraction provides a theoretical foundation for mechanistic interpretability.
Our contributions include generalizing the theory of causal abstraction from mechanism replacement to arbitrary mechanism transformation.
arXiv Detail & Related papers (2023-01-11T20:42:41Z)
- Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI [2.7920304852537536]
This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
Through a mixture of quantitative and qualitative methodologies, the presented study derives several main conclusions.
arXiv Detail & Related papers (2022-05-03T22:31:42Z)
- Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [79.4957965474334]
A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties.
This paper asks, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?"
We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms.
arXiv Detail & Related papers (2021-10-29T14:04:08Z)
- Quantum realism: axiomatization and quantification [77.34726150561087]
We build an axiomatization for quantum realism -- a notion of realism compatible with quantum theory.
We explicitly construct some classes of entropic quantifiers that are shown to satisfy almost all of the proposed axioms.
arXiv Detail & Related papers (2021-10-10T18:08:42Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Expressiveness and machine processability of Knowledge Organization Systems (KOS): An analysis of concepts and relations [0.0]
Both the expressiveness and the machine processability of a Knowledge Organization System are largely determined by its structural rules.
Ontologies explicitly define diverse types of relations, and are by their nature machine-processable.
arXiv Detail & Related papers (2020-03-11T12:35:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.