A Categorical Framework of General Intelligence
- URL: http://arxiv.org/abs/2303.04571v2
- Date: Wed, 3 May 2023 12:50:14 GMT
- Title: A Categorical Framework of General Intelligence
- Authors: Yang Yuan
- Abstract summary: Since Alan Turing asked this question in 1950, no one has been able to give a direct answer.
We introduce a categorical framework towards this goal, with two main results.
- Score: 12.134564449202708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can machines think? Since Alan Turing asked this question in 1950, no one
has been able to give a direct answer, due to the lack of solid mathematical foundations
for general intelligence. In this paper, we introduce a categorical framework
towards this goal, with two main results. First, we investigate object
representation through presheaves, introducing the notion of self-state
awareness as a categorical analogue to self-consciousness, along with
corresponding algorithms for its enforcement and evaluation. Second, we
extend object representation to scenario representation using diagrams and
limits, which then become building blocks for mathematical modeling,
interpretability and AI safety. As an ancillary result, our framework
introduces various categorical invariance properties that can serve as the
alignment signals for model training.
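The two categorical constructions named in the abstract can be sketched in standard notation; the choice of a Set-valued diagram below is an illustrative assumption, not the paper's own definition:

```latex
% Object representation: a presheaf on a category C assigns data to each
% object contravariantly, i.e. it is a functor
F \colon \mathcal{C}^{\mathrm{op}} \longrightarrow \mathbf{Set}

% Scenario representation: a scenario is modeled as a diagram D : J -> Set;
% its limit is the set of families compatible with every arrow of J:
\lim D \;=\; \left\{ (x_j)_{j \in \mathrm{Ob}(J)} \;\middle|\;
  D(f)(x_j) = x_{j'} \ \text{for all}\ f \colon j \to j' \ \text{in}\ J \right\}
```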
Related papers
- Explainable Moral Values: a neuro-symbolic approach to value classification [1.4186974630564675]
This work explores the integration of ontology-based reasoning and Machine Learning techniques for explainable value classification.
By relying on an ontological formalization of moral values as in the Moral Foundations Theory, the sandra neuro-symbolic reasoner is used to infer values that are satisfied by a certain sentence.
We show that relying solely on the reasoner's inference yields explainable classification comparable to that of other, more complex approaches.
arXiv Detail & Related papers (2024-10-16T14:53:13Z)
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
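A toy, hypothetical illustration of the idea (not the paper's actual system): a sentence's meaning is a context-sensitive translation into a small probabilistic program, here represented as a thunk that samples a coin-flip outcome. The sentences and `prob_heads` helper are invented for this sketch.

```python
import random

# Hypothetical "translations": each sentence maps to a probabilistic
# program (a zero-argument function that samples an outcome).
TRANSLATIONS = {
    "the coin is fair": lambda: random.random() < 0.5,
    "the coin is biased towards heads": lambda: random.random() < 0.8,
}

def prob_heads(sentence, n=20_000):
    """Estimate P(heads) under the program the sentence translates to."""
    program = TRANSLATIONS[sentence]
    return sum(program() for _ in range(n)) / n
```

Inference then happens in the probabilistic program, not in the text: two paraphrases that translate to the same program get the same downstream predictions.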
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- On the Power of Foundation Models [9.132197704350492]
We show that category theory provides powerful machinery to answer this question.
A foundation model with the minimum required power (up to symmetry) can theoretically solve downstream tasks for the category defined by pretext task.
Our final result can be seen as a new type of generalization theorem, showing that the foundation model can generate unseen objects from the target category.
arXiv Detail & Related papers (2022-11-29T16:10:11Z)
- What does it mean to represent? Mental representations as falsifiable memory patterns [8.430851504111585]
We argue that causal and teleological approaches fail to provide a satisfactory account of representation.
We sketch an alternative according to which representations correspond to inferred latent structures in the world.
These structures are assumed to have certain properties objectively, which allows for planning, prediction, and detection of unexpected events.
arXiv Detail & Related papers (2022-03-06T12:52:42Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind: Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning [91.58529629419135]
We consider how to characterise visual groupings discovered automatically by deep neural networks.
We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings.
arXiv Detail & Related papers (2020-10-27T18:41:49Z)
- Relational reasoning and generalization using non-symbolic neural networks [66.07793171648161]
Previous work suggested that neural networks were not suitable models of human relational reasoning because they could not represent mathematical identity, the most basic form of equality.
We find that neural networks are able to learn (1) basic equality (mathematical identity), (2) sequential equality problems (learning ABA-patterned sequences) with only positive training instances, and (3) a complex, hierarchical equality problem with only basic equality training instances.
These results suggest that essential aspects of symbolic reasoning can emerge from data-driven, non-symbolic learning processes.
arXiv Detail & Related papers (2020-06-14T18:25:42Z)
- A Comparison of Self-Play Algorithms Under a Generalized Framework [4.339542790745868]
The notion of self-play, although often cited in multiagent Reinforcement Learning, has never been grounded in a formal model.
We present a formalized framework, with clearly defined assumptions, which encapsulates the meaning of self-play.
We measure how well a subset of the captured self-play methods approximate this solution when paired with the popular PPO algorithm.
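A rough, hypothetical sketch of such a generalized self-play loop (all names are illustrative, not the paper's API): an archive of frozen past policies, a curator rule that picks the training opponent, and an update step standing in for PPO. Policies are plain integers here so the skeleton stays runnable.

```python
class SelfPlay:
    """Minimal self-play skeleton: archive, opponent selection, update."""

    def __init__(self, initial_policy):
        self.menagerie = [initial_policy]   # archive of frozen opponents
        self.learner = initial_policy

    def curator(self):
        # Simplest rule: always train against the latest frozen policy
        # ("naive self-play"); other rules sample from the whole menagerie.
        return self.menagerie[-1]

    def train_step(self, opponent):
        # Stand-in for a PPO update against `opponent`; here each step
        # just produces a "stronger" (incremented) policy.
        return self.learner + 1

    def iterate(self, n):
        for _ in range(n):
            self.learner = self.train_step(self.curator())
            self.menagerie.append(self.learner)
        return self.learner
```

Concrete self-play variants then differ only in `curator` (e.g. uniform sampling over the menagerie versus always picking the latest policy), which is what makes a unified comparison possible.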
arXiv Detail & Related papers (2020-06-08T11:02:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.