The Function-Representation Model of Computation
- URL: http://arxiv.org/abs/2410.07928v3
- Date: Wed, 6 Nov 2024 17:45:39 GMT
- Title: The Function-Representation Model of Computation
- Authors: Alfredo Ibias, Hector Antona, Guillem Ramirez-Miranda, Enric Guinovart, Eduard Alarcon
- Abstract summary: We propose a novel model of computation, one where memory and program are merged: the Function-Representation.
This model of computation involves defining a generic Function-Representation and instantiating multiple instances of it.
We also explore the kinds of functions a Function-Representation can implement, and present different ways to organise multiple instances of a Function-Representation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cognitive Architectures are at the forefront of research into developing artificial cognition. However, they approach the problem with a model of computation that separates memory and program. This model of computation poses a fundamental problem: the knowledge retrieval heuristic. In this paper we propose to solve this problem with a novel model of computation, one where memory and program are merged: the Function-Representation. This model of computation involves defining a generic Function-Representation and instantiating multiple instances of it. In this paper we explore the potential of this novel model of computation through mathematical definitions and proofs. We also explore the kinds of functions a Function-Representation can implement, and present different ways to organise multiple instances of a Function-Representation.
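The paper develops the Function-Representation through mathematical definitions and proofs; as a rough illustrative sketch only (not the authors' formalism), one can picture a unit whose stored state simultaneously serves as its memory and as the parameters of the function it computes. The class name and operations below are hypothetical:

```python
# Illustrative sketch: a unit whose stored state is at once its "memory"
# (representation) and its "program" (the function it applies).
# This is a toy stand-in, not the paper's formal definition.

class FunctionRepresentation:
    def __init__(self, state):
        # The state is the representation: there is no separate program.
        self.state = list(state)

    def __call__(self, inputs):
        # The same state defines the function; here, a simple weighted sum.
        return sum(w * x for w, x in zip(self.state, inputs))

    def update(self, inputs, target, lr=0.1):
        # Updating memory *is* updating the program, since they are one object.
        error = target - self(inputs)
        self.state = [w + lr * error * x for w, x in zip(self.state, inputs)]

# Multiple instances of the same generic definition, each with its own state:
units = [FunctionRepresentation([0.0, 0.0]), FunctionRepresentation([1.0, -1.0])]
outputs = [u([2.0, 3.0]) for u in units]  # [0.0, -1.0]
```

The point of the sketch is the merger: there is no program text separate from the data it operates on, so "retrieving knowledge" and "selecting a computation" collapse into the same act.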
Related papers
- Solving Recurrence Relations using Machine Learning, with Application to Cost Analysis [0.0]
We develop a novel, general approach for solving arbitrary, constrained recurrence relations.
Our approach can find closed-form solutions, in reasonable time, for classes of recurrences that existing solvers cannot handle.
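For readers unfamiliar with the task, here is a toy example (ours, not the paper's) of what a closed-form solution to a recurrence looks like and how a candidate solution can be checked numerically:

```python
# Toy example: the recurrence T(n) = 2*T(n-1) + 1 with T(0) = 0
# has the closed form T(n) = 2**n - 1.

def t_recursive(n):
    # Direct evaluation of the recurrence.
    return 0 if n == 0 else 2 * t_recursive(n - 1) + 1

def t_closed(n):
    # Candidate closed-form solution, evaluable in O(1) arithmetic steps.
    return 2 ** n - 1

# A solver of the kind described would emit something like t_closed;
# checking it against the recurrence on sample points is a cheap sanity test.
assert all(t_recursive(n) == t_closed(n) for n in range(20))
```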
arXiv Detail & Related papers (2023-08-30T08:55:36Z)
- Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? [75.79305790453654]
Coaxing desired behaviors out of pretrained models, while avoiding undesirable ones, has redefined NLP.
We argue for a systematic effort to decompose language model behavior into categories that explain cross-task performance.
arXiv Detail & Related papers (2023-07-31T22:58:41Z)
- Can Transformers Learn to Solve Problems Recursively? [9.5623664764386]
This paper examines the behavior of neural networks learning algorithms relevant to programs and formal verification.
By reconstructing these algorithms, we are able to correctly predict 91 percent of failure cases for one of the approximated functions.
arXiv Detail & Related papers (2023-05-24T04:08:37Z)
- A simple probabilistic neural network for machine understanding [0.0]
We discuss probabilistic neural networks with a fixed internal representation as models for machine understanding.
We derive the internal representation by requiring that it satisfies the principles of maximal relevance and of maximal ignorance about how different features are combined.
We argue that learning machines with this architecture enjoy a number of interesting properties, like the continuity of the representation with respect to changes in parameters and data.
arXiv Detail & Related papers (2022-10-24T13:00:15Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Recall and Learn: A Memory-augmented Solver for Math Word Problems [15.550156292329229]
We propose a novel human-like analogical learning method in a recall and learn manner.
Our proposed framework is composed of modules of memory, representation, analogy, and reasoning.
arXiv Detail & Related papers (2021-09-27T14:59:08Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand a model's properties and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
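The summary describes intensities built as sums of basis functions. The following is only a loose sketch of that idea, with hand-picked fixed parameters standing in for the RNN outputs and an exponential basis chosen for illustration (not the paper's actual model):

```python
import math

# Sketch: an intensity formed as a positive transfer of a sum of exponential
# basis terms evaluated at the time elapsed since the last event.
# Parameters are fixed here for illustration; in a model like UNIPoint an
# RNN would produce fresh parameters after each event.

def softplus(x):
    # Keeps the intensity strictly positive, as a valid intensity must be.
    return math.log1p(math.exp(x))

def intensity(dt, params):
    # params: list of (a_k, b_k) pairs defining basis terms a_k * exp(-b_k * dt)
    return softplus(sum(a * math.exp(-b * dt) for a, b in params))

params = [(1.0, 0.5), (0.5, 2.0)]
lam = intensity(0.3, params)  # intensity 0.3 time units after the last event
```

The design point is that a sum of simple basis functions, each cheap to evaluate, can approximate a wide range of intensity shapes while the softplus guarantees validity.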
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Genome as a functional program [0.0]
We introduce a model of learning for some class of functional programs.
We consider Darwinian evolution as a learning problem for functional programming.
arXiv Detail & Related papers (2020-06-05T07:58:50Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.