Towards a Formal Theory of the Need for Competence via Computational Intrinsic Motivation
- URL: http://arxiv.org/abs/2502.07423v1
- Date: Tue, 11 Feb 2025 10:03:40 GMT
- Title: Towards a Formal Theory of the Need for Competence via Computational Intrinsic Motivation
- Authors: Erik M. Lintunen, Nadia M. Ady, Sebastian Deterding, Christian Guckelsberger
- Abstract summary: We focus on the "need for competence", postulated as a key basic human need within Self-Determination Theory (SDT).
Recent research has identified multiple, ambiguously defined facets of competence in SDT.
We propose that these inconsistencies may be alleviated by drawing on computational models from the field of reinforcement learning (RL).
Our work can support a cycle of theory development by inspiring new computational models formalising aspects of the theory, which can then be tested empirically to refine the theory.
- Score: 6.593505830504729
- License:
- Abstract: Computational models offer powerful tools for formalising psychological theories, making them both testable and applicable in digital contexts. However, they remain little used in the study of motivation within psychology. We focus on the "need for competence", postulated as a key basic human need within Self-Determination Theory (SDT) -- arguably the most influential psychological framework for studying intrinsic motivation (IM). The need for competence is treated as a single construct across SDT texts. Yet, recent research has identified multiple, ambiguously defined facets of competence in SDT. We propose that these inconsistencies may be alleviated by drawing on computational models from the field of artificial intelligence, specifically from the domain of reinforcement learning (RL). By aligning the aforementioned facets of competence -- effectance, skill use, task performance, and capacity growth -- with existing RL formalisms, we provide a foundation for advancing competence-related theory in SDT and motivational psychology more broadly. The formalisms reveal underlying preconditions that SDT fails to make explicit, demonstrating how computational models can improve our understanding of IM. Additionally, our work can support a cycle of theory development by inspiring new computational models formalising aspects of the theory, which can then be tested empirically to refine the theory. While our research lays a promising foundation, empirical studies of these models in both humans and machines are needed, inviting collaboration across disciplines.
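To make the proposed alignment concrete, the sketch below illustrates one plausible (and entirely hypothetical) way each competence facet could be expressed as an intrinsic reward signal in an RL loop; the paper maps the facets onto existing RL formalisms rather than prescribing these particular formulas, and all names here are illustrative.

```python
import numpy as np

# Illustrative sketch (not from the paper): one plausible concretisation of
# each competence facet as an RL-style intrinsic reward signal.

def effectance_reward(actual_next_state, no_action_next_state):
    """Effectance: the agent's action made a detectable difference to the
    environment, relative to a counterfactual 'do nothing' baseline."""
    return float(np.linalg.norm(np.asarray(actual_next_state)
                                - np.asarray(no_action_next_state)))

def skill_use_reward(skill_logprob, default_logprob):
    """Skill use: the chosen action is better explained by a learned skill
    than by an uninformed default policy (a log-ratio signal)."""
    return float(skill_logprob - default_logprob)

def task_performance_reward(episode_return, competence_threshold):
    """Task performance: return relative to a competence criterion."""
    return float(episode_return - competence_threshold)

def capacity_growth_reward(return_history, window=10):
    """Capacity growth: learning progress, i.e. recent performance minus
    earlier performance (cf. learning-progress intrinsic motivation)."""
    if len(return_history) < 2 * window:
        return 0.0
    recent = float(np.mean(return_history[-window:]))
    earlier = float(np.mean(return_history[-2 * window:-window]))
    return recent - earlier
```

Formalisations of this kind make each facet's preconditions explicit (for example, capacity growth presupposes a performance history), which is the sort of hidden assumption the paper argues computational models can surface.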
Related papers
- The potential -- and the pitfalls -- of using pre-trained language models as cognitive science theories [2.6549754445378344]
We discuss challenges to the use of PLMs as cognitive science theories.
We review assumptions used by researchers to map measures of PLM performance to measures of human performance.
We end by enumerating criteria for using PLMs as credible accounts of cognition and cognitive development.
arXiv Detail & Related papers (2025-01-22T05:24:23Z)
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
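As background on that objective: the mutual information between skills and reached states is commonly lower-bounded with a learned skill discriminator, as in the hypothetical sketch below (standard variational skill-discovery background, not the LPE algorithm itself; all names are illustrative).

```python
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the classic variational lower bound on I(skill; reached state),
# the objective LPE is described as replacing. Not LPE itself.

NUM_SKILLS, STATE_DIM = 16, 8

# q(skill | state): learned discriminator inferring which skill produced a state.
discriminator = nn.Linear(STATE_DIM, NUM_SKILLS)

def intrinsic_reward(state, skill_id, log_p_skill):
    """Reward = log q(skill | state) - log p(skill); maximising its expectation
    maximises a lower bound on I(skill; state). Under a uniform skill prior,
    log_p_skill = -log(NUM_SKILLS)."""
    log_q = F.log_softmax(discriminator(state), dim=-1)[skill_id]
    return (log_q - log_p_skill).item()

def discriminator_loss(states, skill_ids):
    """Train q(skill | state) to classify which skill visited each state."""
    return F.cross_entropy(discriminator(states), skill_ids)
```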
arXiv Detail & Related papers (2024-10-15T00:41:18Z)
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- ConcEPT: Concept-Enhanced Pre-Training for Language Models [57.778895980999124]
ConcEPT aims to infuse conceptual knowledge into pre-trained language models.
It exploits external entity concept prediction to predict the concepts of entities mentioned in the pre-training contexts.
Results of experiments show that ConcEPT gains improved conceptual knowledge with concept-enhanced pre-training.
arXiv Detail & Related papers (2024-01-11T05:05:01Z)
- A Survey of Reasoning with Foundation Models [235.7288855108172]
Reasoning plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation.
We introduce seminal foundation models proposed or adaptable for reasoning.
We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models.
arXiv Detail & Related papers (2023-12-17T15:16:13Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Decomposed Inductive Procedure Learning [2.421459418045937]
We formalize a theory of Decomposed Inductive Procedure Learning (DIPL)
DIPL outlines how different forms of inductive symbolic learning can be used to build agents that learn educationally relevant tasks.
We demonstrate that DIPL enables the creation of agents that exhibit human-like learning performance.
arXiv Detail & Related papers (2021-10-25T19:36:03Z)
- Modelling Behaviour Change using Cognitive Agent Simulations [0.0]
This paper presents work-in-progress research to apply selected behaviour change theories to simulated agents.
The research is focusing on complex agent architectures required for self-determined goal achievement in adverse circumstances.
arXiv Detail & Related papers (2021-10-16T19:19:08Z)
- Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice [3.8902657229395907]
Necessity and sufficiency are the building blocks of all successful explanations.
Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence.
We establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework.
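As generic background on those notions (not the paper's exact formalism), the necessity and sufficiency of a feature subset for a model's prediction can be estimated by perturbation; the sketch below is a hypothetical illustration, with all names invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sufficiency(model, x, S, reference, n=1000):
    """Estimate P(prediction unchanged) when only features in S are kept
    from x and all other features are resampled from reference data."""
    y = model(x)
    samples = reference[rng.integers(len(reference), size=n)].copy()
    samples[:, S] = x[S]                    # hold the explained subset fixed
    return float(np.mean([model(s) == y for s in samples]))

def necessity(model, x, S, reference, n=1000):
    """Estimate P(prediction changes) when features in S are resampled
    and the rest of x is kept fixed."""
    y = model(x)
    samples = np.tile(x, (n, 1)).astype(float)
    samples[:, S] = reference[rng.integers(len(reference), size=n)][:, S]
    return float(np.mean([model(s) != y for s in samples]))
```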
arXiv Detail & Related papers (2021-03-27T01:58:53Z)
- Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as language of thought hypothesis (LOTH), script theory, and Piaget's cognitive development theory provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.