Semantic knowledge guides innovation and drives cultural evolution
- URL: http://arxiv.org/abs/2510.12837v2
- Date: Fri, 24 Oct 2025 12:35:55 GMT
- Title: Semantic knowledge guides innovation and drives cultural evolution
- Authors: Anil Yaman, Shen Tian, Björn Lindström
- Abstract summary: Cultural evolution allows ideas and technology to build over generations, a process reaching its most complex and open-ended form in humans. While social learning enables the transmission of such innovations, the cognitive processes that generate innovations remain unclear. We propose that semantic knowledge, the associations linking concepts to their properties and functions, guides human innovation and drives cumulative culture.
- Score: 0.9176056742068814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cultural evolution allows ideas and technology to build over generations, a process reaching its most complex and open-ended form in humans. While social learning enables the transmission of such innovations, the cognitive processes that generate innovations remain unclear. We propose that semantic knowledge, the associations linking concepts to their properties and functions, guides human innovation and drives cumulative culture. To test this, we combined an agent-based model, which examines how semantic knowledge shapes cultural evolutionary dynamics, with a large-scale behavioural experiment (N = 1,243) testing its role in human innovation. Semantic knowledge directed exploration toward meaningful solutions and interacted synergistically with social learning to amplify innovation and cultural evolution. Participants lacking access to semantic knowledge performed no better than chance, even when social information was available, and relied on shallow exploration strategies for innovation. Together, these findings indicate that semantic knowledge is a key cognitive process enabling human cumulative culture.
Related papers
- The Imperfect Learner: Incorporating Developmental Trajectories in Memory-based Student Simulation [55.722188569369656]
This paper introduces a novel framework for memory-based student simulation. It incorporates developmental trajectories through a hierarchical memory mechanism with structured knowledge representation. In practice, we implement a curriculum-aligned simulator grounded on the Next Generation Science Standards.
arXiv Detail & Related papers (2025-11-08T08:05:43Z) - Experimental Evidence for the Propagation and Preservation of Machine Discoveries in Human Populations [0.6712949342699673]
Intelligent machines with superhuman capabilities have the potential to uncover problem-solving strategies beyond human discovery. We identify three key conditions for machines to fundamentally influence human problem-solving. We demonstrate that when these conditions are met, machine-discovered strategies can be transmitted, understood, and preserved by human populations.
arXiv Detail & Related papers (2025-06-21T15:38:26Z) - Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study [50.065744358362345]
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning. Yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored.
arXiv Detail & Related papers (2025-06-16T13:24:50Z) - Truly Self-Improving Agents Require Intrinsic Metacognitive Learning [59.60803539959191]
Self-improving agents aim to continuously acquire new capabilities with minimal supervision. Current approaches face two key limitations: their self-improvement processes are often rigid and fail to generalize across task domains, and they struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent's intrinsic ability to actively evaluate, reflect on, and adapt its own learning processes.
arXiv Detail & Related papers (2025-06-05T14:53:35Z) - Subjective Perspectives within Learned Representations Predict High-Impact Innovation [5.849186636495808]
We show that measured subjective perspectives predict which ideas individuals and groups will creatively attend to and successfully combine in the future. We analyze a natural experiment and simulate creative collaborations between AI agents designed with various perspective and background diversity.
arXiv Detail & Related papers (2025-06-05T04:18:53Z) - How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training [92.88889953768455]
A critical gap remains in understanding how Large Language Models (LLMs) internalize new knowledge. We identify computational subgraphs that facilitate knowledge storage and processing.
arXiv Detail & Related papers (2025-02-16T16:55:43Z) - Collective Innovation in Groups of Large Language Models [28.486116730339972]
We study Large Language Models (LLMs) that play Little Alchemy 2, a creative video game originally developed for humans.
We study groups of LLMs that share information related to their behaviour and focus on the effect of social connectivity on collective performance.
Our work reveals opportunities and challenges for future studies of collective innovation that are becoming increasingly relevant as Generative Artificial Intelligence algorithms and humans innovate alongside each other.
arXiv Detail & Related papers (2024-07-07T13:59:46Z) - Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning [5.930456214333413]
We show that training setups which balance social learning with independent learning give rise to cultural accumulation.
In-context and in-weights cultural accumulation can be interpreted as analogous to knowledge and skill accumulation, respectively.
This work is the first to present general models that achieve emergent cultural accumulation in reinforcement learning.
arXiv Detail & Related papers (2024-06-01T10:33:32Z) - Cultural evolution in populations of Large Language Models [15.012901178522874]
We propose that leveraging the capacity of Large Language Models to mimic human behavior may help address this gap.
As artificial agents will increasingly participate in the evolution of culture, it is crucial to better understand the dynamics of machine-generated cultural evolution.
We present a framework for simulating cultural evolution in populations of LLMs, allowing the manipulation of variables known to be important in cultural evolution.
arXiv Detail & Related papers (2024-03-13T18:11:17Z) - Towards Automated Knowledge Integration From Human-Interpretable Representations [55.2480439325792]
We introduce and motivate theoretically the principles of informed meta-learning enabling automated and controllable inductive bias selection. We empirically demonstrate the potential benefits and limitations of informed meta-learning in improving data efficiency and generalisation.
arXiv Detail & Related papers (2024-02-25T15:08:37Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that instantiates the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We theoretically show that the proposed learning paradigm makes the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Vygotskian Autotelic Artificial Intelligence: Language and Culture Internalization for Human-Like AI [16.487953861478054]
This perspective paper proposes a new AI paradigm in the quest for artificial lifelong skill discovery.
We focus on language especially, and how its structure and content may support the development of new cognitive functions in artificial agents.
It justifies the approach by uncovering examples of new artificial cognitive functions emerging from interactions between language and embodiment.
arXiv Detail & Related papers (2022-06-02T16:35:41Z) - Learning Robust Real-Time Cultural Transmission without Human Data [82.05222093231566]
We provide a method for generating zero-shot, high-recall cultural transmission in artificially intelligent agents.
Our agents succeed at real-time cultural transmission from humans in novel contexts without using any pre-collected human data.
This paves the way for cultural evolution as an algorithm for developing artificial general intelligence.
arXiv Detail & Related papers (2022-03-01T19:32:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.