Collective Innovation in Groups of Large Language Models
- URL: http://arxiv.org/abs/2407.05377v1
- Date: Sun, 7 Jul 2024 13:59:46 GMT
- Title: Collective Innovation in Groups of Large Language Models
- Authors: Eleni Nisioti, Sebastian Risi, Ida Momennejad, Pierre-Yves Oudeyer, Clément Moulin-Frier
- Abstract summary: We study Large Language Models (LLMs) that play Little Alchemy 2, a creative video game originally developed for humans.
We study groups of LLMs that share information related to their behaviour and focus on the effect of social connectivity on collective performance.
Our work reveals opportunities and challenges for future studies of collective innovation that are becoming increasingly relevant as Generative Artificial Intelligence algorithms and humans innovate alongside each other.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human culture relies on collective innovation: our ability to continuously explore how existing elements in our environment can be combined to create new ones. Language is hypothesized to play a key role in human culture, driving individual cognitive capacities and shaping communication. Yet the majority of models of collective innovation assign no cognitive capacities or language abilities to agents. Here, we contribute a computational study of collective innovation where agents are Large Language Models (LLMs) that play Little Alchemy 2, a creative video game originally developed for humans that, as we argue, captures useful aspects of innovation landscapes not present in previous test-beds. We, first, study an LLM in isolation and discover that it exhibits both useful skills and crucial limitations. We, then, study groups of LLMs that share information related to their behaviour and focus on the effect of social connectivity on collective performance. In agreement with previous human and computational studies, we observe that groups with dynamic connectivity out-compete fully-connected groups. Our work reveals opportunities and challenges for future studies of collective innovation that are becoming increasingly relevant as Generative Artificial Intelligence algorithms and humans innovate alongside each other.
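The crafting mechanic the abstract describes can be sketched in a few lines: agents hold an inventory of elements and try pairwise combinations, some of which unlock new elements. The recipes below are illustrative placeholders, not the game's actual table (Little Alchemy 2 has 720 elements).

```python
# Minimal sketch of a Little Alchemy-style innovation landscape.
# Recipes here are hypothetical examples, not the real game's data.

RECIPES = {
    frozenset({"water", "fire"}): "steam",
    frozenset({"earth", "water"}): "mud",
    frozenset({"air", "fire"}): "energy",
    frozenset({"mud", "fire"}): "brick",
}

def combine(inventory, a, b):
    """Try to combine two owned elements; add the result if a recipe exists."""
    if a not in inventory or b not in inventory:
        return None
    result = RECIPES.get(frozenset({a, b}))
    if result is not None:
        inventory.add(result)
    return result

inventory = {"water", "fire", "earth", "air"}
combine(inventory, "water", "fire")   # discovers "steam"
combine(inventory, "earth", "water")  # discovers "mud"
combine(inventory, "mud", "fire")     # depth-2 discovery: "brick" needs "mud" first
print(sorted(inventory))
```

The depth-2 recipe ("brick" requires the previously crafted "mud") is what makes this a non-trivial innovation landscape: later discoveries depend on earlier ones, so exploration order matters for both single agents and groups sharing their crafted elements.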
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- The Role of Higher-Order Cognitive Models in Active Learning [8.847360368647752]
We advocate for a new paradigm for active learning for human feedback.
We discuss how increasing level of agency results in qualitatively different forms of rational communication between an active learning system and a teacher.
arXiv Detail & Related papers (2024-01-09T07:39:36Z)
- Can AI Be as Creative as Humans? [84.43873277557852]
We prove in theory that AI can be as creative as humans under the condition that it can properly fit the data generated by human creators.
The debate on AI's creativity is thus reduced to the question of its ability to fit a sufficient amount of data.
arXiv Detail & Related papers (2024-01-03T08:49:12Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- The language and social behavior of innovators [0.0]
We analyze about 38,000 posts available in the intranet forum of a large multinational company.
We find that innovators write more, use a more complex language, introduce new concepts/ideas, and use positive but factual-based language.
arXiv Detail & Related papers (2022-09-20T07:01:25Z)
- Team Learning as a Lens for Designing Human-AI Co-Creative Systems [12.24664973838839]
Generative, ML-driven interactive systems have the potential to change how people interact with computers in creative processes.
It is still unclear how we might achieve effective human-AI collaboration in open-ended task domains.
arXiv Detail & Related papers (2022-07-06T22:11:13Z)
- Social Network Structure Shapes Innovation: Experience-sharing in RL with SAPIENS [16.388726429030346]
In dynamic topologies, humans oscillate between innovating individually or in small clusters, and then sharing outcomes with others.
We show that experience sharing within a dynamic topology achieves the highest level of innovation across tasks.
These contributions can advance our understanding of optimal AI-AI, human-human, and human-AI collaborative networks.
arXiv Detail & Related papers (2022-06-10T12:47:45Z)
- Vygotskian Autotelic Artificial Intelligence: Language and Culture Internalization for Human-Like AI [16.487953861478054]
This perspective paper proposes a new AI paradigm in the quest for artificial lifelong skill discovery.
We focus on language especially, and how its structure and content may support the development of new cognitive functions in artificial agents.
It justifies the approach by uncovering examples of new artificial cognitive functions emerging from interactions between language and embodiment.
arXiv Detail & Related papers (2022-06-02T16:35:41Z)
- IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022 [63.07251290802841]
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to develop interactive embodied agents.
This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community.
arXiv Detail & Related papers (2022-05-27T06:12:48Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- NeurIPS 2021 Competition IGLU: Interactive Grounded Language Understanding in a Collaborative Environment [71.11505407453072]
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment.
This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL).
arXiv Detail & Related papers (2021-10-13T07:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.