Growing knowledge culturally across generations to solve novel, complex tasks
- URL: http://arxiv.org/abs/2107.13377v1
- Date: Wed, 28 Jul 2021 14:09:40 GMT
- Title: Growing knowledge culturally across generations to solve novel, complex tasks
- Authors: Michael Henry Tessler, Pedro A. Tsividis, Jason Madeano, Brin Harper,
and Joshua B. Tenenbaum
- Abstract summary: We take a first step towards reverse-engineering cultural learning through language.
We develop a suite of complex high-stakes tasks in the form of minimalist-style video games.
Knowledge accumulated gradually across generations, allowing later generations to advance further in the games.
- Score: 29.579223105173217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge built culturally across generations allows humans to learn far more
than an individual could glean from their own experience in a lifetime.
Cultural knowledge in turn rests on language: language is the richest record of
what previous generations believed, valued, and practiced. The power and
mechanisms of language as a means of cultural learning, however, are not well
understood. We take a first step towards reverse-engineering cultural learning
through language. We developed a suite of complex high-stakes tasks in the form
of minimalist-style video games, which we deployed in an iterated learning
paradigm. Game participants were limited to only two attempts (two lives) to
beat each game and were allowed to write a message to a future participant who
read the message before playing. Knowledge accumulated gradually across
generations, allowing later generations to advance further in the games and
perform more efficient actions. Multigenerational learning followed a
strikingly similar trajectory to individuals learning alone with an unlimited
number of lives. These results suggest that language provides a sufficient
medium to express and accumulate the knowledge people acquire in these diverse
tasks: the dynamics of the environment, valuable goals, dangerous risks, and
strategies for success. The video game paradigm we pioneer here is thus a rich
test bed for theories of cultural transmission and learning from language.
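The iterated-learning paradigm described in the abstract can be illustrated with a toy simulation. The sketch below is a hypothetical, heavily simplified model (not the authors' actual game environment or code): game knowledge is abstracted as a set of discoverable "facts", each generation gets two attempts to discover new ones, and the full set of known facts serves as the message passed to the next generation.

```python
import random

def play_generation(message, n_attempts=2, n_facts=10, rng=None):
    """One 'generation': a player reads the inherited message (a set of
    known facts about the game), then uses its limited attempts to
    discover new facts by exploration. Names and mechanics here are
    illustrative assumptions, not the paper's implementation."""
    rng = rng or random.Random()
    known = set(message)
    for _ in range(n_attempts):
        # Each attempt reveals one fact the player did not already know,
        # if any remain undiscovered.
        unknown = [f for f in range(n_facts) if f not in known]
        if unknown:
            known.add(rng.choice(unknown))
    return known  # the message written for the next generation

def iterated_learning(n_generations=8, seed=0):
    rng = random.Random(seed)
    message = set()  # generation 1 starts with no cultural knowledge
    scores = []
    for _ in range(n_generations):
        message = play_generation(message, rng=rng)
        scores.append(len(message))  # score = accumulated knowledge so far
    return scores
```

Under this toy model, scores grow monotonically across generations until all facts are known, mirroring the paper's qualitative finding that multigenerational learning with two lives per player tracks an individual learner with unlimited lives.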
Related papers
- Generative AI, Pragmatics, and Authenticity in Second Language Learning [0.0]
There are obvious benefits to integrating generative AI (artificial intelligence) into language learning and teaching.
However, because of how AI systems understand human language, they lack the lived experience needed to use language with the same social awareness as humans.
They also carry built-in linguistic and cultural biases from their training data, which are mostly in English and predominantly from Western sources.
arXiv Detail & Related papers (2024-10-18T11:58:03Z)
- Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning [5.930456214333413]
We show that training setups which balance social learning with independent learning give rise to cultural accumulation.
In-context and in-weights cultural accumulation can be interpreted as analogous to knowledge and skill accumulation, respectively.
This work is the first to present general models that achieve emergent cultural accumulation in reinforcement learning.
arXiv Detail & Related papers (2024-06-01T10:33:32Z)
- Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking [48.21982147529661]
This paper introduces a novel approach for massively multicultural knowledge acquisition.
Our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages.
Our work marks an important step towards deeper understanding and bridging the gaps of cultural disparities in AI.
arXiv Detail & Related papers (2024-02-14T18:16:54Z)
- Cultural Compass: Predicting Transfer Learning Success in Offensive Language Detection with Cultural Features [19.72091739119933]
Our study delves into the intersection of cultural features and transfer learning effectiveness.
Based on these results, we advocate for the integration of cultural information into datasets.
Our research signifies a step forward in the quest for more inclusive, culturally sensitive language technologies.
arXiv Detail & Related papers (2023-10-10T09:29:38Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Learning Robust Real-Time Cultural Transmission without Human Data [82.05222093231566]
We provide a method for generating zero-shot, high recall cultural transmission in artificially intelligent agents.
Our agents succeed at real-time cultural transmission from humans in novel contexts without using any pre-collected human data.
This paves the way for cultural evolution as an algorithm for developing artificial general intelligence.
arXiv Detail & Related papers (2022-03-01T19:32:27Z)
- Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph [124.45799297285083]
We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation.
We propose Generation with Multi-Hop Reasoning Flow (GRF) that enables pre-trained models with dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense knowledge graph.
arXiv Detail & Related papers (2020-09-24T13:55:32Z)
- Experience Grounds Language [185.73483760454454]
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
Despite the incredible effectiveness of language processing models to tackle tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world.
arXiv Detail & Related papers (2020-04-21T16:56:27Z)
- Co-evolution of language and agents in referential games [24.708802957946467]
We show that the optimal situation is to take into account the learning biases of the language learners and thus let language and agents co-evolve.
This paves the way for investigating the co-evolution of language and agents in language emergence studies.
arXiv Detail & Related papers (2020-01-10T09:29:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.