How language models extrapolate outside the training data: A case study in Textualized Gridworld
- URL: http://arxiv.org/abs/2406.15275v4
- Date: Thu, 05 Dec 2024 21:50:25 GMT
- Authors: Doyoung Kim, Jongwon Lee, Jinho Park, Minjoon Seo
- Abstract summary: We show that conventional approaches, including next token prediction and Chain of Thought finetuning, fail to extrapolate in larger, unseen environments.
We propose cognitive maps for path planning, a novel CoT framework that simulates humanlike mental representations.
Our finding that these cognitive maps require specialized training schemes opens up important questions about developing general-purpose cognitive maps in language models.
- Score: 32.5268320198854
- Abstract: Language models' ability to extrapolate learned behaviors to novel, more complex environments beyond their training scope remains largely unexplored. This study introduces a path planning task in a textualized Gridworld to probe language models' extrapolation capabilities. We show that conventional approaches, including next token prediction and Chain of Thought (CoT) finetuning, fail to extrapolate in larger, unseen environments. Inspired by human cognition and dual process theory, we propose cognitive maps for path planning, a novel CoT framework that simulates humanlike mental representations. Our experiments show that cognitive maps not only enhance extrapolation to unseen environments but also exhibit humanlike characteristics through structured mental simulation and rapid adaptation. Our finding that these cognitive maps require specialized training schemes and cannot be induced through simple prompting opens up important questions about developing general-purpose cognitive maps in language models. Our comparison with exploration-based methods further illuminates the complementary strengths of offline planning and online exploration.
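To make the task concrete, the following is a minimal sketch of what a textualized Gridworld path-planning problem could look like: a character grid with walls, and a ground-truth planner (breadth-first search) producing the textual move sequence a model would be trained to emit. The grid encoding, move names, and function names here are illustrative assumptions, not the paper's exact format.

```python
from collections import deque

# Candidate moves and their (row, col) offsets; names double as the
# textual actions a language model would be asked to produce.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def plan_path(grid, start, goal):
    """Breadth-first search over open cells ('.'); walls are '#'.
    Returns the shortest move sequence as a list of action names,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for name, (dr, dc) in MOVES.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [name]))
    return None

# A 3x3 world: walls block the direct route, forcing a detour.
grid = ["..#",
        ".#.",
        "..."]
print(plan_path(grid, (0, 0), (2, 2)))  # down, down, right, right
```

Evaluating extrapolation then amounts to training on small grids and testing whether the model still emits valid shortest paths on larger, unseen ones.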
Related papers
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z) - Exploring Spatial Schema Intuitions in Large Language and Vision Models [8.944921398608063]
We investigate whether large language models (LLMs) effectively capture implicit human intuitions about building blocks of language.
Surprisingly, correlations between model outputs and human responses emerge, revealing adaptability without a tangible connection to embodied experiences.
This research contributes to a nuanced understanding of the interplay between language, spatial experiences, and computations made by large language models.
arXiv Detail & Related papers (2024-02-01T19:25:50Z) - Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning [73.0990339667978]
Navigation in unfamiliar environments presents a major challenge for robots.
We use language models to bias exploration of novel real-world environments.
We evaluate LFG in challenging real-world environments and simulated benchmarks.
arXiv Detail & Related papers (2023-10-16T06:21:06Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems to see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models learned to bridge such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z) - See, Plan, Predict: Language-guided Cognitive Planning with Video Prediction [27.44435424335596]
We devise a cognitive planning algorithm via language-guided video prediction.
The network is endowed with the ability to ground concepts based on natural language input with generalization to unseen objects.
arXiv Detail & Related papers (2022-10-07T21:27:16Z) - Imagination-Augmented Natural Language Understanding [71.51687221130925]
We introduce an Imagination-Augmented Cross-modal (iACE) framework to solve natural language understanding tasks.
iACE enables visual imagination with external knowledge transferred from the powerful generative and pre-trained vision-and-language models.
Experiments on GLUE and SWAG show that iACE achieves consistent improvement over visually-supervised pre-trained models.
arXiv Detail & Related papers (2022-04-18T19:39:36Z) - Extrapolation Frameworks in Cognitive Psychology Suitable for Study of Image Classification Models [0.0]
In contrast to the deep learning literature, in cognitive science, psychology, and neuroscience, extrapolation and learning are often studied in tandem.
We propose a novel extrapolation framework for the mathematical study of deep learning models.
arXiv Detail & Related papers (2021-12-06T23:06:31Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z) - Zero-Shot Compositional Policy Learning via Language Grounding [13.45138913186308]
Humans can adapt to new tasks quickly by leveraging prior knowledge about the world such as language descriptions.
We introduce a new research platform BabyAI++ in which the dynamics of environments are disentangled from visual appearance.
We find that current language-guided RL/IL techniques overfit to the training environments and suffer from a huge performance drop when facing unseen combinations.
arXiv Detail & Related papers (2020-04-15T16:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.