Leveraging the Inductive Bias of Large Language Models for Abstract
Textual Reasoning
- URL: http://arxiv.org/abs/2110.02370v1
- Date: Tue, 5 Oct 2021 21:40:46 GMT
- Title: Leveraging the Inductive Bias of Large Language Models for Abstract
Textual Reasoning
- Authors: Christopher Michael Rytting, David Wingate
- Abstract summary: Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks.
We show that the knowledge embedded in such models provides a useful inductive bias, not just on traditional NLP tasks, but also in the nontraditional task of training a symbolic reasoning engine.
- Score: 3.616948583169635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large natural language models (such as GPT-3 or T5) demonstrate impressive
abilities across a range of general NLP tasks. Here, we show that the knowledge
embedded in such models provides a useful inductive bias, not just on
traditional NLP tasks, but also in the nontraditional task of training a
symbolic reasoning engine. We observe that these engines learn quickly and
generalize in a natural way that reflects human intuition. For example,
training such a system to model block-stacking might naturally generalize to
stacking other types of objects because of structure in the real world that has
been partially captured by the language describing it. We study several
abstract textual reasoning tasks, such as object manipulation and navigation,
and demonstrate multiple types of generalization to novel scenarios and the
symbols that comprise them. We also demonstrate the surprising utility of
\textit{compositional learning}, where a learner dedicated to mastering a
complicated task gains an advantage by training on relevant simpler tasks
instead of jumping straight to the complicated task.
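The core mechanism is to serialize a symbolic world (e.g., block stacking) as natural-language text so that a pretrained model's priors can shape how a reasoning engine generalizes. Below is a minimal sketch of such a text encoding and of the compositional-learning curriculum idea; the serialization format and task names are illustrative assumptions, not the paper's exact setup, and the resulting (input, target) pairs would be fed to a seq2seq model such as T5.

```python
# Minimal sketch: serialize a block-stacking step as text so that a pretrained
# seq2seq model (e.g., T5) can be fine-tuned as a "symbolic reasoning engine".
# The serialization format below is illustrative, not the paper's exact encoding.

def encode_state(stacks):
    """Render a world state like {"table": ["red block", "blue block"]} as text."""
    return " ; ".join(
        f"on the {base} there is {', then '.join(items)}" if items
        else f"the {base} is empty"
        for base, items in stacks.items()
    )

def make_example(stacks, action, next_stacks):
    """One (input, target) training pair for a seq2seq model."""
    src = f"state: {encode_state(stacks)} . action: {action} . what is the new state?"
    tgt = encode_state(next_stacks)
    return src, tgt

# A simple task instance: move the blue block onto the red block.
before = {"table": ["red block", "blue block"]}
after = {"table": ["red block"], "red block": ["blue block"]}
src, tgt = make_example(before, "put the blue block on the red block", after)
print(src)
print(tgt)

# Compositional learning (as studied in the paper) would first fine-tune on
# simple single-move examples like this one, then continue on longer
# multi-step tasks, rather than training on the hard task from scratch.
```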
Related papers
- Learning with Language-Guided State Abstractions [58.199148890064826]
Generalizable policy learning in high-dimensional observation spaces is facilitated by well-designed state representations.
Our method, LGA, uses a combination of natural language supervision and background knowledge from language models to automatically build state representations tailored to unseen tasks.
Experiments on simulated robotic tasks show that LGA yields state abstractions similar to those designed by humans, but in a fraction of the time.
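As a rough, hedged illustration of the LGA idea (not the authors' implementation): a language model is asked which raw state features matter for a task described in natural language, and the policy then only sees that subset. The helper names below are hypothetical, and `query_lm` is a stand-in for any LLM call, faked here with keyword overlap so the example runs offline.

```python
# Toy sketch of language-guided state abstraction (hypothetical helper names).

def query_lm(prompt: str, options: list[str]) -> list[str]:
    """Placeholder LM: keep features whose words appear in the task description."""
    task_words = set(prompt.lower().split())
    return [f for f in options if set(f.lower().split("_")) & task_words]

def build_abstraction(task_description: str, state: dict) -> dict:
    """Return the task-relevant subset of the raw state."""
    relevant = query_lm(task_description, list(state.keys()))
    return {k: state[k] for k in relevant}

raw_state = {
    "mug_position": (0.2, 0.4),
    "mug_color": "blue",
    "drawer_open": False,
    "robot_gripper": "empty",
    "wall_texture": "brick",
}
task = "pick up the blue mug and place it in the drawer"
print(build_abstraction(task, raw_state))
# keeps mug_position, mug_color, drawer_open; drops the irrelevant features
```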
arXiv Detail & Related papers (2024-02-28T23:57:04Z) - Deep Natural Language Feature Learning for Interpretable Prediction [1.6114012813668932]
We propose a method to break a complex main task down into a set of easier intermediate sub-tasks.
Our method allows for representing each example by a vector consisting of the answers to these questions.
We have successfully applied this method to two completely different tasks: detecting incoherence in students' answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.
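A hedged sketch of the general recipe described above: each example is converted into a vector of LLM answers to handcrafted yes/no sub-questions, and a simple, interpretable model is fit on those vectors. The sub-questions, labels, and the `ask_llm` stub are illustrative stand-ins, not the paper's actual questions or model.

```python
# Illustrative sketch: LLM answers to sub-questions become interpretable features.
from sklearn.linear_model import LogisticRegression

SUB_QUESTIONS = [
    "Does the answer state a final numeric result?",
    "Does the answer contradict itself?",
    "Does the answer address the question that was asked?",
]

def ask_llm(question: str, text: str) -> int:
    """Placeholder for an LLM yes/no call; faked so the example runs offline."""
    fake_yes = ("result" in question and any(c.isdigit() for c in text)) or \
               ("address" in question and len(text.split()) > 3)
    return int(fake_yes)

def featurize(text: str) -> list[int]:
    return [ask_llm(q, text) for q in SUB_QUESTIONS]

student_answers = ["The area is 12 cm^2", "idk", "Because triangles", "x = 7 by substitution"]
labels = [1, 0, 0, 1]  # 1 = coherent, 0 = incoherent (toy labels)

X = [featurize(t) for t in student_answers]
clf = LogisticRegression().fit(X, labels)          # interpretable downstream model
print(clf.predict([featurize("The perimeter equals 24 m")]))
```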
arXiv Detail & Related papers (2023-11-09T21:43:27Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
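A hedged sketch of analogical prompting as summarized above: instead of supplying handwritten exemplars, the prompt asks the model to first recall a few relevant problems and their solutions, then solve the target problem. The exact wording used in the paper likely differs.

```python
# Sketch of an analogical prompt: the model self-generates its own exemplars
# before answering, replacing hand-curated few-shot examples.

def analogical_prompt(problem: str, n_exemplars: int = 3) -> str:
    return (
        f"Problem: {problem}\n\n"
        f"First, recall {n_exemplars} relevant and distinct problems you know, "
        "and for each one write the problem and a worked solution.\n"
        "Then, using what those examples suggest, solve the original problem "
        "step by step and state the final answer.\n"
    )

prompt = analogical_prompt("What is the area of a square whose diagonal is 10?")
print(prompt)
# The prompt string would be sent to whatever LLM completion or chat API you use.
```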
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuroSymbolic Textual Agent (NESTA) that combines general semantic generalization with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.
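A toy, hedged illustration of a rule-as-policy (not NESTA's actual AMR-based pipeline): the game observation is parsed into symbolic predicates, and induced rules map matching predicates to actions. Here both the "parse" and the rules are hand-written for illustration.

```python
# Toy sketch: interpretable rules over symbolic predicates act as the policy.

def parse(observation: str) -> set[tuple[str, str]]:
    """Stand-in for semantic parsing (NESTA uses AMR); keyword-based here."""
    preds = set()
    if "apple" in observation:
        preds.add(("present", "apple"))
    if "fridge" in observation and "closed" in observation:
        preds.add(("closed", "fridge"))
    return preds

RULES = [
    ({("closed", "fridge")}, "open fridge"),
    ({("present", "apple")}, "take apple"),
]

def policy(observation: str) -> str:
    facts = parse(observation)
    for preconditions, action in RULES:
        if preconditions <= facts:   # rule fires if all preconditions hold
            return action
    return "look"

print(policy("You see a fridge. The fridge is closed."))  # -> open fridge
print(policy("There is an apple on the table."))          # -> take apple
```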
arXiv Detail & Related papers (2023-07-05T23:21:05Z)
- Language Models Implement Simple Word2Vec-style Vector Arithmetic [32.2976613483151]
A primary criticism of language models (LMs) is their inscrutability.
This paper presents evidence that, despite their size and complexity, LMs sometimes exploit a simple vector arithmetic style mechanism to solve some relational tasks.
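A hedged toy illustration of the claimed mechanism, using made-up vectors: a relation (e.g., country to capital) behaves like a single offset vector that can be added to a new argument's representation. The paper's evidence concerns hidden states inside real LMs, not these planted toy embeddings.

```python
# Toy demonstration of "word2vec-style" relational arithmetic with fake vectors.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["France", "Paris", "Poland", "Warsaw", "Japan", "Tokyo"]
emb = {w: rng.normal(size=16) for w in vocab}

# Plant the structure we want to probe: capital = country + relation_offset.
relation = rng.normal(size=16)
for country, capital in [("France", "Paris"), ("Poland", "Warsaw"), ("Japan", "Tokyo")]:
    emb[capital] = emb[country] + relation + rng.normal(scale=0.05, size=16)

def nearest(vec, exclude):
    """Cosine-similarity nearest neighbour over the toy vocabulary."""
    scores = {w: float(vec @ v) / (np.linalg.norm(vec) * np.linalg.norm(v))
              for w, v in emb.items() if w not in exclude}
    return max(scores, key=scores.get)

# Estimate the relation offset from one pair, then apply it to a new country.
offset = emb["Paris"] - emb["France"]
print(nearest(emb["Poland"] + offset, exclude={"Poland", "France", "Paris"}))  # Warsaw
```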
arXiv Detail & Related papers (2023-05-25T15:04:01Z)
- Pre-Training to Learn in Context [138.0745138788142]
The in-context learning ability of language models is not fully exploited because they are not explicitly trained to learn in context.
We propose PICL (Pre-training for In-Context Learning), a framework to enhance the language models' in-context learning ability.
Our experiments show that PICL is more effective and task-generalizable than a range of baselines, outperforming larger language models with nearly 4x as many parameters.
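A hedged sketch of the data-construction idea behind PICL: plain-text paragraphs that share the same "intrinsic task" are concatenated so that earlier paragraphs act as in-context demonstrations for the last one during ordinary language-model pretraining. The grouping and formatting here are simplified stand-ins for the paper's retrieval-based pipeline.

```python
# Simplified sketch of PICL-style pretraining instance construction:
# group paragraphs that carry the same intrinsic task, then concatenate a few
# of them as demonstrations followed by one target paragraph.
import random

corpus = {
    "sentiment": [
        "The movie was wonderful. Sentiment: positive.",
        "Terrible service, never again. Sentiment: negative.",
        "A delightful little cafe. Sentiment: positive.",
        "The plot dragged on forever. Sentiment: negative.",
    ],
    "translation": [
        "Bonjour means hello.",
        "Merci means thank you.",
        "Chat means cat.",
    ],
}

def build_instance(task: str, k: int = 2, seed: int = 0) -> str:
    rng = random.Random(seed)
    paragraphs = rng.sample(corpus[task], k + 1)
    # Demonstrations and target are joined into one plain-text training sequence;
    # the LM is then trained with the usual next-token objective on this string.
    return "\n\n".join(paragraphs)

print(build_instance("sentiment"))
```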
arXiv Detail & Related papers (2023-05-16T03:38:06Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
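A hedged sketch of the framework's interface, with hypothetical dimensions and a toy action space: goals and observations are turned into a sequence of embeddings, pushed through a pretrained LM backbone (GPT-2 here as a stand-in), and a small head predicts the next action. The paper's environments, encoders, and action spaces are more involved.

```python
# Sketch: initialize a policy from a pretrained LM; goals/observations enter as
# a sequence of embeddings, and a linear head reads out action logits.
import torch
import torch.nn as nn
from transformers import GPT2Model

class LMPolicy(nn.Module):
    def __init__(self, n_obs_features: int = 32, n_actions: int = 8):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")   # stand-in pretrained LM
        hidden = self.backbone.config.n_embd                 # 768 for gpt2
        self.obs_proj = nn.Linear(n_obs_features, hidden)    # observation -> embedding
        self.action_head = nn.Linear(hidden, n_actions)      # hidden state -> action logits

    def forward(self, obs_sequence: torch.Tensor) -> torch.Tensor:
        # obs_sequence: (batch, time, n_obs_features); each step becomes one "token".
        embeds = self.obs_proj(obs_sequence)
        out = self.backbone(inputs_embeds=embeds)
        return self.action_head(out.last_hidden_state[:, -1])  # logits for next action

policy = LMPolicy()
logits = policy(torch.randn(1, 5, 32))
print(logits.shape)  # torch.Size([1, 8])
```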
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- Target Languages (vs. Inductive Biases) for Learning to Act and Plan [13.820550902006078]
I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics.
The goals of the paper and talk are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan.
arXiv Detail & Related papers (2021-09-15T10:24:13Z)
- Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning [32.82030512053361]
We propose the use of step-by-step human demonstrations in the form of natural language instructions and action trajectories.
We find that human demonstrations help solve the most complex tasks.
We also find that incorporating natural language allows the model to generalize to unseen tasks in a zero-shot setting.
arXiv Detail & Related papers (2020-11-01T14:39:46Z)
- Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
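A hedged sketch of what a training instance for this kind of systematic reasoning can look like: explicit natural-language facts and rules are concatenated with a hypothesis, and the model is trained to output true/false, relying on implicit pretrained knowledge when a premise is left out. The templates and field names are illustrative, not the paper's exact format.

```python
# Sketch of a Leap-Of-Thought style example: explicit statements plus a
# hypothesis, with the gold label decided by applying the stated rule.

def make_instance(facts, rules, hypothesis, label):
    context = " ".join(facts + rules)
    return {"input": f"{context} Question: {hypothesis} True or false?",
            "target": "true" if label else "false"}

example = make_instance(
    facts=["A whale is a mammal."],   # could be dropped to test implicit knowledge
    rules=["Mammals breathe air."],
    hypothesis="A whale breathes air.",
    label=True,
)
print(example["input"])
print(example["target"])
```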
arXiv Detail & Related papers (2020-06-11T17:02:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.