Teaching Pretrained Models with Commonsense Reasoning: A Preliminary
KB-Based Approach
- URL: http://arxiv.org/abs/1909.09743v2
- Date: Thu, 16 Feb 2023 08:07:46 GMT
- Title: Teaching Pretrained Models with Commonsense Reasoning: A Preliminary
KB-Based Approach
- Authors: Shiyang Li, Jianshu Chen, Dian Yu
- Abstract summary: We propose a method to teach pretrained models with commonsense reasoning by leveraging the structured knowledge in ConceptNet.
Experimental results demonstrate that, when refined on these training examples, the pretrained models consistently improve their performance on tasks that require commonsense reasoning.
- Score: 24.954288132238293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, pretrained language models (e.g., BERT) have achieved great success
on many downstream natural language understanding tasks and exhibit a certain
level of commonsense reasoning ability. However, their performance on
commonsense tasks is still far from that of humans. As a preliminary attempt,
we propose a simple yet effective method to teach pretrained models with
commonsense reasoning by leveraging the structured knowledge in ConceptNet, the
largest commonsense knowledge base (KB). Specifically, the structured knowledge
in KB allows us to construct various logical forms, and then generate
multiple-choice questions requiring commonsense logical reasoning. Experimental
results demonstrate that, when refined on these training examples, the
pretrained models consistently improve their performance on tasks that require
commonsense reasoning, especially in the few-shot learning setting. Besides, we
also perform analysis to understand which logical relations are more relevant
to commonsense reasoning.
Related papers
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
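As a rough illustration of the two self-supervised objectives in the entry above, the sketch below builds one mask-infilling instance and one relation-prediction instance from a single commonsense statement. The statement, masking scheme, and relation label set are assumptions, not the paper's actual data pipeline.

```python
def make_mask_infilling_example(statement, span, mask_token="[MASK]"):
    """Commonsense mask infilling: hide a content span and train the LM to restore it."""
    return {"input": statement.replace(span, mask_token), "target": span}

def make_relation_prediction_example(head, tail, relation, relation_set):
    """Commonsense relation prediction: classify which relation links two concepts."""
    return {
        "input": f"{head} [SEP] {tail}",
        "label": relation_set.index(relation),
        "label_names": relation_set,
    }

# Illustrative statement, as if extracted from a neural commonsense knowledge model.
statement = "People use an umbrella to stay dry in the rain."
print(make_mask_infilling_example(statement, "stay dry"))

relations = ["UsedFor", "CapableOf", "Causes", "AtLocation"]  # assumed label set
print(make_relation_prediction_example("umbrella", "staying dry", "UsedFor", relations))
```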
- In-Context Analogical Reasoning with Pre-Trained Language Models [10.344428417489237]
We explore the use of intuitive language-based abstractions to support analogy in AI systems.
Specifically, we apply large pre-trained language models (PLMs) to visual Raven's Progressive Matrices (RPM).
We find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and approaching supervised vision-based methods.
arXiv Detail & Related papers (2023-05-28T04:22:26Z)
- ALERT: Adapting Language Models to Reasoning Tasks [43.8679673685468]
ALERT is a benchmark and suite of analyses for assessing language models' reasoning ability.
ALERT provides a test bed to assess any language model on fine-grained reasoning skills.
We find that language models learn more reasoning skills during the finetuning stage than during pretraining.
arXiv Detail & Related papers (2022-12-16T05:15:41Z)
- ArT: All-round Thinker for Unsupervised Commonsense Question-Answering [54.068032948300655]
We propose All-round Thinker (ArT), an approach that makes full use of association during knowledge generation.
We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA and SCT.
arXiv Detail & Related papers (2021-12-26T18:06:44Z)
- Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge [91.15301779076187]
We introduce verbalized knowledge into the minibatches of a BERT model during pre-training and evaluate how well the model generalizes to supported inferences.
We find generalization does not improve over the course of pre-training, suggesting that commonsense knowledge is acquired from surface-level, co-occurrence patterns rather than induced, systematic reasoning.
arXiv Detail & Related papers (2021-12-16T03:13:04Z)
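A toy sketch of what introducing verbalized knowledge into pretraining minibatches could look like: KB triples are rendered as sentences and mixed into the ordinary text stream. The triples, templates, and mixing ratio below are assumptions made for illustration.

```python
import random

# Assumed KB triples and verbalization templates (illustrative only).
TRIPLES = [("whale", "IsA", "mammal"), ("ice", "MadeOf", "water")]
TEMPLATES = {"IsA": "A {h} is a {t}.", "MadeOf": "{h} is made of {t}."}

def verbalize(triple):
    """Render a KB triple as a natural-language sentence."""
    h, r, t = triple
    return TEMPLATES[r].format(h=h, t=t)

def build_minibatch(corpus_sentences, triples, kb_fraction=0.25, seed=0):
    """Mix verbalized KB facts into an ordinary pretraining minibatch."""
    rng = random.Random(seed)
    n_kb = max(1, int(len(corpus_sentences) * kb_fraction))
    kb_sentences = [verbalize(rng.choice(triples)) for _ in range(n_kb)]
    batch = corpus_sentences + kb_sentences
    rng.shuffle(batch)
    return batch

corpus = ["The ship left the harbor at dawn.", "She closed the book and smiled."]
print(build_minibatch(corpus, TRIPLES))
```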
- Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves performance of both off-the-shelf and finetuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge.
arXiv Detail & Related papers (2021-10-15T21:58:03Z)
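A minimal sketch of the two-stage idea summarized above: prompt a language model for knowledge statements about the question, then score each answer choice conditioned on the generated knowledge. The prompt wording and the generate/score helpers are placeholders rather than the paper's exact prompts or models.

```python
# Stubs standing in for a language model; any generator/scorer could be plugged in.
def generate_knowledge(prompt):
    """Placeholder for sampling knowledge statements from an LM given the prompt."""
    return ["People wear glasses to see better when their eyesight is poor."]

def score_choice(question, choice, knowledge):
    """Placeholder for an LM scoring a candidate answer given a knowledge statement."""
    overlap = set(knowledge.lower().split()) & set(choice.lower().split())
    return len(overlap)

KNOWLEDGE_PROMPT = (
    "Generate some knowledge about the concepts in the input.\n"
    "Input: {question}\nKnowledge:"
)

def answer_with_generated_knowledge(question, choices):
    statements = generate_knowledge(KNOWLEDGE_PROMPT.format(question=question))
    # For each choice, keep its best score under any generated statement.
    best = {c: max(score_choice(question, c, k) for k in statements) for c in choices}
    return max(best, key=best.get)

question = "Why do people wear glasses?"
print(answer_with_generated_knowledge(question, ["to see better", "to run faster"]))
# -> "to see better"
```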
- Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
arXiv Detail & Related papers (2020-06-11T17:02:20Z)
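One hypothetical way to picture the setup above: the model receives an explicit natural-language statement together with a hypothesis, and must combine the statement with implicit pretrained knowledge to decide whether the hypothesis holds. The encoding below is an assumption, not the paper's format.

```python
# Hypothetical encoding: an explicit natural-language statement plus a hypothesis,
# where answering also requires implicit knowledge picked up during pretraining.
def encode_instance(statements, hypothesis, sep="</s>"):
    """Pack explicit statements and a hypothesis into one model input string."""
    return f" {sep} ".join(statements) + f" {sep} hypothesis: {hypothesis}"

statements = ["A whale is a vertebrate."]   # explicit statement given to the model
hypothesis = "A whale has a spine."         # needs the implicit fact that vertebrates have spines
print(encode_instance(statements, hypothesis))
# A binary true/false head over this input would make the final prediction.
```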
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.