How Commonsense Knowledge Helps with Natural Language Tasks: A Survey of
Recent Resources and Methodologies
- URL: http://arxiv.org/abs/2108.04674v1
- Date: Tue, 10 Aug 2021 13:25:29 GMT
- Title: How Commonsense Knowledge Helps with Natural Language Tasks: A Survey of
Recent Resources and Methodologies
- Authors: Yubo Xie, Pearl Pu
- Abstract summary: We first review some popular commonsense knowledge bases and commonsense reasoning benchmarks, but place more emphasis on the methodologies.
We discuss some future directions in pushing the boundary of commonsense reasoning in natural language processing.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we give an overview of commonsense reasoning in natural
language processing, which requires a deeper understanding of the contexts and
usually involves inference over implicit external knowledge. We first review
some popular commonsense knowledge bases and commonsense reasoning benchmarks,
but place more emphasis on the methodologies, including recent approaches that
aim to solve general natural language problems by taking advantage of
external knowledge bases. Finally, we discuss some future directions in pushing
the boundary of commonsense reasoning in natural language processing.
Related papers
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
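As a rough illustration of these two objectives, here is a minimal sketch of how training examples might be constructed, assuming (head, relation, tail) triples already extracted from a neural commonsense knowledge model such as COMET; the verbalization templates and masking scheme are hypothetical, not the paper's exact recipe.

```python
# Sketch: build training examples for the two self-supervised objectives,
# assuming commonsense triples extracted from a neural commonsense model
# (e.g., COMET). Templates below are illustrative placeholders.
MASK = "[MASK]"

triples = [
    ("going to a restaurant", "xIntent", "to eat a meal"),
    ("rain", "Causes", "wet streets"),
]

def verbalize(head, relation, tail):
    # Hypothetical natural-language templates, one per relation.
    templates = {
        "xIntent": "{h}, because the person wants {t}.",
        "Causes": "{h} causes {t}.",
    }
    return templates[relation].format(h=head, t=tail)

def mask_infilling_example(head, relation, tail):
    # Commonsense mask infilling: hide the tail, ask the LM to recover it.
    return {"input": verbalize(head, relation, MASK), "target": tail}

def relation_prediction_example(head, relation, tail):
    # Commonsense relation prediction: classify the relation between spans.
    return {"input": f"{head} [SEP] {tail}", "label": relation}

for h, r, t in triples:
    print(mask_infilling_example(h, r, t))
    print(relation_prediction_example(h, r, t))
```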
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty bridging perception, language understanding, and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- Natural Language Reasoning, A Survey [16.80326702160048]
Conceptually, we provide a distinct definition for natural language reasoning in NLP.
We conduct a comprehensive literature review on natural language reasoning in NLP.
The paper also identifies and reviews backward reasoning, a powerful paradigm for multi-step reasoning.
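For intuition, here is a toy backward-chaining sketch, an illustration of the general paradigm rather than any method from the survey: prove a goal by recursively proving the premises of a rule that concludes it.

```python
# Toy backward chaining over Horn-style rules: work from the goal back to
# known facts. Facts and rules here are illustrative.
facts = {"socrates is a man"}
rules = [
    (["socrates is a man"], "socrates is mortal"),  # premises -> conclusion
]

def prove(goal):
    if goal in facts:
        return True
    return any(
        conclusion == goal and all(prove(p) for p in premises)
        for premises, conclusion in rules
    )

print(prove("socrates is mortal"))  # True
```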
arXiv Detail & Related papers (2023-03-26T13:44:18Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first and comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
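A sketch of how this induction task might be posed as few-shot prompting; the demonstration pair and the generate() stub are placeholders, not actual DEER data or the paper's setup.

```python
# Sketch: few-shot prompt asking an LM to induce a natural-language rule
# from natural-language facts. Demonstrations are invented placeholders.
demonstrations = [
    {
        "facts": ["Robins have wings.", "Sparrows have wings.", "Crows have wings."],
        "rule": "If X is a bird, then X has wings.",
    },
]

def build_prompt(new_facts):
    parts = []
    for demo in demonstrations:
        parts.append("Facts: " + " ".join(demo["facts"]))
        parts.append("Rule: " + demo["rule"])
    parts.append("Facts: " + " ".join(new_facts))
    parts.append("Rule:")
    return "\n".join(parts)

def generate(prompt):
    # Placeholder for a call to a pretrained language model.
    raise NotImplementedError

print(build_prompt(["Salmon live in water.", "Trout live in water."]))
```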
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves performance of both off-the-shelf and finetuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge.
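A minimal sketch of the generate-then-prompt loop under stated assumptions: the prompt wording is illustrative, and lm_generate/lm_score stand in for real language-model calls.

```python
# Sketch of generated knowledge prompting: (1) elicit knowledge statements
# with a generic prompt, (2) condition answer scoring on each statement,
# (3) keep the best-scoring answer. The LM calls are placeholders.
KNOWLEDGE_PROMPT = (
    "Generate some knowledge about the concepts in the input.\n"
    "Input: {question}\nKnowledge:"
)

def lm_generate(prompt, n):
    raise NotImplementedError  # e.g., sample n continuations from an LM

def lm_score(knowledge, question, answer):
    raise NotImplementedError  # e.g., answer log-probability under the LM

def answer_with_knowledge(question, choices, n_statements=5):
    statements = lm_generate(KNOWLEDGE_PROMPT.format(question=question), n_statements)
    best = None
    for choice in choices:
        # Score each choice under its most supportive knowledge statement.
        score = max(lm_score(s, question, choice) for s in statements)
        if best is None or score > best[1]:
            best = (choice, score)
    return best[0]
```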
arXiv Detail & Related papers (2021-10-15T21:58:03Z)
- Survey on reinforcement learning for language processing [17.738843098424816]
This paper reviews the state of the art of reinforcement learning methods for different problems of natural language processing.
We provide detailed descriptions of the problems as well as discussions of why RL is well-suited to solve them.
We elaborate on promising research directions in natural language processing that might benefit from reinforcement learning.
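As one concrete instance of RL applied to generation, a minimal REINFORCE sketch in PyTorch; this illustrates the general policy-gradient idea, not a specific method from the survey.

```python
import torch

# Minimal REINFORCE sketch for text generation: reward a sampled sequence
# (e.g., with a task metric) and scale its negative log-likelihood.
# logits: (seq_len, vocab) from a policy/language model; toy values here.
def reinforce_loss(logits, sampled_ids, reward):
    log_probs = torch.log_softmax(logits, dim=-1)
    seq_log_prob = log_probs[torch.arange(len(sampled_ids)), sampled_ids].sum()
    return -reward * seq_log_prob  # gradient ascent on expected reward

logits = torch.randn(4, 10, requires_grad=True)   # stand-in model output
sampled = torch.randint(0, 10, (4,))              # sampled token ids
loss = reinforce_loss(logits, sampled, reward=1.5)
loss.backward()
```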
arXiv Detail & Related papers (2021-04-12T15:33:11Z)
- Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
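The consolidation step can be pictured as a many-to-one mapping from source-specific relations to dimensions; the sketch below uses hypothetical dimension names and mappings rather than the paper's actual thirteen.

```python
# Sketch: consolidating source-specific relations into broader knowledge
# dimensions. Dimension names and the mapping are illustrative only;
# see the paper for the actual 13 dimensions.
RELATION_TO_DIMENSION = {
    "ConceptNet/IsA": "taxonomic",
    "ConceptNet/PartOf": "part-whole",
    "ConceptNet/AtLocation": "spatial",
    "ConceptNet/UsedFor": "utility",
    "ATOMIC/xWant": "desire-goal",
    "WordNet/hypernym": "taxonomic",
}

def dimension_of(relation):
    return RELATION_TO_DIMENSION.get(relation, "other")

print(dimension_of("ConceptNet/IsA"))    # taxonomic
print(dimension_of("WordNet/hypernym"))  # taxonomic (abstracts over sources)
```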
arXiv Detail & Related papers (2021-01-12T17:52:39Z)
- A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base [8.591839265985412]
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations.
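A minimal sketch of the embed-then-cluster pipeline on a toy graph standing in for a slice of ConceptNet, using spectral embeddings and k-means as stand-ins for whatever specific techniques the paper employs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

# Toy adjacency matrix standing in for (a slice of) ConceptNet:
# two densely connected groups of nodes joined by one edge.
adjacency = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Embed nodes with a spectral method, then cluster the embeddings.
coords = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(adjacency)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(coords)
print(labels)  # the two groups should separate into distinct clusters
```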
arXiv Detail & Related papers (2020-11-28T08:08:25Z)
- Question Answering over Knowledge Base using Language Model Embeddings [0.0]
This paper focuses on using a pre-trained language model for the Knowledge Base Question Answering task.
We further fine-tune these embeddings with a two-way attention mechanism from the knowledge base to the asked question.
Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question.
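A sketch of a question encoder in this spirit, with a 1-D CNN over token embeddings followed by multi-head self-attention; the sizes and wiring are assumptions, not the paper's exact architecture, and the knowledge-base side is omitted.

```python
import torch
import torch.nn as nn

# Sketch: CNN over token embeddings plus multi-head self-attention to
# represent a question. All dimensions below are illustrative.
class QuestionEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, token_ids):                         # (batch, seq)
        x = self.embed(token_ids)                         # (batch, seq, dim)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local n-gram features
        x, _ = self.attn(x, x, x)                         # self-attention over tokens
        return x.mean(dim=1)                              # pooled question vector

encoder = QuestionEncoder()
print(encoder(torch.randint(0, 1000, (2, 8))).shape)  # torch.Size([2, 64])
```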
arXiv Detail & Related papers (2020-10-17T22:59:34Z)
- Unsupervised Commonsense Question Answering with Self-Talk [71.63983121558843]
We propose an unsupervised framework based on self-talk as a novel approach to commonsense tasks.
Inspired by inquiry-based discovery learning, our approach queries language models with a number of information-seeking questions.
Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines.
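A sketch of the self-talk loop under stated assumptions: the question prefixes are illustrative and lm_complete/lm_score stand in for real LM calls. The model is asked clarification questions, its own answers are appended as context, and candidates are scored zero-shot.

```python
# Sketch of self-talk: elicit clarifications from the LM itself, enrich
# the context with them, then score each candidate answer. LM calls are
# placeholders; the prefixes are illustrative.
QUESTION_PREFIXES = [
    "What is the definition of {concept}?",
    "What is the purpose of {concept}?",
    "What happens after {concept}?",
]

def lm_complete(prompt):
    raise NotImplementedError  # e.g., greedy decoding from a pretrained LM

def lm_score(context, candidate):
    raise NotImplementedError  # e.g., candidate log-probability given context

def self_talk_answer(context, concept, candidates):
    clarifications = []
    for prefix in QUESTION_PREFIXES:
        question = prefix.format(concept=concept)
        clarifications.append(question + " " + lm_complete(context + " " + question))
    enriched = context + " " + " ".join(clarifications)
    # Zero-shot choice: pick the candidate the LM finds most plausible.
    return max(candidates, key=lambda c: lm_score(enriched, c))
```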
arXiv Detail & Related papers (2020-04-11T20:43:37Z)