Commonsense Knowledge Mining from Term Definitions
- URL: http://arxiv.org/abs/2102.00651v1
- Date: Mon, 1 Feb 2021 05:54:02 GMT
- Title: Commonsense Knowledge Mining from Term Definitions
- Authors: Zhicheng Liang and Deborah L. McGuinness
- Abstract summary: We investigate a few machine learning approaches to mining commonsense knowledge triples using dictionary term definitions as inputs.
Our experiments show that term definitions contain some valid and novel commonsense knowledge triples for some semantic relations.
- Score: 0.20305676256390934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonsense knowledge has proven to be beneficial to a variety of application
areas, including question answering and natural language understanding.
Previous work explored collecting commonsense knowledge triples automatically
from text to increase the coverage of current commonsense knowledge graphs. We
investigate a few machine learning approaches to mining commonsense knowledge
triples using dictionary term definitions as inputs and provide some initial
evaluation of the results. We start by extracting candidate triples using
part-of-speech tag patterns from text, and then compare the performance of
three existing models for triple scoring. Our experiments show that term
definitions contain some valid and novel commonsense knowledge triples for some
semantic relations, and also indicate some challenges with using existing
triple scoring models.
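To make the extraction step concrete, below is a minimal sketch of part-of-speech pattern matching over a dictionary definition. It assumes spaCy and its en_core_web_sm model; the two patterns and the relation names (IsA, UsedFor) are illustrative stand-ins, not the paper's actual pattern inventory.

```python
# Minimal sketch of the candidate-extraction step: match part-of-speech
# tag patterns in a dictionary definition and emit (term, relation, tail)
# candidate triples. Assumes spaCy with the en_core_web_sm model installed
# (python -m spacy download en_core_web_sm); the patterns and relation
# names below are illustrative, not the paper's actual pattern set.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# "X is a <NOUN>" -> IsA, "X is used for <NOUN/VERB>" -> UsedFor
matcher.add("IsA", [[{"LEMMA": "be"}, {"POS": "DET"}, {"POS": "NOUN"}]])
matcher.add("UsedFor", [[{"LEMMA": "be"}, {"LEMMA": "use"},
                         {"LOWER": "for"}, {"POS": {"IN": ["NOUN", "VERB"]}}]])

def extract_candidates(term, definition):
    """Return candidate (term, relation, tail) triples found in a definition."""
    doc = nlp(definition)
    triples = []
    for match_id, start, end in matcher(doc):
        relation = nlp.vocab.strings[match_id]  # pattern name as relation label
        tail = doc[end - 1].lemma_              # last token of the matched span
        triples.append((term, relation, tail))
    return triples

print(extract_candidates("hammer", "a tool that is used for driving nails"))
# e.g. [('hammer', 'UsedFor', 'drive')], depending on the tagger's output
```

In practice, a larger pattern inventory and noun-phrase chunking would be needed to cover more ConceptNet-style relations.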
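For the scoring step, one generic possibility is to verbalize each candidate triple with a relation template and rank it by language-model likelihood. The sketch below, using GPT-2 via HuggingFace Transformers, is an illustrative stand-in and not one of the three scoring models compared in the paper.

```python
# Minimal sketch of an LM-based triple scorer: verbalize a candidate triple
# with a relation template and score it by average token log-likelihood.
# This is a generic illustration of LM-based plausibility scoring, NOT one
# of the three models compared in the paper. Assumes HuggingFace
# transformers and PyTorch.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Illustrative templates for ConceptNet-style relations.
TEMPLATES = {
    "IsA": "{head} is a {tail}.",
    "UsedFor": "{head} is used for {tail}.",
}

@torch.no_grad()
def score_triple(head, relation, tail):
    """Higher score = the verbalized triple is more plausible to the LM."""
    sentence = TEMPLATES[relation].format(head=head, tail=tail)
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item()

print(score_triple("hammer", "UsedFor", "driving nails"))
print(score_triple("hammer", "IsA", "vegetable"))  # should score lower
```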
Related papers
- What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that a large portion of instances in the two datasets are not commonsense knowledge, and that models exhibit a large performance gap between the two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z) - Exploring Large Language Models for Knowledge Graph Completion [17.139056629060626]
We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM.
Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions.
Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction.
arXiv Detail & Related papers (2023-08-26T16:51:17Z) - Towards Open Vocabulary Learning: A Survey [146.90188069113213]
Deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection.
Recently, the open-vocabulary setting was proposed, driven by rapid progress in vision-language pre-training.
This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2023-06-28T02:33:06Z) - ComFact: A Benchmark for Linking Contextual Commonsense Knowledge [31.19689856957576]
We propose the new task of commonsense fact linking, where models are given contexts and trained to identify situationally relevant commonsense knowledge from KGs.
Our novel benchmark, ComFact, contains 293k in-context relevance annotations for commonsense across four stylistically diverse datasets.
arXiv Detail & Related papers (2022-10-23T09:30:39Z) - Recitation-Augmented Language Models [85.30591349383849]
We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks: by using recitation as an intermediate step, a recite-and-answer scheme achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-10-04T00:49:20Z) - Repurposing Knowledge Graph Embeddings for Triple Representation via Weak Supervision [77.34726150561087]
Current methods learn triple embeddings from scratch without utilizing entity and predicate embeddings from pre-trained models.
We develop a method for automatically sampling triples from a knowledge graph and estimating their pairwise similarities from pre-trained embedding models.
These pairwise similarity scores are then fed to a Siamese-like neural architecture to fine-tune triple representations (see the sketch after this list).
arXiv Detail & Related papers (2022-08-22T14:07:08Z) - Textbook to triples: Creating knowledge graph in the form of triples from AI TextBook [0.8832969171530054]
This paper develops a system that converts the text of a given textbook into triples that can be visualized as a knowledge graph.
The initial assessment and evaluation gave promising results with an F1 score of 82%.
arXiv Detail & Related papers (2021-11-20T22:28:23Z) - Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge [25.31716910260664]
Deep commonsense knowledge accounts for a significant portion of commonsense knowledge.
We propose a novel method to mine the deep commonsense knowledge distributed in sentences.
arXiv Detail & Related papers (2021-05-28T06:26:19Z) - Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z) - Knowledge-graph based Proactive Dialogue Generation with Improved Meta-Learning [0.0]
We propose a knowledge graph based proactive dialogue generation model (KgDg) with three components.
We formulate knowledge triplet embedding and selection as a sentence embedding problem to better capture semantic information.
Our improved MAML algorithm is capable of learning general features from a limited number of knowledge graphs.
arXiv Detail & Related papers (2020-04-19T08:41:12Z) - Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts about events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuel for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
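As referenced in the "Repurposing Knowledge Graph Embeddings" entry above, here is a minimal sketch of that Siamese-style setup: a shared triple encoder is trained so that the similarity of two triple embeddings matches a pairwise score estimated from pre-trained KG embeddings. The architecture, dimensions, and random stand-in data are assumptions for illustration only.

```python
# Minimal sketch of Siamese-style weak supervision for triple
# representations: a shared encoder maps (head, relation, tail) vectors
# from a pre-trained KG embedding model to triple embeddings, trained so
# that the cosine similarity of two triples matches a precomputed pairwise
# score. Dimensions and data here are illustrative, not the paper's setup.
import torch
import torch.nn as nn

class TripleEncoder(nn.Module):
    """Concatenates pre-trained (h, r, t) vectors into a triple embedding."""
    def __init__(self, ent_dim=100, rel_dim=100, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * ent_dim + rel_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, h, r, t):
        return self.net(torch.cat([h, r, t], dim=-1))

encoder = TripleEncoder()
cos = nn.CosineSimilarity(dim=-1)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One weakly supervised step on a batch of triple pairs; the targets stand
# in for pairwise similarities estimated from a pre-trained embedding model.
h1, r1, t1 = torch.randn(32, 100), torch.randn(32, 100), torch.randn(32, 100)
h2, r2, t2 = torch.randn(32, 100), torch.randn(32, 100), torch.randn(32, 100)
target = torch.rand(32)

opt.zero_grad()
pred = cos(encoder(h1, r1, t1), encoder(h2, r2, t2))  # shared weights = Siamese
loss = nn.functional.mse_loss(pred, target)
loss.backward()
opt.step()
```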
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.