Mining Commonsense Facts from the Physical World
- URL: http://arxiv.org/abs/2002.03149v3
- Date: Tue, 14 Apr 2020 00:58:51 GMT
- Title: Mining Commonsense Facts from the Physical World
- Authors: Yanyan Zou, Wei Lu and Xu Sun
- Abstract summary: Textual descriptions of the physical world implicitly mention commonsense facts, while commonsense knowledge bases explicitly represent such facts as triples.
Most prior studies on populating knowledge bases focus on Freebase.
We build an effective new model that fuses information from both sequence text and existing knowledge base resources.
- Score: 23.813586698701606
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Textual descriptions of the physical world implicitly mention commonsense
facts, while the commonsense knowledge bases explicitly represent such facts as
triples. Compared to the dramatic growth of text data, the coverage of existing
knowledge bases remains far from complete. Most prior studies on populating
knowledge bases focus on Freebase; automatically completing commonsense
knowledge bases to improve their coverage remains under-explored. In this
paper, we propose a new task of mining commonsense facts from raw text that
describes the physical world. We build an effective new model that fuses
information from both sequence text and existing knowledge base resources. We
then create two large annotated datasets, each with approximately 200k
instances, for commonsense knowledge base completion. Empirical results
demonstrate that our model significantly outperforms baselines.
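The abstract frames commonsense facts as (head, relation, tail) triples that raw text mentions only implicitly. As a toy illustration (not the paper's fused model — the sentence, candidate triples, and string-matching heuristic below are all assumptions for demonstration), a minimal sketch of checking whether a sentence implicitly supports a candidate triple might look like:

```python
# Toy sketch: commonsense facts as (head, relation, tail) triples, and a
# naive lexical check for whether a raw sentence implicitly mentions one.
# This is NOT the paper's model, which fuses text and KB representations.
from typing import NamedTuple


class Triple(NamedTuple):
    head: str
    relation: str
    tail: str


def mentions(sentence: str, triple: Triple) -> bool:
    """True if both the head and tail concepts appear in the sentence."""
    text = sentence.lower()
    return triple.head in text and triple.tail in text


sentence = "She sliced the apple with a sharp knife."
# Hypothetical candidate facts in ConceptNet-style relation notation.
candidates = [
    Triple("knife", "UsedFor", "slice"),
    Triple("knife", "AtLocation", "ocean"),
]

mined = [t for t in candidates if mentions(sentence, t)]
# Only the first candidate is supported by the sentence.
```

A real system would replace the substring check with learned text and knowledge-base embeddings, which is the fusion the paper proposes.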
Related papers
- FactNet: A Billion-Scale Knowledge Graph for Multilingual Factual Grounding [81.2130536158575]
While LLMs exhibit remarkable fluency, their utility is often compromised by factual hallucinations and a lack of traceable provenance. We introduce FactNet, a massive, open-source resource designed to unify 1.7 billion atomic assertions with 3.01 billion auditable evidence pointers derived exclusively from 316 Wikipedia editions.
arXiv Detail & Related papers (2026-02-03T11:44:11Z) - What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that both datasets contain a large portion of non-commonsense-knowledge instances, and that there is a large performance gap between the two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z) - Large Language Models as a Tool for Mining Object Knowledge [0.42970700836450487]
Large language models fall short as trustworthy intelligent systems due to the opacity of the basis for their answers and their tendency to confabulate facts when questioned.
This paper investigates explicit knowledge about common artifacts in the everyday world.
We produce a repository of data on the parts and materials of about 2,300 objects and their subtypes.
This contribution to knowledge mining should prove useful to AI research on reasoning about object structure and composition.
arXiv Detail & Related papers (2024-10-16T18:46:02Z) - AKEW: Assessing Knowledge Editing in the Wild [79.96813982502952]
AKEW (Assessing Knowledge Editing in the Wild) is a new practical benchmark for knowledge editing.
It fully covers three editing settings of knowledge updates: structured facts, unstructured texts as facts, and extracted triplets.
Through extensive experiments, we demonstrate the considerable gap between state-of-the-art knowledge-editing methods and practical scenarios.
arXiv Detail & Related papers (2024-02-29T07:08:34Z) - Commonsense Knowledge in Word Associations and ConceptNet [37.751909219863585]
This paper presents an in-depth comparison of two large-scale resources of general knowledge: ConceptNet and SWOW.
We examine the structure, overlap and differences between the two graphs, as well as the extent to which they encode situational commonsense knowledge.
arXiv Detail & Related papers (2021-09-20T06:06:30Z) - Knowledge Base Completion Meets Transfer Learning [43.89253223499761]
The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases.
We introduce the first approach for transfer of knowledge from one collection of facts to another without the need for entity or relation matching.
arXiv Detail & Related papers (2021-08-30T09:13:29Z) - Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z) - Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
arXiv Detail & Related papers (2021-01-12T17:52:39Z) - CoLAKE: Contextualized Language and Knowledge Embedding [81.90416952762803]
We propose the Contextualized Language and Knowledge Embedding (CoLAKE).
CoLAKE jointly learns contextualized representation for both language and knowledge with the extended objective.
We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks.
arXiv Detail & Related papers (2020-10-01T11:39:32Z) - Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts about events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuels for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.