Dimensions of Commonsense Knowledge
- URL: http://arxiv.org/abs/2101.04640v1
- Date: Tue, 12 Jan 2021 17:52:39 GMT
- Title: Dimensions of Commonsense Knowledge
- Authors: Filip Ilievski, Alessandro Oltramari, Kaixin Ma, Bin Zhang, Deborah L.
McGuinness, Pedro Szekely
- Abstract summary: We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
- Score: 60.49243784752026
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Commonsense knowledge is essential for many AI applications, including those
in natural language processing, visual processing, and planning. Consequently,
many sources that include commonsense knowledge have been designed and
constructed over the past decades. Recently, the focus has been on large
text-based sources, which facilitate easier integration with neural (language)
models and their application to textual tasks, typically at the expense of the
semantics of the sources. Such practice prevents the harmonization of these
sources and an understanding of their coverage and gaps, and may hinder the semantic
alignment of their knowledge with downstream tasks. Efforts to consolidate
commonsense knowledge have yielded partial success, but provide no clear path
towards a comprehensive consolidation of existing commonsense knowledge.
The ambition of this paper is to organize these sources around a common set
of dimensions of commonsense knowledge. For this purpose, we survey a wide
range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting
over more specific relations found in sources. This consolidation allows us to
unify the separate sources and to compute indications of their coverage,
overlap, and gaps with respect to the knowledge dimensions. Moreover, we
analyze the impact of each dimension on downstream reasoning tasks that require
commonsense knowledge, observing that the temporal and desire/goal dimensions
are very beneficial for reasoning on current downstream tasks, while
distinctness and lexical knowledge have little impact. These results reveal a
focus on some dimensions in current evaluation, and a potential neglect of
others.
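The consolidation described in the abstract amounts to a mapping from source-specific relations onto shared dimensions, over which per-source coverage statistics can then be computed. Below is a minimal sketch of that idea, not taken from the paper's code: the relation names are real ConceptNet/ATOMIC relations and the dimension names appear in the paper, but the particular mapping, the `dimension_coverage` helper, and the toy statements are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): map
# source-specific relations onto shared knowledge dimensions, then count
# how many statements each source contributes per dimension.
from collections import defaultdict

# Hypothetical mapping from (source, relation) to one of the paper's
# dimensions (e.g., taxonomic, utility, temporal, desire/goal).
RELATION_TO_DIMENSION = {
    ("ConceptNet", "/r/UsedFor"): "utility",
    ("ConceptNet", "/r/IsA"): "taxonomic",
    ("ATOMIC", "xWant"): "desire/goal",
    ("ATOMIC", "xEffect"): "temporal",
}

def dimension_coverage(statements):
    """Count statements per (source, dimension); each statement is a
    (source, relation, head, tail) tuple. Unmapped relations are skipped."""
    coverage = defaultdict(int)
    for source, relation, _head, _tail in statements:
        dimension = RELATION_TO_DIMENSION.get((source, relation))
        if dimension is not None:
            coverage[(source, dimension)] += 1
    return dict(coverage)

# Toy usage with made-up statements:
statements = [
    ("ConceptNet", "/r/UsedFor", "knife", "cut"),
    ("ConceptNet", "/r/IsA", "knife", "tool"),
    ("ATOMIC", "xWant", "PersonX eats", "to feel full"),
]
print(dimension_coverage(statements))
# -> {('ConceptNet', 'utility'): 1, ('ConceptNet', 'taxonomic'): 1,
#     ('ATOMIC', 'desire/goal'): 1}
```

With such counts per source and dimension, overlap and gaps across sources follow directly, e.g. dimensions where one source has many statements and another has none.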
Related papers
- What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that a large portion of the instances in the two datasets do not involve commonsense knowledge, and that there is a large performance gap between the two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z) - Towards Knowledge-Grounded Natural Language Understanding and Generation [1.450405446885067]
This thesis investigates how natural language understanding and generation with transformer models can benefit from grounding the models with knowledge representations.
Studies in this thesis find that incorporating relevant and up-to-date knowledge of entities benefits fake news detection.
It is established that other general forms of knowledge, such as parametric and distilled knowledge, enhance multimodal and multilingual knowledge-intensive tasks.
arXiv Detail & Related papers (2024-03-22T17:32:43Z) - Large Language Models as Source Planner for Personalized
Knowledge-grounded Dialogue [72.26474540602517]
SAFARI is a novel framework for planning, understanding, and incorporating knowledge sources under both supervised and unsupervised settings.
We construct a personalized knowledge-grounded dialogue dataset, Knowledge Behind Persona (KBP).
Experimental results on the KBP dataset demonstrate that the SAFARI framework can effectively produce persona-consistent and knowledge-enhanced responses.
arXiv Detail & Related papers (2023-10-13T03:38:38Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Materialized Knowledge Bases from Commonsense Transformers [8.678138390075077]
No materialized resource of commonsense knowledge generated by commonsense transformer models is publicly available.
This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall.
We identify common problem cases, and outline use cases enabled by materialized resources.
arXiv Detail & Related papers (2021-12-29T20:22:05Z) - Commonsense Knowledge in Word Associations and ConceptNet [37.751909219863585]
This paper presents an in-depth comparison of two large-scale resources of general knowledge: ConceptNet and SWOW (Small World of Words).
We examine the structure, overlap and differences between the two graphs, as well as the extent to which they encode situational commonsense knowledge.
arXiv Detail & Related papers (2021-09-20T06:06:30Z) - Learning Contextual Causality from Time-consecutive Images [84.26437953699444]
Causality knowledge is crucial for many artificial intelligence systems.
In this paper, we investigate the possibility of learning contextual causality from the visual signal.
We first propose a high-quality dataset Vis-Causal and then conduct experiments to demonstrate that it is possible to automatically discover meaningful causal knowledge from the videos.
arXiv Detail & Related papers (2020-12-13T20:24:48Z) - Inferential Text Generation with Multiple Knowledge Sources and
Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts about events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuel for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z) - Knowledge-Based Matching of $n$-ary Tuples [9.328991021103294]
We focus on matching n-ary tuples in a knowledge base with a rule-based methodology that relies on domain vocabularies.
We tested our method on the domain of pharmacogenomics by searching for alignments among 50,435 n-ary tuples from four different real-world sources.
arXiv Detail & Related papers (2020-02-19T11:01:33Z)