Commonsense Knowledge in Word Associations and ConceptNet
- URL: http://arxiv.org/abs/2109.09309v1
- Date: Mon, 20 Sep 2021 06:06:30 GMT
- Title: Commonsense Knowledge in Word Associations and ConceptNet
- Authors: Chunhua Liu and Trevor Cohn and Lea Frermann
- Abstract summary: This paper presents an in-depth comparison of two large-scale resources of general knowledge: ConceptNet and SWOW.
We examine the structure, overlap and differences between the two graphs, as well as the extent to which they encode situational commonsense knowledge.
- Score: 37.751909219863585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans use countless basic, shared facts about the world to efficiently
navigate in their environment. This commonsense knowledge is rarely
communicated explicitly, however, understanding how commonsense knowledge is
represented in different paradigms is important for both deeper understanding
of human cognition and for augmenting automatic reasoning systems. This paper
presents an in-depth comparison of two large-scale resources of general
knowledge: ConceptNet, an engineered relational database, and SWOW, a knowledge
graph derived from crowd-sourced word associations. We examine the structure,
overlap and differences between the two graphs, as well as the extent to which
they encode situational commonsense knowledge. We finally show empirically that
both resources improve downstream task performance on commonsense reasoning
benchmarks over text-only baselines, suggesting that large-scale word
association data, which have been obtained for several languages through
crowd-sourcing, can be a valuable complement to curated knowledge graphs.
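The abstract describes examining the structural overlap between the two graphs. As a minimal illustrative sketch (not the paper's actual methodology), one simple way to quantify such overlap is to collapse each resource's triples into undirected concept pairs and compute a Jaccard similarity; the toy triples below are hypothetical stand-ins for ConceptNet- and SWOW-style edges.

```python
# Hedged sketch: quantifying edge overlap between two knowledge graphs,
# treating each as a set of undirected concept-pair edges.

def edge_set(triples):
    """Collapse (head, relation, tail) triples into undirected concept pairs."""
    return {frozenset((head, tail)) for head, _, tail in triples}

def jaccard_overlap(triples_a, triples_b):
    """Jaccard similarity of the undirected edge sets of two graphs."""
    a, b = edge_set(triples_a), edge_set(triples_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Toy examples standing in for ConceptNet- and SWOW-style edges.
conceptnet_like = [("dog", "IsA", "animal"), ("dog", "CapableOf", "bark")]
swow_like = [("dog", "assoc", "animal"), ("dog", "assoc", "cat")]

print(jaccard_overlap(conceptnet_like, swow_like))  # 1 shared pair out of 3
```

A relation-sensitive comparison would require mapping SWOW's untyped association edges onto ConceptNet's relation inventory, which is part of what makes the two resources interesting to contrast.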
Related papers
- What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them into three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that there exists a large portion of non-commonsense-knowledge instances in the two datasets, and a large performance gap between these two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z)
- A Bipartite Graph is All We Need for Enhancing Emotional Reasoning with Commonsense Knowledge [16.410940528107115]
We propose a Bipartite Heterogeneous Graph (BHG) method for enhancing emotional reasoning with commonsense knowledge.
BHG-based knowledge infusion can be directly generalized to multi-type and multi-grained knowledge sources.
arXiv Detail & Related papers (2023-08-09T09:09:17Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Entity Context Graph: Learning Entity Representations from Semi-Structured Textual Sources on the Web [44.92858943475407]
We propose an approach that processes entity centric textual knowledge sources to learn entity embeddings.
We show that the embeddings learned from our approach are: (i) of high quality, comparable to known knowledge graph-based embeddings, and can be used to improve them further.
arXiv Detail & Related papers (2021-03-29T20:52:14Z)
- Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
arXiv Detail & Related papers (2021-01-12T17:52:39Z)
- CoLAKE: Contextualized Language and Knowledge Embedding [81.90416952762803]
We propose the Contextualized Language and Knowledge Embedding (CoLAKE).
CoLAKE jointly learns contextualized representation for both language and knowledge with the extended objective.
We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks.
arXiv Detail & Related papers (2020-10-01T11:39:32Z)
- Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts of events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuels for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.