A Data-Driven Study of Commonsense Knowledge using the ConceptNet
Knowledge Base
- URL: http://arxiv.org/abs/2011.14084v2
- Date: Tue, 19 Jan 2021 07:21:20 GMT
- Title: A Data-Driven Study of Commonsense Knowledge using the ConceptNet
Knowledge Base
- Authors: Ke Shen and Mayank Kejriwal
- Abstract summary: Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations.
- Score: 8.591839265985412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Acquiring commonsense knowledge and reasoning is recognized as an important
frontier in achieving general Artificial Intelligence (AI). Recent research in
the Natural Language Processing (NLP) community has demonstrated significant
progress in this problem setting. Despite this progress, which is mainly on
multiple-choice question answering tasks in limited settings, there is still a
lack of understanding (especially at scale) of the nature of commonsense
knowledge itself. In this paper, we propose and conduct a systematic study to
enable a deeper understanding of commonsense knowledge by doing an empirical
and structural analysis of the ConceptNet knowledge base. ConceptNet is a
freely available knowledge base containing millions of commonsense assertions
presented in natural language. Detailed experimental results on three carefully
designed research questions, using state-of-the-art unsupervised graph
representation learning ('embedding') and clustering techniques, reveal deep
substructures in ConceptNet relations, allowing us to make data-driven and
computational claims about the meaning of phenomena such as 'context' that are
traditionally discussed only in qualitative terms. Furthermore, our methodology
provides a case study in how to use data-science and computational
methodologies for understanding the nature of an everyday (yet complex)
psychological phenomenon that is an essential feature of human intelligence.
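The abstract describes applying unsupervised graph embedding and clustering to ConceptNet relations. As a minimal illustration of that pipeline (not the paper's actual method, whose embedding and clustering algorithms are not specified here), the sketch below embeds a toy ConceptNet-style graph via truncated SVD of its adjacency matrix and clusters the nodes with a naive k-means; the triples are invented for illustration.

```python
import numpy as np

# Toy ConceptNet-style triples (subject, relation, object); illustrative only.
triples = [
    ("dog", "IsA", "animal"), ("cat", "IsA", "animal"),
    ("dog", "CapableOf", "bark"), ("cat", "CapableOf", "meow"),
    ("car", "IsA", "vehicle"), ("bus", "IsA", "vehicle"),
    ("car", "UsedFor", "driving"), ("bus", "UsedFor", "transport"),
]

nodes = sorted({n for s, _, o in triples for n in (s, o)})
index = {n: i for i, n in enumerate(nodes)}

# Symmetric adjacency matrix over the concept graph (relations untyped here).
A = np.zeros((len(nodes), len(nodes)))
for s, _, o in triples:
    A[index[s], index[o]] = A[index[o], index[s]] = 1.0

# Low-dimensional node 'embedding' via truncated SVD of the adjacency matrix,
# a simple stand-in for the unsupervised graph embedding the paper uses.
U, S, _ = np.linalg.svd(A)
emb = U[:, :2] * S[:2]

# Naive k-means (k=2) to expose cluster substructure among concepts.
rng = np.random.default_rng(0)
centers = emb[rng.choice(len(nodes), 2, replace=False)]
for _ in range(20):
    labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for k in range(2):
        if (labels == k).any():
            centers[k] = emb[labels == k].mean(axis=0)

for n in nodes:
    print(n, labels[index[n]])
```

At the scale of the real ConceptNet (millions of assertions) one would replace the dense SVD with a scalable embedding method, but the structure of the analysis, embed then cluster then inspect, is the same.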
Related papers
- A Probabilistic-Logic based Commonsense Representation Framework for
Modelling Inferences with Multiple Antecedents and Varying Likelihoods [5.87677276882675]
Commonsense knowledge-graphs (CKGs) are important resources towards building machines that can 'reason' on text or environmental inputs and make inferences beyond perception.
In this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels.
arXiv Detail & Related papers (2022-11-30T08:44:30Z) - COPEN: Probing Conceptual Knowledge in Pre-trained Language Models [60.10147136876669]
Conceptual knowledge is fundamental to human cognition and knowledge bases.
Existing knowledge probing works only focus on factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge.
We design three tasks to probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts.
For these tasks, we collect and annotate 24k data instances covering 393 concepts, forming COPEN, a COnceptual knowledge Probing bENchmark.
arXiv Detail & Related papers (2022-11-08T08:18:06Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
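The CKANN summary mentions a customized Graph Convolutional Network for learning knowledge representations with structure information. As a hedged reference point, the sketch below implements the standard single-layer GCN propagation rule (Kipf & Welling style, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)); the paper's customized GCN will differ in details not given in the summary.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One standard GCN layer: symmetric-normalized propagation + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2 for normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny 3-node knowledge subgraph with 4-d node features (made-up values).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 4))   # input node features
W = np.random.default_rng(1).normal(size=(4, 2))   # learnable weights
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2): one 2-d representation per node
```

Each node's output mixes its own features with its neighbors', which is how structural information from the KG enters the sentence representation.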
arXiv Detail & Related papers (2021-04-12T05:52:20Z) - Question Answering over Knowledge Base using Language Model Embeddings [0.0]
This paper focuses on using a pre-trained language model for the Knowledge Base Question Answering task.
We further fine-tuned these embeddings with a two-way attention mechanism from the knowledge base to the asked question.
Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question.
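The summary above mentions a multi-head attention mechanism for representing the asked question. The sketch below shows generic scaled dot-product self-attention with multiple heads in numpy; the head count, dimensions, and random projection weights are illustrative assumptions, and the paper's exact configuration (and its CNN component) is not reproduced here.

```python
import numpy as np

def multi_head_attention(X, num_heads, rng):
    """Standard multi-head scaled dot-product self-attention over token
    embeddings X of shape (n_tokens, d_model)."""
    n, d = X.shape
    dh = d // num_heads                         # per-head dimension
    heads = []
    for _ in range(num_heads):
        # Random projections stand in for learned Wq, Wk, Wv.
        Wq, Wk, Wv = (rng.normal(size=(d, dh)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(dh)          # scaled dot products
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)       # row-wise softmax
        heads.append(w @ V)
    return np.concatenate(heads, axis=1)        # concat heads back to d

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))    # 5 question tokens, 8-d embeddings
out = multi_head_attention(X, num_heads=2, rng=rng)
print(out.shape)  # (5, 8)
```

In the KBQA setting, such a layer lets each question token attend to the others before the pooled representation is matched against knowledge-base embeddings.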
arXiv Detail & Related papers (2020-10-17T22:59:34Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - A Review on Intelligent Object Perception Methods Combining
Knowledge-based Reasoning and Machine Learning [60.335974351919816]
Object perception is a fundamental sub-field of Computer Vision.
Recent works seek ways to integrate knowledge engineering in order to expand the level of intelligence of the visual interpretation of objects.
arXiv Detail & Related papers (2019-12-26T13:26:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.