Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization
- URL: http://arxiv.org/abs/2206.01532v2
- Date: Sat, 18 May 2024 21:18:43 GMT
- Title: Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization
- Authors: Mutian He, Tianqing Fang, Weiqi Wang, Yangqiu Song
- Abstract summary: We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
- Score: 49.00409552570441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conceptualization, or viewing entities and situations as instances of abstract concepts in mind and making inferences based on that, is a vital component in human intelligence for commonsense reasoning. Despite recent progress in artificial intelligence to acquire and model commonsense attributed to neural language models and commonsense knowledge graphs (CKGs), conceptualization has yet to be introduced thoroughly, making current approaches ineffective at covering knowledge about the countless diverse entities and situations in the real world. To address the problem, we thoroughly study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction by acquiring abstract knowledge about events regarding abstract concepts, as well as higher-level triples or inferences upon them. We then apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase. We annotate a dataset on the validity of contextualized conceptualizations from ATOMIC on both event and triple levels, develop a series of heuristic rules based on linguistic features, and train a set of neural models to generate and verify abstract knowledge. Based on these components, a pipeline to acquire abstract knowledge is built. A large abstract CKG upon ATOMIC is then induced, ready to be instantiated to infer about unseen entities or situations. Finally, we empirically show the benefits of augmenting CKGs with abstract knowledge in downstream tasks like commonsense inference and zero-shot commonsense QA.
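To make the pipeline concrete, the following is a minimal, hypothetical sketch (in Python) of event/triple conceptualization and instantiation under a toy Probase-style isA taxonomy; the taxonomy entries, the example triple, and the function names are illustrative assumptions rather than the paper's actual data or components.

```python
# Minimal, hypothetical sketch of the conceptualization/instantiation idea
# described in the abstract. The toy taxonomy, example triple, and function
# names are illustrative assumptions, not the paper's actual data or pipeline.

# A toy Probase-style isA taxonomy: instance -> candidate abstract concepts.
TAXONOMY = {
    "coffee": ["beverage", "drink"],
    "lemonade": ["beverage"],
}

# An ATOMIC-style triple: (head event, relation, tail inference).
TRIPLE = ("PersonX drinks coffee", "xEffect", "PersonX feels refreshed")


def conceptualize(triple, instance, taxonomy):
    """Lift a triple to the abstract level by replacing an instance in the
    head event with each candidate concept. (In the paper, which abstractions
    are valid in context is decided by heuristic rules and neural verifiers.)"""
    head, relation, tail = triple
    for concept in taxonomy.get(instance, []):
        yield (head.replace(instance, concept), relation, tail)


def instantiate(abstract_triple, concept, instance):
    """Ground an abstract triple on a (possibly unseen) instance of `concept`."""
    head, relation, tail = abstract_triple
    return (head.replace(concept, instance), relation, tail)


abstract_triples = list(conceptualize(TRIPLE, "coffee", TAXONOMY))
print(abstract_triples[0])
# ('PersonX drinks beverage', 'xEffect', 'PersonX feels refreshed')

# The abstract triple can now be instantiated to infer about an entity
# that never appears in the original CKG:
print(instantiate(abstract_triples[0], "beverage", "lemonade"))
# ('PersonX drinks lemonade', 'xEffect', 'PersonX feels refreshed')
```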
Related papers
- Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects [42.9186628100765]
We aim to endow machine intelligence with an analogous capability by operating at the conceptual level.
The AOT-driven approach yields benefits from three key perspectives.
arXiv Detail & Related papers (2024-09-18T04:53:38Z)
- Turing Video-based Cognitive Tests to Handle Entangled Concepts [0.0]
We present the results of an innovative video-based cognitive test on a specific conceptual combination.
We show that the collected data can be faithfully modelled within a quantum-theoretic framework.
We provide a novel explanation for the appearance of entanglement in both physics and cognitive realms.
arXiv Detail & Related papers (2024-09-13T14:30:55Z)
- CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning [45.62134354858683]
CANDLE is a framework that iteratively performs conceptualization and instantiation over commonsense knowledge bases.
By applying CANDLE to ATOMIC, we construct a comprehensive knowledge base comprising six million conceptualizations and instantiated commonsense knowledge triples.
arXiv Detail & Related papers (2024-01-14T13:24:30Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concept variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- A Probabilistic-Logic based Commonsense Representation Framework for Modelling Inferences with Multiple Antecedents and Varying Likelihoods [5.87677276882675]
Commonsense knowledge graphs (CKGs) are important resources for building machines that can 'reason' on text or environmental inputs and make inferences beyond perception.
In this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels.
arXiv Detail & Related papers (2022-11-30T08:44:30Z)
- On Binding Objects to Symbols: Learning Physical Concepts to Understand Real from Fake [155.6741526791004]
We revisit the classic signal-to-symbol barrier in light of the remarkable ability of deep neural networks to generate synthetic data.
We characterize physical objects as abstract concepts and use the previous analysis to show that physical objects can be encoded by finite architectures.
We conclude that binding physical entities to digital identities is possible in finite time with finite resources.
arXiv Detail & Related papers (2022-07-25T17:21:59Z)
- Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z)
- A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base [8.591839265985412]
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations.
arXiv Detail & Related papers (2020-11-28T08:08:25Z)