Disentangling Domain Ontologies
- URL: http://arxiv.org/abs/2304.00004v1
- Date: Tue, 21 Mar 2023 08:36:14 GMT
- Title: Disentangling Domain Ontologies
- Authors: Mayukh Bagchi and Subhashis Das
- Abstract summary: We propose Conceptual Disentanglement, a multi-level conceptual modelling strategy for engineering conceptually disentangled domain ontologies.
We argue why state-of-the-art ontology development methodologies and approaches are insufficient with respect to this characterization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we introduce and illustrate the novel phenomenon of Conceptual
Entanglement, which emerges from the representational manifoldness inherent in
incrementally modelling domain ontologies, step by step, across the following
five levels: perception, labelling, semantic alignment, hierarchical modelling
and intensional definition. In turn, we propose Conceptual Disentanglement, a
multi-level conceptual modelling strategy which enforces and explicates, via
guiding principles, semantic bijections at each of the above five levels of
conceptual entanglement, paving the way for engineering conceptually
disentangled domain ontologies. We also briefly argue why state-of-the-art
ontology development methodologies and approaches are insufficient with respect
to our characterization.
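The abstract states Conceptual Disentanglement only in terms of guiding principles and semantic bijections, and the paper provides no code. As a rough, non-authoritative illustration of what checking one such bijection (between labels and aligned senses) might look like, the following Python sketch is offered; every class, field, and function name in it is a hypothetical reading of the five modelling levels, not the authors' method.

```python
# Illustrative sketch only: the paper does not provide code. All names below
# are hypothetical readings of the five modelling levels and of the
# label<->sense bijection requirement, not the authors' methodology.
from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class ConceptRecord:
    """One domain concept tracked across the five modelling levels."""
    percept_id: str        # level 1: perceived entity or phenomenon
    label: str             # level 2: natural-language label
    aligned_sense: str     # level 3: sense chosen during semantic alignment
    parent: str            # level 4: superordinate concept in the hierarchy
    intension: List[str]   # level 5: defining properties (intensional definition)


def check_disentanglement(records: List[ConceptRecord]) -> List[str]:
    """Report violations of a label<->sense bijection, one symptom of
    conceptual entanglement: a label mapped to several senses, or a sense
    shared by several labels."""
    label_to_senses: Dict[str, Set[str]] = {}
    sense_to_labels: Dict[str, Set[str]] = {}
    for r in records:
        label_to_senses.setdefault(r.label, set()).add(r.aligned_sense)
        sense_to_labels.setdefault(r.aligned_sense, set()).add(r.label)
    problems = []
    for label, senses in label_to_senses.items():
        if len(senses) > 1:
            problems.append(f"label '{label}' is entangled across senses {sorted(senses)}")
    for sense, labels in sense_to_labels.items():
        if len(labels) > 1:
            problems.append(f"sense '{sense}' is shared by labels {sorted(labels)}")
    return problems


if __name__ == "__main__":
    records = [
        ConceptRecord("p1", "bank", "bank#finance", "Institution", ["holds deposits"]),
        ConceptRecord("p2", "bank", "bank#river", "Landform", ["borders a watercourse"]),
    ]
    for problem in check_disentanglement(records):
        print(problem)
```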
Related papers
- On the Role of Entity and Event Level Conceptualization in Generalizable Reasoning: A Survey of Tasks, Methods, Applications, and Future Directions [46.63556358247516]
Entity- and event-level conceptualization plays a pivotal role in generalizable reasoning.
There is currently a lack of a systematic overview that comprehensively examines existing works in the definition, execution, and application of conceptualization.
We present the first comprehensive survey of 150+ papers, categorizing various definitions, resources, methods, and downstream applications related to conceptualization into a unified taxonomy.
arXiv Detail & Related papers (2024-06-16T10:32:41Z)
- Encoding Hierarchical Schema via Concept Flow for Multifaceted Ideology Detection [26.702058189138462]
Multifaceted ideology detection (MID) aims to detect the ideological leanings of texts towards multiple facets.
We develop a novel concept semantics-enhanced framework for the MID task.
Our approach achieves state-of-the-art performance in MID, including in the cross-topic scenario.
arXiv Detail & Related papers (2024-05-29T10:37:28Z)
- A Self-explaining Neural Architecture for Generalizable Concept Learning [29.932706137805713]
We show that present SOTA concept learning approaches suffer from two major problems - lack of concept fidelity and limited concept interoperability.
We propose a novel self-explaining architecture for concept learning across domains.
We demonstrate the efficacy of our proposed approach over current SOTA concept learning approaches on four widely used real-world datasets.
arXiv Detail & Related papers (2024-05-01T06:50:18Z)
- A survey on Concept-based Approaches For Model Improvement [2.1516043775965565]
Concepts are known to be the thinking ground of humans.
We provide a systematic review and taxonomy of various concept representations and their discovery algorithms in Deep Neural Networks (DNNs).
We also provide details on concept-based model improvement literature marking the first comprehensive survey of these methods.
arXiv Detail & Related papers (2024-03-21T17:09:20Z)
- An Axiomatic Approach to Model-Agnostic Concept Explanations [67.84000759813435]
We propose an approach to concept explanations that satisfy three natural axioms: linearity, recursivity, and similarity.
We then establish connections with previous concept explanation methods, offering insight into their varying semantic meanings.
arXiv Detail & Related papers (2024-01-12T20:53:35Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts (a minimal sketch of the standard CAV construction appears after this list).
We propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
CG is shown to outperform CAV on both toy examples and real-world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAV).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Toward a Visual Concept Vocabulary for GAN Latent Space [74.12447538049537]
This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space.
Our approach is built from three components, including automatic identification of perceptually salient directions based on their layer selectivity and human annotation of these directions with free-form, compositional natural language descriptions.
Experiments show that concepts learned with our approach are reliable and composable -- generalizing across classes, contexts, and observers.
arXiv Detail & Related papers (2021-10-08T17:58:19Z)
- Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches [24.786152654589067]
We give an overview of concept-based explanations and disentanglement approaches.
We show that state-of-the-art approaches from both classes can be data inefficient, sensitive to the specific nature of the classification/regression task, or sensitive to the employed concept representation.
arXiv Detail & Related papers (2021-04-14T15:06:34Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- CHAIN: Concept-harmonized Hierarchical Inference Interpretation of Deep Convolutional Neural Networks [25.112903533844296]
Concept-harmonized HierArchical INference (CHAIN) is proposed to interpret the network's decision-making process.
For a network decision being interpreted, the method presents a CHAIN interpretation in which the decision can be hierarchically deduced.
In quantitative and qualitative experiments, we demonstrate the effectiveness of CHAIN at the instance and class levels.
arXiv Detail & Related papers (2020-02-05T06:45:23Z)
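Several entries above (Concept Gradient, Human-Centered Concept Explanations) build on Concept Activation Vectors. As a minimal, non-authoritative sketch of the standard CAV construction, assuming layer activations and logit gradients have already been extracted as NumPy arrays, the following Python fragment fits a linear probe and scores concept sensitivity; the variable names and the synthetic data are illustrative only.

```python
# Minimal sketch of the standard CAV construction (as popularized by TCAV),
# assuming activations and logit gradients are given as NumPy arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression


def learn_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear probe separating concept examples from random examples in
    activation space; the unit normal of its decision boundary is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    v = probe.coef_.ravel()
    return v / np.linalg.norm(v)


def conceptual_sensitivity(logit_grads: np.ndarray, cav: np.ndarray) -> np.ndarray:
    """Directional derivative of the class logit along the CAV, one value per
    example; a TCAV-style score is the fraction of positive values."""
    return logit_grads @ cav


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    concept_acts = rng.normal(1.0, 1.0, size=(50, 16))   # stand-in activations
    random_acts = rng.normal(0.0, 1.0, size=(50, 16))    # stand-in activations
    cav = learn_cav(concept_acts, random_acts)
    grads = rng.normal(size=(10, 16))                    # stand-in logit gradients
    scores = conceptual_sensitivity(grads, cav)
    print("TCAV-style score:", float((scores > 0).mean()))
```

The sketch only illustrates the linear relation that the Concept Gradient entry contrasts against; it does not implement Concept Gradient itself.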
This list is automatically generated from the titles and abstracts of the papers in this site.