On the Power and Limitations of Examples for Description Logic Concepts
- URL: http://arxiv.org/abs/2412.17345v1
- Date: Mon, 23 Dec 2024 07:17:58 GMT
- Title: On the Power and Limitations of Examples for Description Logic Concepts
- Authors: Balder ten Cate, Raoul Koudijs, Ana Ozaki
- Abstract summary: We investigate the power of labeled examples for describing description-logic concepts.
Specifically, we study the existence and efficient computability of finite characterisations.
- Score: 6.776119962781556
- License:
- Abstract: Labeled examples (i.e., positive and negative examples) are an attractive medium for communicating complex concepts. They are useful for deriving concept expressions (such as in concept learning, interactive concept specification, and concept refinement) as well as for illustrating concept expressions to a user or domain expert. We investigate the power of labeled examples for describing description-logic concepts. Specifically, we systematically study the existence and efficient computability of finite characterisations, i.e., finite sets of labeled examples that uniquely characterize a single concept, for a wide variety of description logics between EL and ALCQI, both without an ontology and in the presence of a DL-Lite ontology. Finite characterisations are relevant for debugging purposes, and their existence is a necessary condition for exact learnability with membership queries.
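To make the notion concrete, the following is a minimal illustrative sketch (an assumption-laden example, not taken from the paper): labeled examples are taken to be pointed interpretations (I, d), labeled positive for a concept C exactly when d belongs to the extension of C in I, and the example set below only separates the target concept from two specific non-equivalent concepts rather than from every other EL concept.
  % Illustrative sketch only; the example format and signature are assumptions.
  % Target EL concept:
  C = \exists r.A
  % Positive example: a has an r-successor satisfying A, so a \in C^{\mathcal{I}_1}.
  (\mathcal{I}_1, a): \quad \Delta^{\mathcal{I}_1} = \{a, b\}, \quad r^{\mathcal{I}_1} = \{(a, b)\}, \quad A^{\mathcal{I}_1} = \{b\}
  % Negative example: a satisfies A but has no r-successor; rules out the concept A.
  (\mathcal{I}_2, a): \quad \Delta^{\mathcal{I}_2} = \{a\}, \quad r^{\mathcal{I}_2} = \emptyset, \quad A^{\mathcal{I}_2} = \{a\}
  % Negative example: a has an r-successor, but none satisfying A; rules out \exists r.\top.
  (\mathcal{I}_3, a): \quad \Delta^{\mathcal{I}_3} = \{a, b\}, \quad r^{\mathcal{I}_3} = \{(a, b)\}, \quad A^{\mathcal{I}_3} = \emptyset
  % A finite characterisation of C w.r.t. a concept language L is a finite set of
  % such labeled examples that C fits and that no L-concept non-equivalent to C fits.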
Related papers
- A Note on an Inferentialist Approach to Resource Semantics [48.65926948745294]
'Inferentialism' is the view that meaning is given in terms of inferential behaviour.
This paper shows how 'inferentialism' enables a versatile and expressive framework for resource semantics.
arXiv Detail & Related papers (2024-05-10T14:13:21Z)
- Knowledge graphs for empirical concept retrieval [1.06378109904813]
Concept-based explainable AI is promising as a tool to improve the understanding of complex models at the premises of a given user.
Here, we present a workflow for user-driven data collection in both text and image domains.
We test the retrieved concept datasets on two concept-based explainability methods, namely concept activation vectors (CAVs) and concept activation regions (CARs).
arXiv Detail & Related papers (2024-04-10T13:47:22Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We propose Concept Gradient (CG), which extends concept-based interpretation beyond linear concept functions.
We demonstrate that CG outperforms CAV on both toy examples and real-world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z)
- Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- An ASP approach for reasoning on neural networks under a finitely many-valued semantics for weighted conditional knowledge bases [0.0]
We consider conditional ALC knowledge bases with typicality in the finitely many-valued case.
We exploit ASP and "asprin" for reasoning with the concept-wise multipreferences.
arXiv Detail & Related papers (2022-02-02T16:30:28Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning aims to learn composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve these two difficulties in a unified framework.
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
- Granule Description based on Compound Concepts [5.657202839641533]
We propose two new types of compound concepts in this paper: bipolar concepts and common-and-necessary concepts.
We have derived concise and unified equivalent conditions for describable granules and approaching description methods for indescribable granules for all five kinds of concepts.
arXiv Detail & Related papers (2021-10-29T01:56:29Z)
- Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach [0.7734726150561086]
We introduce an explanation generation algorithm for relational concepts learned with Inductive Logic Programming (GeNME).
A modified rule which covers the near miss but not the original instance is given as an explanation.
We also present a psychological experiment comparing human preferences of rule-based, example-based, and near miss explanations in the family and the arches domains.
arXiv Detail & Related papers (2021-06-15T11:42:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.