Repairing $\mathcal{EL}$ Ontologies Using Weakening and Completing
- URL: http://arxiv.org/abs/2208.00486v1
- Date: Sun, 31 Jul 2022 18:15:24 GMT
- Title: Repairing $\mathcal{EL}$ Ontologies Using Weakening and Completing
- Authors: Ying Li and Patrick Lambrix
- Abstract summary: We show that there is a trade-off between the amount of validation work for a domain expert and the quality of the ontology in terms of correctness and completeness.
- Score: 5.625946422295428
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The quality of ontologies in terms of their correctness and completeness is
crucial for developing high-quality ontology-based applications. Traditional
debugging techniques repair ontologies by removing unwanted axioms, but may
thereby remove consequences that are correct in the domain of the ontology. In
this paper we propose an interactive approach to mitigate this for
$\mathcal{EL}$ ontologies by axiom weakening and completing. We present
algorithms for weakening and completing, and the first repair approach that
takes removing, weakening and completing into account. We describe different
combination strategies, discuss their influence on the final ontologies and
present experimental results. We show that previous work has only considered
special cases and that there is a trade-off between the amount of validation
work for a domain expert and the quality of the ontology in terms of
correctness and completeness.
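To make the repair operations concrete, below is a minimal sketch of repair by weakening over a toy TBox of atomic subsumptions. The encoding, the `weaken` function, and the `is_correct` expert oracle are illustrative assumptions, not the paper's algorithm or API; in particular, the paper handles full $\mathcal{EL}$ concept expressions, while this sketch only traverses told subsumptions between atomic concepts.

```python
# Minimal sketch (assumed encoding): a TBox is a set of (sub, sup) pairs,
# and the domain expert is modeled by an is_correct(sub, sup) oracle.
from itertools import product

# Toy TBox: ("dog", "pet") means dog is subsumed by pet.
TBOX = {("dog", "pet"), ("pet", "animal"), ("wolf", "pet")}  # last axiom is wrong

def superconcepts(tbox, c):
    """Concepts reachable upward from c via told subsumptions (excluding c)."""
    seen, stack = set(), [c]
    while stack:
        cur = stack.pop()
        for sub, sup in tbox:
            if sub == cur and sup not in seen:
                seen.add(sup)
                stack.append(sup)
    return seen

def subconcepts(tbox, c):
    """Concepts reachable downward from c via told subsumptions (excluding c)."""
    seen, stack = set(), [c]
    while stack:
        cur = stack.pop()
        for sub, sup in tbox:
            if sup == cur and sub not in seen:
                seen.add(sub)
                stack.append(sub)
    return seen

def weaken(tbox, axiom, is_correct):
    """Candidate weakenings of a wrong axiom (c, d): axioms (c2, d2) with c2
    at or below c and d2 at or above d that the expert validates as correct."""
    c, d = axiom
    candidates = product(subconcepts(tbox, c) | {c}, superconcepts(tbox, d) | {d})
    return {(c2, d2) for c2, d2 in candidates
            if (c2, d2) != axiom and is_correct(c2, d2)}

# Expert oracle for the toy domain: wolves are animals but not pets.
def is_correct(sub, sup):
    return (sub, sup) != ("wolf", "pet")

wrong = ("wolf", "pet")
repaired = (TBOX - {wrong}) | weaken(TBOX, wrong, is_correct)
print(sorted(repaired))  # retains the correct consequence ("wolf", "animal")
```

Completing works in the other direction, asking the expert to validate stronger axioms that add missing knowledge; the trade-off mentioned in the abstract appears here as the number of candidate axioms the expert must validate.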
Related papers
- Universal Topology Refinement for Medical Image Segmentation with Polynomial Feature Synthesis [19.2371330932614]
Medical image segmentation methods often neglect topological correctness, making their segmentations unusable for many downstream tasks.
One option is to retrain such models whilst including a topology-driven loss component.
We present a plug-and-play topology refinement method that is compatible with any domain-specific segmentation pipeline.
arXiv Detail & Related papers (2024-09-15T17:07:58Z)
- Repairing Networks of $\mathcal{EL_\perp}$ Ontologies using Weakening and Completing -- Extended version [4.287175019018556]
We propose a framework for repairing ontology networks that deals with this issue.
It defines basic operations such as weakening and completing.
We show the influence of the combination operators on the quality of the repaired network and present an implemented tool.
arXiv Detail & Related papers (2024-07-26T16:15:33Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel smooth-regularization fine-tuning method that rectifies these structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Ontology Completion with Natural Language Inference and Concept Embeddings: An Analysis [26.918368764004796]
We consider the problem of finding plausible knowledge that is missing from a given ontology, as a generalisation of the well-studied taxonomy expansion task.
One line of work treats this task as a Natural Language Inference (NLI) problem, relying on the knowledge captured by language models to identify the missing knowledge.
Another line of work uses concept embeddings to identify what different concepts have in common, taking inspiration from cognitive models of category-based induction.
arXiv Detail & Related papers (2024-03-25T21:46:35Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics (a sketch of the loss follows this entry).
The unhinged loss also allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
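For context, a hedged sketch of the unhinged loss in the classical binary form introduced by van Rooyen et al. (2015); the paper above may work with a multiclass variant, so treat this notation as an assumption rather than the paper's definition.

```latex
% Binary unhinged loss for labels y in {-1, +1} and score f(x);
% it is linear in the score (no hinge), hence "unhinged", which is
% what makes closed-form training dynamics tractable.
\ell\big(y, f(x)\big) = 1 - y\, f(x)
```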
- Ontology Revision based on Pre-trained Language Models [32.92146634065263]
Ontology revision aims to seamlessly incorporate a new ontology into an existing ontology.
Incoherence is a main potential cause of inconsistency, and reasoning with an inconsistent ontology yields meaningless answers; for example, if an ontology entails both $A \sqsubseteq B$ and $A \sqsubseteq \neg B$, then $A$ is unsatisfiable, and asserting an instance of $A$ makes the ontology inconsistent.
To deal with this problem, various ontology revision approaches have been proposed to define revision operators and design ranking strategies for axioms.
In this paper, we study how to apply pre-trained language models to ontology revision.
arXiv Detail & Related papers (2023-10-27T00:52:01Z)
- A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment.
arXiv Detail & Related papers (2023-10-07T09:15:08Z)
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has been proposed as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach that addresses two key technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
- Optimization and Generalization of Regularization-Based Continual Learning: a Loss Approximation Viewpoint [35.5156045701898]
We provide a novel viewpoint of regularization-based continual learning by formulating it as a second-order Taylor approximation of the loss function of each task.
Based on this viewpoint, we study the optimization aspects (i.e., convergence) as well as generalization properties (i.e., finite-sample guarantees) of regularization-based continual learning.
arXiv Detail & Related papers (2020-06-19T06:08:40Z)
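The loss-approximation viewpoint in the entry above can be written out explicitly. As a hedged sketch in our own notation (not necessarily the paper's): regularization-based methods penalize deviation from a previous task's optimum via a second-order Taylor expansion of that task's loss, in which the first-order term vanishes at the optimum.

```latex
% Second-order Taylor expansion of task k's loss around its optimum \theta_k^*;
% the gradient term vanishes at the optimum, leaving a quadratic penalty.
\mathcal{L}_k(\theta) \approx \mathcal{L}_k(\theta_k^*)
  + \tfrac{1}{2}\,(\theta - \theta_k^*)^{\top} H_k \,(\theta - \theta_k^*),
\qquad H_k = \nabla_\theta^2 \mathcal{L}_k(\theta_k^*)
```

In practice $H_k$ is approximated by a tractable surrogate such as a diagonal Fisher information matrix, which yields the familiar quadratic penalties of methods like EWC.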