Repairing Networks of $\mathcal{EL_\perp}$ Ontologies using Weakening and Completing -- Extended version
- URL: http://arxiv.org/abs/2407.18848v1
- Date: Fri, 26 Jul 2024 16:15:33 GMT
- Title: Repairing Networks of $\mathcal{EL_\perp}$ Ontologies using Weakening and Completing -- Extended version
- Authors: Ying Li, Patrick Lambrix
- Abstract summary: We propose a framework for repairing ontology networks that deals with this issue.
It defines basic operations such as weakening and completing.
We show the influence of the combination operators on the quality of the repaired network and present an implemented tool.
- Score: 4.287175019018556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quality of ontologies and their alignments is crucial for developing high-quality semantics-based applications. Traditional debugging techniques repair ontology networks by removing unwanted axioms and mappings, but may thereby remove consequences that are correct in the domain of the ontology network. In this paper we propose a framework for repairing ontology networks that deals with this issue. It defines basic operations such as debugging, weakening and completing. Further, it defines combination operators that reflect choices in how and when to use the basic operators, as well as choices regarding the autonomy level of the ontologies and alignments in the ontology network. We show the influence of the combination operators on the quality of the repaired network and present an implemented tool. By using our framework together with existing algorithms for debugging, weakening and completing, we essentially provide a blueprint for extending previous work and systems.
Related papers
- Efficient compilation of expressive problem space specifications to
neural network solvers [0.0]
We describe an algorithm for compiling the former to the latter.
We explore and overcome complications that arise from targeting neural network solvers as opposed to standard SMT solvers.
arXiv Detail & Related papers (2024-01-24T09:13:09Z) - Rotation Equivariant Proximal Operator for Deep Unfolding Methods in Image Restoration [62.41329042683779]
We propose a high-accuracy rotation equivariant proximal network that embeds rotation symmetry priors into the deep unfolding framework.
arXiv Detail & Related papers (2023-12-25T11:53:06Z) - Ontology Revision based on Pre-trained Language Models [32.92146634065263]
Ontology revision aims to seamlessly incorporate a new ontology into an existing ontology.
Incoherence is a main potential cause of inconsistency, and reasoning with an inconsistent ontology yields meaningless answers.
To deal with this problem, various ontology revision approaches have been proposed to define revision operators and design ranking strategies for axioms.
In this paper, we study how to apply pre-trained language models to ontology revision.
arXiv Detail & Related papers (2023-10-27T00:52:01Z) - Repairing $\mathcal{EL}$ Ontologies Using Weakening and Completing [5.625946422295428]
We show that there is a trade-off between the amount of validation work for a domain expert and the quality of the repair in terms of correctness and completeness.
arXiv Detail & Related papers (2022-07-31T18:15:24Z) - Rank Diminishing in Deep Neural Networks [71.03777954670323]
Rank of neural networks measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z) - Reconstruction Task Finds Universal Winning Tickets [24.52604301906691]
Pruning well-trained neural networks is effective to achieve a promising accuracy-efficiency trade-off in computer vision regimes.
Most existing pruning algorithms focus only on the classification task defined on the source domain.
In this paper, we show that the image-level pretraining task is not capable of pruning models for diverse downstream tasks.
arXiv Detail & Related papers (2022-02-23T13:04:32Z) - A Unified Architecture of Semantic Segmentation and Hierarchical
Generative Adversarial Networks for Expression Manipulation [52.911307452212256]
We develop a unified architecture of semantic segmentation and hierarchical GANs.
A unique advantage of our framework is that on the forward pass the semantic segmentation network conditions the generative model.
We evaluate our method on two challenging facial expression translation benchmarks, AffectNet and RaFD, and a semantic segmentation benchmark, CelebAMask-HQ.
arXiv Detail & Related papers (2021-12-08T22:06:31Z) - Explainability-aided Domain Generalization for Image Classification [0.0]
We show that applying methods and architectures from the explainability literature can achieve state-of-the-art performance for the challenging task of domain generalization.
We develop a set of novel algorithms including DivCAM, an approach where the network receives guidance during training via gradient based class activation maps to focus on a diverse set of discriminative features.
Since these methods offer competitive performance on top of explainability, we argue that the proposed methods can be used as a tool to improve the robustness of deep neural network architectures.
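The gradient-guided activation maps this entry relies on can be sketched in a Grad-CAM-style computation (an illustrative stand-in; DivCAM's diversity objective is not reproduced here): each feature-map channel is weighted by its spatially averaged gradient, the weighted channels are summed, and a ReLU keeps only positively contributing locations.

```python
# Minimal sketch of a gradient-based class activation map (Grad-CAM
# style; illustrative only, not DivCAM itself).
# feature_maps: per-channel 2-D activations of the last conv layer.
# gradients: per-channel gradients of the class score w.r.t. those maps.

def grad_cam(feature_maps, gradients):
    cam = None
    for A, g in zip(feature_maps, gradients):
        # Channel weight: gradient averaged over all spatial positions.
        alpha = sum(sum(row) for row in g) / (len(g) * len(g[0]))
        weighted = [[alpha * v for v in row] for row in A]
        if cam is None:
            cam = weighted
        else:
            cam = [[c + w for c, w in zip(cr, wr)]
                   for cr, wr in zip(cam, weighted)]
    # ReLU: keep only locations that push the class score up.
    return [[max(0.0, v) for v in row] for row in cam]

# Toy example: one positively and one negatively weighted channel.
fmap = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],
         [[-1.0, -1.0], [-1.0, -1.0]]]
cam = grad_cam(fmap, grads)
```

Training-time guidance as in DivCAM would then add a loss term over such maps to spread attention across diverse discriminative features.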
arXiv Detail & Related papers (2021-04-05T02:27:01Z) - Joint Learning of Neural Transfer and Architecture Adaptation for Image
Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, together with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
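The edge-weighting idea can be illustrated in a few lines (a minimal hypothetical sketch, not the paper's code): each edge of the graph carries a learnable scalar, a node's input is the edge-weighted sum of all earlier nodes' outputs, and because the output is differentiable in every edge weight, the connectivity can be trained by gradient descent.

```python
# Minimal sketch (hypothetical, not the paper's implementation):
# a network viewed as a DAG over nodes (layer outputs), where each
# edge (i, j) carries a learnable scalar weight edge_w[(i, j)].

def forward(x, edge_w, ops):
    """Node j's input is the edge-weighted sum of all earlier outputs."""
    outs = [x]
    for j, op in enumerate(ops, start=1):
        agg = sum(edge_w[(i, j)] * outs[i] for i in range(j))
        outs.append(op(agg))
    return outs[-1]

# Toy instance: three nodes with simple scalar operations.
ops = [lambda v: 2.0 * v, lambda v: v + 1.0]
edge_w = {(0, 1): 0.5, (0, 2): 0.3, (1, 2): 0.7}
y = forward(1.0, edge_w, ops)

# The output is differentiable in each edge weight, so the weights can
# be learned like any other parameter; a finite-difference check:
eps = 1e-6
perturbed = dict(edge_w)
perturbed[(0, 2)] += eps
grad_02 = (forward(1.0, perturbed, ops) - y) / eps
```

In the paper's setting the scalar ops are real layers and the magnitudes of the learned edge weights reflect the strength of the corresponding connections.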
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss and, even worse, its discontinuity makes optimization of the deep network difficult.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
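Two of the surveyed ingredients can be sketched concretely (an illustrative stand-in, not any specific system's code): scaled sign binarization, where the scale α = mean(|w|) minimizes the L2 quantization error ||w − α·sign(w)||² for a fixed sign pattern, and the straight-through estimator, which passes gradients through the non-differentiable sign function.

```python
# Minimal sketch of weight binarization with an optimal scale factor
# (XNOR-Net style) and a straight-through gradient estimator.
# Illustrative only, not any particular system's implementation.

def binarize(weights):
    """Replace real weights by alpha * sign(w). For a fixed sign
    pattern b, ||w - alpha*b||^2 is minimized at alpha = mean(|w|)."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1.0 if w >= 0 else -1.0 for w in weights]
    return [alpha * s for s in signs], alpha

def ste_grad(weights, upstream):
    """Straight-through estimator: sign() has zero gradient almost
    everywhere, so the upstream gradient is passed through unchanged,
    but only where |w| <= 1 (clipping stabilizes training)."""
    return [g if abs(w) <= 1.0 else 0.0 for w, g in zip(weights, upstream)]

w = [0.9, -0.2, 0.5, -1.4]
bw, alpha = binarize(w)
grads = ste_grad(w, [1.0, 1.0, 1.0, 1.0])
```

The "minimizing the quantization error" and "reducing the gradient error" families in the survey refine exactly these two steps.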
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.