Ontology Revision based on Pre-trained Language Models
- URL: http://arxiv.org/abs/2310.18378v2
- Date: Tue, 26 Dec 2023 16:56:19 GMT
- Title: Ontology Revision based on Pre-trained Language Models
- Authors: Qiu Ji, Guilin Qi, Yuxin Ye, Jiaye Li, Site Li, Jianjie Ren, Songtao Lu
- Abstract summary: Ontology revision aims to seamlessly incorporate a new ontology into an existing ontology.
Incoherence is a main potential cause of inconsistency, and reasoning with an inconsistent ontology yields meaningless answers.
To deal with this problem, various ontology revision approaches have been proposed to define revision operators and design ranking strategies for axioms.
In this paper, we study how to apply pre-trained models to revise ontologies.
- Score: 32.92146634065263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ontology revision aims to seamlessly incorporate a new ontology into an existing ontology and plays a crucial role in tasks such as ontology evolution, ontology maintenance, and ontology alignment. As with repairing single ontologies, resolving logical incoherence in ontology revision is important and meaningful, because incoherence is a main potential cause of inconsistency, and reasoning with an inconsistent ontology yields meaningless answers. To deal with this problem, various ontology revision approaches have been proposed that define revision operators and design ranking strategies for the axioms in an ontology. However, they rarely consider axiom semantics, which provides important information for differentiating axioms. Pre-trained models can be used to encode axiom semantics, and in recent years they have been widely applied to many natural language processing tasks as well as ontology-related ones. In this paper, we therefore study how to apply pre-trained models to revise ontologies. We first define four scoring functions that rank axioms with a pre-trained model, taking various information from an ontology into account. Based on these functions, we propose an ontology revision algorithm that deals with all unsatisfiable concepts at once. To improve efficiency, we also design an adapted revision algorithm that handles unsatisfiable concepts group by group. We conduct experiments over 19 ontology pairs and compare our algorithms and scoring functions with existing ones. The experiments show that our algorithms achieve promising performance.
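The abstract does not spell out the four scoring functions, but the general recipe can be sketched: verbalize each axiom, embed it with a pre-trained language model, and rank the axioms in a conflict by how weakly they relate to the rest of the ontology. A minimal sketch follows; the encoder choice, the verbalizations, the toy conflict set, and the `coherence_score` function are illustrative assumptions, not the paper's actual definitions.

```python
# Sketch: rank verbalized axioms with a pre-trained sentence encoder and
# remove the lowest-scoring axiom from a conflict set. Requires the
# `sentence-transformers` package; all names and data here are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not the paper's

# Verbalized axioms from the (toy) union of the existing and new ontology.
axioms = [
    "Every penguin is a bird.",
    "Every bird can fly.",
    "No penguin can fly.",
    "Every bird is an animal.",
]
emb = model.encode(axioms, normalize_embeddings=True)

def coherence_score(i: int) -> float:
    """Mean cosine similarity of axiom i to every other axiom.

    One plausible scoring function: an axiom semantically isolated from
    the rest of the ontology is the cheapest to give up.
    """
    sims = emb[i] @ emb.T
    return float((sims.sum() - sims[i]) / (len(axioms) - 1))

# A minimal set of conflicting axioms for the unsatisfiable concept Penguin;
# in practice this comes from a DL reasoner, which is not modeled here.
conflict = [0, 1, 2]

# Revision step: drop the lowest-scoring axiom in the conflict.
to_remove = min(conflict, key=coherence_score)
revised = [a for i, a in enumerate(axioms) if i != to_remove]
print("removed:", axioms[to_remove])
print("revised ontology:", revised)
```

The adapted algorithm mentioned above would, on this reading, restrict such a loop to one group of related unsatisfiable concepts at a time instead of resolving all conflicts at once.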
Related papers
- Ontology Completion with Natural Language Inference and Concept Embeddings: An Analysis [26.918368764004796]
We consider the problem of finding plausible knowledge that is missing from a given ontology, as a generalisation of the well-studied taxonomy expansion task.
One line of work treats this task as a Natural Language Inference (NLI) problem, relying on the knowledge captured by language models to identify the missing knowledge.
Another line of work uses concept embeddings to identify what different concepts have in common, taking inspiration from cognitive models for category based induction.
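As a rough illustration of the NLI formulation (a sketch under assumed names, not the paper's actual setup), a candidate axiom such as "every penguin is a bird" can be verbalized as a hypothesis and scored by an off-the-shelf NLI model against a premise describing the concept; the checkpoint and both sentences below are assumptions.

```python
# Sketch: score a verbalized candidate axiom with an off-the-shelf NLI model.
# The checkpoint and the premise/hypothesis strings are illustrative.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

premise = "A penguin is a flightless seabird of the Southern Hemisphere."
hypothesis = "A penguin is a bird."  # candidate axiom: Penguin SubClassOf Bird

scores = nli({"text": premise, "text_pair": hypothesis}, top_k=None)
entail = next(s["score"] for s in scores if s["label"].upper() == "ENTAILMENT")
print(f"plausibility of the candidate axiom: {entail:.3f}")
```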
arXiv Detail & Related papers (2024-03-25T21:46:35Z)
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Like Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
As with KGs, a promising approach is to learn embeddings in a latent vector space while additionally ensuring they adhere to the semantics of the underlying Description Logic (DL).
We propose a novel ontology embedding method named Box$2$EL for the DL EL++, which represents both concepts and roles as boxes.
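The entry does not describe the geometry, but the core idea of box embeddings is easy to sketch: each concept is an axis-aligned box, and a subsumption C ⊑ D is scored by how far C's box protrudes outside D's. The dimensions, values, and scoring function below are illustrative assumptions, not Box$2$EL's actual parameterization.

```python
# Sketch: concepts as axis-aligned boxes; C SubClassOf D holds to the degree
# that C's box lies inside D's. Toy values, not Box2EL's actual model.
import numpy as np

def box(center, offset):
    center, offset = np.asarray(center, float), np.asarray(offset, float)
    return center - offset, center + offset  # (lower corner, upper corner)

def inclusion_violation(c, d):
    """Total amount by which box c protrudes outside box d (0 => c inside d)."""
    (c_lo, c_hi), (d_lo, d_hi) = c, d
    return float(np.maximum(d_lo - c_lo, 0).sum() + np.maximum(c_hi - d_hi, 0).sum())

penguin = box(center=[0.2, 0.1], offset=[0.1, 0.1])
bird = box(center=[0.0, 0.0], offset=[0.5, 0.5])

print(inclusion_violation(penguin, bird))  # 0.0 -> Penguin inside Bird
print(inclusion_violation(bird, penguin))  # > 0 -> Bird not inside Penguin
```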
arXiv Detail & Related papers (2023-01-26T14:13:37Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Repairing $\mathcal{EL}$ Ontologies Using Weakening and Completing [5.625946422295428]
We show that there is a trade-off between the amount of validation work required of a domain expert and the quality of the repaired ontology in terms of correctness and completeness.
arXiv Detail & Related papers (2022-07-31T18:15:24Z)
- Semantic Search for Large Scale Clinical Ontologies [63.71950996116403]
We present a deep learning approach to build a search system for large clinical vocabularies.
We propose a Triplet-BERT model and a method that generates training data directly from the ontologies.
The model is evaluated on five real benchmark data sets, and the results show that our approach achieves high performance on both free-text-to-concept and concept-to-concept search over large vocabularies.
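A minimal sketch of the triplet objective presumably behind such a model: pull a concept label and a free-text synonym together while pushing an unrelated concept away. The encoder, margin, and example strings are assumptions, not the paper's setup.

```python
# Sketch: triplet objective over frozen sentence embeddings, in the spirit
# of a Triplet-BERT search model. Encoder and examples are illustrative.
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

anchor = "myocardial infarction"   # concept label from the vocabulary
positive = "heart attack"          # free-text synonym
negative = "fractured femur"       # unrelated concept

a, p, n = (torch.tensor(encoder.encode(t)).unsqueeze(0)
           for t in (anchor, positive, negative))
loss = F.triplet_margin_loss(a, p, n, margin=1.0)
print(f"triplet loss: {loss.item():.4f}")
```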
arXiv Detail & Related papers (2022-01-01T05:15:42Z)
- Probing Classifiers: Promises, Shortcomings, and Alternatives [28.877572447481683]
Probing classifiers have emerged as one of the prominent methodologies for interpreting and analyzing deep neural network models of natural language processing.
This article critically reviews the probing classifiers framework, highlighting shortcomings, improvements, and alternative approaches.
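To make the framework concrete (a generic sketch, not the article's own code): freeze a pre-trained encoder, extract its representations, and fit a simple classifier to predict a linguistic property; high probe accuracy is then read, with the caveats the article raises, as evidence the property is encoded.

```python
# Sketch: a linear probe on frozen sentence embeddings predicting a toy
# property (whether the sentence is a question). Encoder and data are
# illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["Is this a question?", "This is a statement.",
             "Could it rain today?", "It rained yesterday."]
labels = [1, 0, 1, 0]  # 1 = question

X = encoder.encode(sentences)          # frozen representations
probe = LogisticRegression().fit(X, labels)
print("probe accuracy:", probe.score(X, labels))
```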
arXiv Detail & Related papers (2021-02-24T18:36:14Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
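One concrete way to compute such an agreement (our illustration, not necessarily the paper's protocol) is to treat human-marked salient tokens as binary labels and the technique's saliency scores as predictions, then take the average precision per instance.

```python
# Sketch: agreement between a saliency map and a human rationale, scored as
# average precision. The tokens, annotations, and scores are toy values.
from sklearn.metrics import average_precision_score

tokens = ["the", "movie", "was", "absolutely", "wonderful"]
human = [0, 0, 0, 1, 1]                    # human-annotated salient tokens
saliency = [0.05, 0.20, 0.10, 0.70, 0.90]  # scores from an explainability method

print(f"rationale agreement (AP): {average_precision_score(human, saliency):.3f}")
```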
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.