Method for the semantic indexing of concept hierarchies, uniform
representation, use of relational database systems and generic and case-based
reasoning
- URL: http://arxiv.org/abs/1910.01539v2
- Date: Mon, 12 Jun 2023 17:02:57 GMT
- Title: Method for the semantic indexing of concept hierarchies, uniform
representation, use of relational database systems and generic and case-based
reasoning
- Authors: Uwe Petersohn, Sandra Zimmer, Jens Lehmann
- Abstract summary: The starting point of semantic indexing is the knowledge represented by concept hierarchies.
Keys are computed such that concepts are partially unifiable with all more specific concepts, and only semantically correct concepts are allowed to be added.
Because of the uniform representation, inference can be done using case-based reasoning and generic problem-solving methods.
- Score: 7.584720949329676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a method for semantic indexing and describes its
application in the field of knowledge representation. The starting point of
semantic indexing is the knowledge represented by concept hierarchies. The goal
is to assign keys to nodes (concepts) that are hierarchically ordered and
syntactically and semantically correct. With the indexing algorithm, keys are
computed such that concepts are partially unifiable with all more specific
concepts and only semantically correct concepts are allowed to be added. The
keys represent terminological relationships. Correctness and completeness of
the underlying indexing algorithm are proven. The use of classical relational
databases for the storage of instances is described. Because of the uniform
representation, inference can be done using case-based reasoning and generic
problem-solving methods.
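
The abstract describes an indexing algorithm whose keys make a concept partially unifiable with all of its more specific concepts, plus storage of instances in classical relational databases. As a minimal sketch, assuming a Dewey-style prefix code as the key scheme (the paper's actual key construction is not reproduced here, and the hierarchy, table layout, and names below are illustrative only), the following Python fragment assigns such keys to a toy concept hierarchy and answers a subsumption query with a plain SQL prefix match.

```python
# Minimal sketch (assumption, not the paper's algorithm): Dewey-style prefix
# keys over a toy concept hierarchy, with instances stored relationally.
import sqlite3

# Hypothetical hierarchy: concept -> its more general parent concept.
HIERARCHY = {
    "finding": None,
    "fracture": "finding",
    "femur_fracture": "fracture",
    "inflammation": "finding",
}

def assign_keys(hierarchy):
    """Assign keys so that a concept's key is a prefix of the key of every
    more specific concept (one reading of 'partial unifiability')."""
    children = {}
    for node, parent in hierarchy.items():
        children.setdefault(parent, []).append(node)
    keys = {}
    def walk(node, prefix):
        for i, child in enumerate(sorted(children.get(node, [])), start=1):
            keys[child] = f"{prefix}{i}."
            walk(child, keys[child])
    walk(None, "")
    return keys

keys = assign_keys(HIERARCHY)   # e.g. fracture -> "1.1.", femur_fracture -> "1.1.1."

# Store instances in a classical relational table, indexed by concept key.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, concept_key TEXT, data TEXT)")
con.execute("INSERT INTO instance (concept_key, data) VALUES (?, ?)",
            (keys["femur_fracture"], "case #17"))

# All instances subsumed by the more general concept 'fracture' are found
# with a simple prefix query on the key column.
rows = con.execute("SELECT data FROM instance WHERE concept_key LIKE ?",
                   (keys["fracture"] + "%",)).fetchall()
print(keys, rows)
```

Under this assumption, an ordinary index on the key column is enough for a relational database to retrieve the instances of a concept together with all of its specialisations.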
Related papers
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- ConceptHash: Interpretable Fine-Grained Hashing via Concept Discovery [128.30514851911218]
ConceptHash is a novel method that achieves sub-code level interpretability.
In ConceptHash, each sub-code corresponds to a human-understandable concept, such as an object part.
We incorporate language guidance to ensure that the learned hash codes are distinguishable within fine-grained object classes.
arXiv Detail & Related papers (2024-06-12T17:49:26Z)
- Domain Embeddings for Generating Complex Descriptions of Concepts in Italian Language [65.268245109828]
We propose a Distributional Semantic resource enriched with linguistic and lexical information extracted from electronic dictionaries.
The resource comprises 21 domain-specific matrices, one comprehensive matrix, and a Graphical User Interface.
Our model facilitates the generation of reasoned semantic descriptions of concepts by selecting matrices directly associated with concrete conceptual knowledge.
arXiv Detail & Related papers (2024-02-26T15:04:35Z)
- Simple Mechanisms for Representing, Indexing and Manipulating Concepts [46.715152257557804]
We will argue that learning a concept could be done by looking at its moment statistics matrix to generate a concrete representation or signature of that concept.
When the concepts are 'intersected', signatures of the concepts can be used to find a common theme across a number of related 'intersected' concepts.
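
As one possible reading of the moment-statistics idea above (an assumption, not the paper's construction), the sketch below takes a concept's signature to be the second-moment matrix of the embedding vectors of its member words and estimates a shared theme between two concepts from the overlap of their signatures' dominant subspaces; the words and embeddings are synthetic.

```python
# Sketch under assumptions: a concept's "signature" is taken to be the
# second-moment matrix of the embedding vectors of its member words.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Hypothetical embeddings for words belonging to two related concepts.
embed = {w: rng.normal(size=dim) for w in
         ["dog", "cat", "wolf", "lion", "tiger", "poodle"]}
concept_a = ["dog", "wolf", "poodle"]   # e.g. canines
concept_b = ["cat", "lion", "tiger"]    # e.g. felines

def signature(words, k=3):
    """Second-moment matrix of the word vectors; return its top-k
    eigenvectors as a compact signature of the concept."""
    X = np.stack([embed[w] for w in words])
    moment = X.T @ X / len(words)
    vals, vecs = np.linalg.eigh(moment)
    return vecs[:, -k:]            # dominant directions

def common_theme(sig_a, sig_b):
    """Overlap of the dominant subspaces, used as a stand-in for finding a
    shared theme across 'intersected' concepts."""
    overlap = np.linalg.svd(sig_a.T @ sig_b, compute_uv=False)
    return overlap.max()           # close to 1.0 => strongly shared direction

print(common_theme(signature(concept_a), signature(concept_b)))
```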
arXiv Detail & Related papers (2023-10-18T17:54:29Z)
- Language Models As Semantic Indexers [78.83425357657026]
We introduce LMIndexer, a self-supervised framework to learn semantic IDs with a generative language model.
We show the high quality of the learned IDs and demonstrate their effectiveness on three tasks including recommendation, product search, and document retrieval.
arXiv Detail & Related papers (2023-10-11T18:56:15Z)
- Ranking-based Argumentation Semantics Applied to Logical Argumentation (full version) [2.9005223064604078]
We investigate the behaviour of ranking-based semantics for structured argumentation.
We show that a wide class of ranking-based semantics gives rise to so-called culpability measures.
arXiv Detail & Related papers (2023-07-31T15:44:33Z)
- Semantic Search for Large Scale Clinical Ontologies [63.71950996116403]
We present a deep learning approach to build a search system for large clinical vocabularies.
We propose a Triplet-BERT model together with a method for generating semantic training data.
The model is evaluated on five real benchmark data sets, and the results show that our approach performs well on both free-text-to-concept and concept-to-concept search over concept vocabularies.
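
The entry names a Triplet-BERT model without detailing it; the sketch below only illustrates a generic triplet-loss training signal in PyTorch, with a toy bag-of-tokens encoder standing in for BERT and a hand-written query/concept triplet standing in for the generated training data.

```python
# Minimal sketch (assumption): triplet training signal for matching free-text
# queries to concept names, with a placeholder encoder standing in for BERT.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for a BERT-style sentence encoder: bag of hashed tokens."""
    def __init__(self, vocab=10_000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)

    def forward(self, texts):
        ids = [torch.tensor([hash(t) % self.emb.num_embeddings
                             for t in s.split()]) for s in texts]
        offsets = torch.tensor([0] + [len(i) for i in ids[:-1]]).cumsum(0)
        return self.emb(torch.cat(ids), offsets)

encoder = ToyEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.5)

# Hypothetical triplet: query, matching concept, non-matching concept.
anchor = encoder(["pain in the lower leg"])
positive = encoder(["lower limb pain"])
negative = encoder(["myocardial infarction"])
loss = loss_fn(anchor, positive, negative)
loss.backward()
print(float(loss))
```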
arXiv Detail & Related papers (2022-01-01T05:15:42Z)
- Quotient Space-Based Keyword Retrieval in Sponsored Search [7.639289301435027]
Synonymous keyword retrieval has become an important problem for sponsored search.
We propose a novel quotient space-based retrieval framework to address this problem.
This method has been successfully implemented in Baidu's online sponsored search system.
arXiv Detail & Related papers (2021-05-26T07:27:54Z)
- Unsupervised Key-phrase Extraction and Clustering for Classification Scheme in Scientific Publications [0.0]
We investigate possible ways of automating parts of the Systematic Mapping (SM) and Systematic Review (SR) process.
Key-phrases are extracted from scientific documents using unsupervised methods, which are then used to construct the corresponding Classification Scheme.
We also explore how clustering can be used to group related key-phrases.
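
The summary leaves the extraction method open; as a rough stand-in rather than the paper's pipeline, the sketch below ranks unigram and bigram candidates with TF-IDF and groups them with k-means, a common unsupervised baseline for key-phrase extraction and clustering (the toy abstracts are made up).

```python
# Minimal sketch (assumption, not the paper's pipeline): score candidate
# phrases with TF-IDF and group them with k-means to suggest a scheme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "semantic indexing of concept hierarchies with relational databases",
    "case based reasoning over uniform knowledge representation",
    "deep learning for clinical ontology search and retrieval",
]

# Unsupervised key-phrase candidates: unigrams and bigrams weighted by TF-IDF.
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(abstracts)
phrases = vec.get_feature_names_out()

# Cluster the phrase vectors (columns of X) to group related key-phrases.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X.T.toarray())
for cluster in range(2):
    print(cluster, [p for p, c in zip(phrases, km.labels_) if c == cluster][:5])
```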
arXiv Detail & Related papers (2021-01-25T10:17:33Z)
- On the Learnability of Concepts: With Applications to Comparing Word Embedding Algorithms [0.0]
We introduce the notion of "concept" as a list of words that have shared semantic content.
We first use this notion to measure the learnability of concepts on pretrained word embeddings.
We then develop a statistical analysis of concept learnability, based on hypothesis testing and ROC curves, in order to compare the relative merits of various embedding algorithms.
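
A minimal sketch of the learnability measurement, under the assumption that learnability can be proxied by how well a cross-validated linear classifier separates a concept's words from random words in a given embedding space (the embeddings below are synthetic); comparing the resulting ROC AUC across embedding spaces mirrors the comparison described above.

```python
# Sketch under assumptions: measure how separable a concept's words are from
# random words in a given embedding space, summarised by ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
dim = 50

# Hypothetical embedding: concept words share a direction, negatives do not.
concept_vecs = rng.normal(size=(40, dim)) + 2.0   # stand-in for e.g. colour terms
random_vecs = rng.normal(size=(40, dim))

X = np.vstack([concept_vecs, random_vecs])
y = np.array([1] * 40 + [0] * 40)

# Cross-validated scores for the "is this word in the concept?" classifier.
scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=5, method="predict_proba")[:, 1]
print("concept learnability (ROC AUC):", roc_auc_score(y, scores))
```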
arXiv Detail & Related papers (2020-06-17T14:25:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.