From Knowledge Representation to Knowledge Organization and Back
- URL: http://arxiv.org/abs/2312.07302v2
- Date: Tue, 9 Jan 2024 00:04:31 GMT
- Title: From Knowledge Representation to Knowledge Organization and Back
- Authors: Fausto Giunchiglia and Mayukh Bagchi
- Abstract summary: Knowledge Representation (KR) and facet-analytical Knowledge Organization (KO) have been the two most prominent methodologies of data and knowledge modelling.
This paper elucidates both the KR and facet-analytical KO methodologies in detail and provides a functional mapping between them.
- Score: 11.970701039437493
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Knowledge Representation (KR) and facet-analytical Knowledge Organization
(KO) have been the two most prominent methodologies of data and knowledge
modelling in the Artificial Intelligence community and the Information Science
community, respectively. KR boasts a robust and scalable ecosystem of
technologies to support knowledge modelling while, often, underemphasizing the
quality of its models (and model-based data). KO, on the other hand, is less
technology-driven but has developed a robust framework of guiding principles
(canons) for ensuring modelling (and model-based data) quality. This paper
elucidates both the KR and facet-analytical KO methodologies in detail and
provides a functional mapping between them. Out of the mapping, the paper
proposes an integrated KO-enriched KR methodology with all the standard
components of a KR methodology plus the guiding canons of modelling quality
provided by KO. The practical benefits of the methodological integration have
been exemplified through a prominent case study of a KR-based image annotation
exercise.
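To make the proposed integration concrete, here is a minimal Python sketch of how KO canons could gate the creation of KR-style triples during image annotation. The facet names, the exclusivity check, and the triple format are all illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch: facet-analytical (KO) checks gating KR triple creation.
# Facet names and canons are illustrative, not the paper's actual schema.

FACETS = {
    "entity": {"person", "animal", "vehicle"},        # what is depicted
    "activity": {"walking", "running", "standing"},   # what it is doing
    "setting": {"indoor", "outdoor"},                 # where it is depicted
}

def check_canons(labels):
    """KO-style quality check: every label must belong to a declared facet,
    and each facet may be used at most once (mutual exclusivity)."""
    seen = set()
    for facet, value in labels:
        if facet not in FACETS:
            raise ValueError(f"unknown facet: {facet!r}")
        if value not in FACETS[facet]:
            raise ValueError(f"{value!r} is not a term of facet {facet!r}")
        if facet in seen:
            raise ValueError(f"facet {facet!r} used twice (violates exclusivity)")
        seen.add(facet)

def annotate(image_id, labels):
    """Emit KR-style (subject, predicate, object) triples, but only
    after the annotation passes the KO canon checks."""
    check_canons(labels)
    return [(image_id, f"has_{facet}", value) for facet, value in labels]

print(annotate("img_001", [("entity", "person"), ("activity", "walking")]))
# [('img_001', 'has_entity', 'person'), ('img_001', 'has_activity', 'walking')]
```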
Related papers
- High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
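As a rough picture of how implicit knowledge from a foundation model can yield coarse correspondence, the sketch below scores each query location against the support foreground by cosine similarity of features. The random tensors stand in for real encoder features (e.g., a frozen DINO backbone); everything here is an illustrative assumption, not the paper's pipeline.

```python
import torch
import torch.nn.functional as F

# Illustrative stand-ins for foundation-model feature maps (C x H x W);
# in practice these would come from a frozen encoder such as DINO.
C, H, W = 256, 32, 32
support_feat = torch.randn(C, H, W)
query_feat = torch.randn(C, H, W)
support_mask = torch.zeros(H, W)
support_mask[8:24, 8:24] = 1.0  # toy foreground region

# Flatten to (H*W, C) and L2-normalize so dot products are cosine similarities.
s = F.normalize(support_feat.flatten(1).t(), dim=1)  # (HW, C)
q = F.normalize(query_feat.flatten(1).t(), dim=1)    # (HW, C)

# Dense correspondence: similarity of every query location to every
# support location, restricted to support foreground, max-pooled.
sim = q @ s.t()                                      # (HW_q, HW_s)
fg = support_mask.flatten().bool()
coarse_score = sim[:, fg].max(dim=1).values.view(H, W)
print(coarse_score.shape)  # torch.Size([32, 32]): a coarse prior over the query
```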
arXiv Detail & Related papers (2024-09-10T08:04:11Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
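The first step can be pictured as prompting an LLM to verbalize a triplet, as in the minimal sketch below. The prompt wording and the generate() stub are assumptions for illustration, not the paper's actual prompt or model interface.

```python
# Hypothetical sketch of the triplet-to-context step. The prompt text and
# the generate() stub are illustrative; the paper's actual setup differs.

def triplet_to_prompt(head, relation, tail):
    return (
        f"Describe the fact ({head}, {relation}, {tail}) in one or two "
        f"rich, self-contained sentences that make the relation explicit."
    )

def generate(prompt):
    # Stand-in for a real LLM call (an API or a local model).
    return f"[LLM output for: {prompt}]"

triplet = ("Marie Curie", "award_received", "Nobel Prize in Physics")
context = generate(triplet_to_prompt(*triplet))
print(context)  # context-rich segment later used to train the KGC model
```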
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- From Knowledge Organization to Knowledge Representation and Back [10.13291863168277]
Knowledge Organization (KO) and Knowledge Representation (KR) have been the two mainstream methodologies of knowledge modelling.
This paper elucidates both the facet-analytical KO and KR methodologies in detail and provides a functional mapping between them.
The practical benefits of the methodological integration have been exemplified through the flagship application of the Digital University at the University of Trento, Italy.
arXiv Detail & Related papers (2024-01-22T08:28:28Z)
- Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors [0.3913403111891026]
The goal of knowledge graph completion (KGC) is to predict missing links in a KG using facts that are already known.
We propose a novel method to effectively unify structure information and language semantics without losing the power of inductive reasoning.
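One plausible reading of unifying structure information and language semantics is fusing a learned structure embedding with a frozen text embedding of the entity description, as sketched below. The dimensions, fusion layer, and scoring rule are assumptions for illustration, not the paper's structured-entity-anchor design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of fusing structure and text embeddings for KGC.
class FusedEntityEncoder(nn.Module):
    def __init__(self, n_entities, struct_dim=128, text_dim=768, out_dim=256):
        super().__init__()
        self.struct_emb = nn.Embedding(n_entities, struct_dim)  # learned structure side
        self.fuse = nn.Linear(struct_dim + text_dim, out_dim)   # joint projection

    def forward(self, entity_ids, text_feats):
        # text_feats: precomputed LM embeddings of entity descriptions
        x = torch.cat([self.struct_emb(entity_ids), text_feats], dim=-1)
        return F.normalize(self.fuse(x), dim=-1)

enc = FusedEntityEncoder(n_entities=1000)
ids = torch.tensor([0, 1])
text = torch.randn(2, 768)        # stand-in for description embeddings
anchors = enc(ids, text)
score = anchors[0] @ anchors[1]   # cosine score for a candidate link
print(float(score))
```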
arXiv Detail & Related papers (2023-11-07T11:17:55Z)
- A Closer Look at Knowledge Distillation with Features, Logits, and Gradients [81.39206923719455]
Knowledge distillation (KD) is a widely used strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
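The classical logit-based objective that this kind of analysis builds on fits in a few lines; the temperature value and toy tensors below are illustrative choices.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Classical logit-based distillation: KL divergence between the
    teacher's and student's temperature-softened class distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

student_logits = torch.randn(8, 10, requires_grad=True)  # toy batch, 10 classes
teacher_logits = torch.randn(8, 10)
loss = kd_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```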
arXiv Detail & Related papers (2022-03-18T21:26:55Z)
- Image Quality Assessment in the Modern Age [53.19271326110551]
This tutorial provides the audience with the basic theories, methodologies, and current progress of image quality assessment (IQA).
We will first revisit several subjective quality assessment methodologies, with emphasis on how to properly select visual stimuli.
Both hand-engineered and (deep) learning-based methods will be covered.
arXiv Detail & Related papers (2021-10-19T02:38:46Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- REGINA - Reasoning Graph Convolutional Networks in Human Action Recognition [1.2891210250935146]
This paper introduces REGINA, a novel way of incorporating REasoning into Graph convolutional networks IN human Action recognition.
The proposed strategy can be easily integrated in the existing GCN-based methods.
arXiv Detail & Related papers (2021-05-14T08:46:42Z)
- Knowledge-Guided Dynamic Systems Modeling: A Case Study on Modeling River Water Quality [8.110949636804774]
Modeling real-world phenomena is a focus of many science and engineering efforts, such as ecological modeling and financial forecasting.
Building an accurate model for complex and dynamic systems improves understanding of underlying processes and leads to resource efficiency.
Knowledge-based modeling builds in domain expertise but can be difficult to specify completely; at the opposite extreme, data-driven modeling learns a model directly from data, requiring extensive data and risking overfitting.
We focus on an intermediate approach, model revision, in which prior knowledge and data are combined to achieve the best of both worlds.
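A toy version of model revision: keep a knowledge-based model form (here, first-order decay, a textbook water-quality kinetic) and re-estimate only its rate constant from data. The equation and the synthetic data are assumptions for illustration, not the paper's river case study.

```python
import numpy as np

# Knowledge-based prior: exponential decay of a pollutant concentration.
def prior_model(t, c0=10.0, k=0.30):
    return c0 * np.exp(-k * t)

# Synthetic observations generated with a different true rate (0.42).
rng = np.random.default_rng(0)
t_obs = np.linspace(0, 10, 50)
c_obs = 10.0 * np.exp(-0.42 * t_obs) + rng.normal(0, 0.2, t_obs.size)

# Revision step: keep the model form, re-estimate k by least squares on
# log-concentrations (linear in k), anchored at the known initial c0.
mask = c_obs > 0
k_revised = -np.polyfit(t_obs[mask], np.log(c_obs[mask] / 10.0), 1)[0]
print(f"prior k = 0.30, revised k = {k_revised:.2f}")  # close to the true 0.42
```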
arXiv Detail & Related papers (2021-03-01T06:31:38Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
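One way to quantify agreement between a saliency ranking and human rationales is average precision of the ranking against the binary human mask, as sketched below. The choice of AP and the toy scores are illustrative assumptions; the paper evaluates a broader set of diagnostic properties and techniques.

```python
import numpy as np

def average_precision(saliency, human_mask):
    """Rank tokens by saliency and measure how well the ranking
    recovers the tokens humans marked as salient."""
    order = np.argsort(-saliency)              # tokens ranked by saliency
    hits = human_mask[order]
    cum_hits = np.cumsum(hits)
    precision_at_k = cum_hits / (np.arange(hits.size) + 1)
    return (precision_at_k * hits).sum() / max(hits.sum(), 1)

saliency = np.array([0.9, 0.1, 0.7, 0.05, 0.3])   # toy attribution scores
human = np.array([1, 0, 1, 0, 0])                 # toy human rationale mask
print(f"agreement (AP) = {average_precision(saliency, human):.2f}")
```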
arXiv Detail & Related papers (2020-09-25T12:01:53Z)