Knowledge Patterns
- URL: http://arxiv.org/abs/2005.04306v1
- Date: Fri, 8 May 2020 22:33:30 GMT
- Title: Knowledge Patterns
- Authors: Peter Clark, John Thompson, Bruce Porter
- Abstract summary: This paper describes a new technique, called "knowledge patterns", for helping construct axiom-rich, formal ontologies.
Knowledge patterns provide an important insight into the structure of a formal ontology.
We describe the technique and an application built using them, and then critique its strengths and weaknesses.
- Score: 19.57676317580847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a new technique, called "knowledge patterns", for
helping construct axiom-rich, formal ontologies, based on identifying and
explicitly representing recurring patterns of knowledge (theory schemata) in
the ontology, and then stating how those patterns map onto domain-specific
concepts in the ontology. From a modeling perspective, knowledge patterns
provide an important insight into the structure of a formal ontology: rather
than viewing a formal ontology simply as a list of terms and axioms, knowledge
patterns view it as a collection of abstract, modular theories (the "knowledge
patterns") plus a collection of modeling decisions stating how different
aspects of the world can be modeled using those theories. Knowledge patterns
make both those abstract theories and their mappings to the domain of interest
explicit, thus making modeling decisions clear, and avoiding some of the
ontological confusion that can otherwise arise. In addition, from a
computational perspective, knowledge patterns provide a simple and
computationally efficient mechanism for facilitating knowledge reuse. We
describe the technique and an application built using them, and then critique
its strengths and weaknesses. We conclude that this technique enables us to
better explicate both the structure and modeling decisions made when
constructing a formal axiom-rich ontology.
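The core idea of the abstract (an abstract theory schema plus an explicit mapping of its symbols onto domain-specific concepts) can be sketched as a toy data structure. This is only a minimal illustration of the pattern-plus-mapping idea: all names below (`flow_pattern`, `circuit_mapping`, the axiom syntax) are hypothetical, and the paper itself works with full first-order axioms rather than string templates.

```python
# Minimal sketch of a "knowledge pattern": an abstract theory
# (axiom schemata over pattern-level symbols) plus an explicit
# mapping that renames those symbols to domain-specific concepts.
# All names here are illustrative, not taken from the paper.

def instantiate(pattern_axioms, mapping):
    """Apply a symbol mapping to every axiom template in the pattern."""
    instantiated = []
    for axiom in pattern_axioms:
        for abstract_sym, domain_sym in mapping.items():
            axiom = axiom.replace(abstract_sym, domain_sym)
        instantiated.append(axiom)
    return instantiated

# Abstract "directed-flow" theory: stuff moves from a source to a
# destination along a connection.
flow_pattern = [
    "flows(Stuff, Source, Dest) :- connected(Source, Dest), at(Stuff, Source)",
    "at(Stuff, Dest) :- flows(Stuff, Source, Dest)",
]

# A modeling decision, made explicit: view an electrical circuit
# as an instance of directed flow.
circuit_mapping = {
    "Stuff": "Current",
    "Source": "PowerSupply",
    "Dest": "Component",
}

for axiom in instantiate(flow_pattern, circuit_mapping):
    print(axiom)
```

Because the mapping is explicit and separate from the abstract theory, the same `flow_pattern` could be reused with a different mapping (say, water in a pipe network), which is the knowledge-reuse benefit the abstract points to.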
Related papers
- Mining Frequent Structures in Conceptual Models [2.841785306638839]
We propose a general approach to the problem of discovering frequent structures in conceptual modeling languages.
We use the combination of a frequent subgraph mining algorithm and graph manipulation techniques.
The primary objective is to offer a support facility for language engineers.
arXiv Detail & Related papers (2024-06-11T10:24:02Z) - Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z) - Categorical semiotics: Foundations for Knowledge Integration [0.0]
We tackle the challenging task of developing a comprehensive framework for defining and analyzing deep learning architectures.
Our methodology employs graphical structures that resemble Ehresmann's sketches, interpreted within a universe of fuzzy sets.
This approach offers a unified theory that elegantly encompasses both deterministic and non-deterministic neural network designs.
arXiv Detail & Related papers (2024-04-01T23:19:01Z) - Learning Interpretable Concepts: Unifying Causal Representation Learning
and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Semantics, Ontology and Explanation [0.0]
We discuss the relation between ontological unpacking and other forms of explanation in philosophy and science.
We also discuss the relation between ontological unpacking and other forms of explanation in the area of Artificial Intelligence.
arXiv Detail & Related papers (2023-04-21T16:54:34Z) - Abstract Interpretation for Generalized Heuristic Search in Model-Based
Planning [50.96320003643406]
Domain-general model-based planners often derive their generality by constructing search heuristics through the relaxation of symbolic world models.
We illustrate how abstract interpretation can serve as a unifying framework for these abstractions, extending the reach of search to richer world models.
These abstractions can also be integrated with learning, allowing agents to jumpstart planning in novel world models via abstraction-derived information.
arXiv Detail & Related papers (2022-08-05T00:22:11Z) - On the Complexity of Learning Description Logic Ontologies [14.650545418986058]
Ontologies are a popular way of representing domain knowledge, in particular, knowledge in domains related to life sciences.
We provide a formal specification of the exact and the probably correct learning models from learning theory.
arXiv Detail & Related papers (2021-03-25T09:18:12Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z) - Modelling Compositionality and Structure Dependence in Natural Language [0.12183405753834563]
Drawing on linguistics and set theory, a formalisation of these ideas is presented in the first half of this thesis.
We see how cognitive systems that process language need to have certain functional constraints.
Using the advances of word embedding techniques, a model of relational learning is simulated.
arXiv Detail & Related papers (2020-11-22T17:28:50Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.