How to Agree to Disagree: Managing Ontological Perspectives using
Standpoint Logic
- URL: http://arxiv.org/abs/2206.06793v1
- Date: Tue, 14 Jun 2022 12:29:08 GMT
- Title: How to Agree to Disagree: Managing Ontological Perspectives using
Standpoint Logic
- Authors: Lucía Gómez Álvarez, Sebastian Rudolph and Hannes Strass
- Abstract summary: Standpoint Logic is a simple, yet versatile multi-modal logic "add-on" for existing KR languages.
We provide a polytime translation into the standpoint-free version of First-Order Standpoint Logic.
We then establish a similar translation for the very expressive description logic SROIQb_s underlying the OWL 2 DL language.
- Score: 2.9005223064604073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The importance of taking individual, potentially conflicting perspectives
into account when dealing with knowledge has been widely recognised. Many
existing ontology management approaches fully merge knowledge perspectives,
which may require weakening in order to maintain consistency; others represent
the distinct views in an entirely detached way.
As an alternative, we propose Standpoint Logic, a simple, yet versatile
multi-modal logic "add-on" for existing KR languages intended for the
integrated representation of domain knowledge relative to diverse, possibly
conflicting standpoints, which can be hierarchically organised, combined and
put in relation to each other.
Starting from the generic framework of First-Order Standpoint Logic (FOSL),
we subsequently focus our attention on the fragment of sentential formulas, for
which we provide a polytime translation into the standpoint-free version. This
result yields decidability and favourable complexities for a variety of highly
expressive decidable fragments of first-order logic. Using some elaborate
encoding tricks, we then establish a similar translation for the very
expressive description logic SROIQb_s underlying the OWL 2 DL ontology
language. By virtue of this result, existing highly optimised OWL reasoners can
be used to provide practical reasoning support for ontology languages extended
by standpoint modelling.
Related papers
- Proof of Thought : Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning [1.3003982724617653]
Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning.
This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs.
Key contributions include a robust type system with sort management for enhanced logical integrity and an explicit representation of rules that clearly distinguishes factual from inferential knowledge.
arXiv Detail & Related papers (2024-09-25T18:35:45Z)
- A Note on an Inferentialist Approach to Resource Semantics [48.65926948745294]
'Inferentialism' is the view that meaning is given in terms of inferential behaviour.
This paper shows how 'inferentialism' enables a versatile and expressive framework for resource semantics.
arXiv Detail & Related papers (2024-05-10T14:13:21Z)
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has received significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic [19.476840373850653]
Large language models are prone to hallucinations, as their reasoning procedures are unconstrained by logical principles.
We propose LoT (Logical Thoughts), a self-improvement prompting framework that leverages principles rooted in symbolic logic.
Experimental evaluations conducted on language tasks in diverse domains, including arithmetic, commonsense, symbolic, causal inference, and social problems, demonstrate the efficacy of enhanced reasoning by logic.
arXiv Detail & Related papers (2023-09-23T11:21:12Z)
- Description Logics Go Second-Order -- Extending EL with Universally Quantified Concepts [0.0]
We focus on the extension of the description logic $\mathcal{EL}$.
We show that for a useful fragment of the extension, the conclusions entailed by the different semantics coincide.
For a slightly smaller, but still useful, fragment, we were also able to show decidability of the extension.
arXiv Detail & Related papers (2023-08-16T09:37:38Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer [59.73454783958702]
We propose a symbolic reasoning architecture that chains many join operators together to model output logical expressions.
In particular, we demonstrate that such an ensemble of join-chains can express a broad subset of "tree-structured" first-order logical expressions, named FOET.
We find that the widely used multi-head self-attention module in transformer can be understood as a special neural operator that implements the union bound of the join operator in probabilistic predicate space.
arXiv Detail & Related papers (2022-10-06T07:39:58Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- Higher-order Logic as Lingua Franca -- Integrating Argumentative Discourse and Deep Logical Analysis [0.0]
We present an approach towards the deep, pluralistic logical analysis of argumentative discourse.
We use state-of-the-art automated reasoning technology for classical higher-order logic.
arXiv Detail & Related papers (2020-07-02T11:07:53Z)