Awareness Logic: Kripke Lattices as a Middle Ground between Syntactic
and Semantic Models
- URL: http://arxiv.org/abs/2106.12868v1
- Date: Thu, 24 Jun 2021 10:04:44 GMT
- Title: Awareness Logic: Kripke Lattices as a Middle Ground between Syntactic
and Semantic Models
- Authors: Gaia Belardinelli and Rasmus K. Rendsvig
- Abstract summary: We provide a lattice of Kripke models, induced by atom subset inclusion, in which uncertainty and unawareness are separate.
We show our model equivalent to both HMS and FH models by defining transformations which preserve satisfaction of formulas of a language for explicit knowledge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The literature on awareness modeling includes both syntax-free and
syntax-based frameworks. Heifetz, Meier & Schipper (HMS) propose a lattice
model of awareness that is syntax-free. While their lattice approach is elegant
and intuitive, it precludes the simple option of relying on formal language to
induce lattices, and does not explicitly distinguish uncertainty from
unawareness. Contra this, the most prominent syntax-based solution, the
Fagin-Halpern (FH) model, accounts for this distinction and offers a simple
representation of awareness, but lacks the intuitiveness of the lattice
structure. Here, we combine these two approaches by providing a lattice of
Kripke models, induced by atom subset inclusion, in which uncertainty and
unawareness are separate. We show our model equivalent to both HMS and FH
models by defining transformations between them which preserve satisfaction of
formulas of a language for explicit knowledge, and obtain completeness through
our and HMS' results. Lastly, we prove that the Kripke lattice model can be
shown equivalent to the FH model (when awareness is propositionally determined)
also with respect to the language of the Logic of General Awareness, for which
the FH model was originally proposed.
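To make the lattice construction concrete, here is a minimal sketch in Python, under our own assumptions: a toy Kripke model with two atoms, where each atom subset induces a restricted model and subset inclusion orders the nodes. All names (`restrict`, `VAL`, `REL`) are ours, not the paper's notation.

```python
from itertools import combinations

# Toy model: worlds, an accessibility relation (uncertainty), and a
# valuation assigning to each world the atoms true there.
ATOMS = {"p", "q"}
VAL = {"w1": {"p", "q"}, "w2": {"p"}}
REL = {("w1", "w2"), ("w2", "w1"), ("w1", "w1"), ("w2", "w2")}

def restrict(val, atoms):
    """Project the valuation onto an atom subset: the model at that lattice node."""
    return {w: true_atoms & atoms for w, true_atoms in val.items()}

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Each atom subset induces one Kripke model; subset inclusion orders them.
lattice = {frozenset(a): restrict(VAL, a) for a in powerset(ATOMS)}

# Uncertainty lives inside each model (via REL); unawareness is descent in
# the lattice: at node {"p"}, the atom "q" is invisible at every world.
assert lattice[frozenset({"p"})]["w1"] == {"p"}
```

The point of the separation is visible here: changing `REL` changes what an agent is uncertain about within one model, while moving to a smaller atom subset changes what the agent can be aware of at all.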
Related papers
- Derivative-Free Diffusion Manifold-Constrained Gradient for Unified XAI [59.96044730204345]
We introduce Derivative-Free Diffusion Manifold-Constrained Gradient (FreeMCG).
FreeMCG serves as an improved basis for explainability of a given neural network.
We show that our method yields state-of-the-art results while preserving the essential properties expected of XAI tools.
arXiv Detail & Related papers (2024-11-22T11:15:14Z) - Autoformalize Mathematical Statements by Symbolic Equivalence and Semantic Consistency [22.86318578119266]
We introduce a novel framework that scores and selects the best result from k autoformalization candidates based on symbolic equivalence and semantic consistency.
Our experiments on the MATH and miniF2F datasets demonstrate that our approach significantly enhances autoformalization accuracy.
arXiv Detail & Related papers (2024-10-28T11:37:39Z) - Shape Arithmetic Expressions: Advancing Scientific Discovery Beyond Closed-Form Equations [56.78271181959529]
Generalized Additive Models (GAMs) can capture non-linear relationships between variables and targets, but they cannot capture intricate feature interactions.
We propose Shape Arithmetic Expressions (SHAREs), which fuse GAMs' flexible shape functions with the complex feature interactions found in mathematical expressions.
We also design a set of rules for constructing SHAREs that guarantee transparency of the found expressions beyond the standard constraints.
arXiv Detail & Related papers (2024-04-15T13:44:01Z) - Semantic Approach to Quantifying the Consistency of Diffusion Model Image Generation [0.40792653193642503]
We identify the need for an interpretable, quantitative score of the repeatability, or consistency, of image generation in diffusion models.
We propose a semantic approach, using a pairwise mean CLIP score as our semantic consistency score.
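A pairwise mean similarity score of this kind can be sketched in a few lines. This is an illustration under our own assumptions, with plain vectors standing in for real CLIP embeddings; none of the names below come from the paper.

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def consistency_score(embeddings):
    """Mean cosine similarity over all unordered pairs of embeddings."""
    pairs = list(combinations(embeddings, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Identical generations score 1.0; orthogonal ones score 0.0.
assert consistency_score([(1, 0), (1, 0), (1, 0)]) == 1.0
```

Averaging over all pairs rather than against a single reference makes the score symmetric in the generated images, which is what "repeatability" asks for.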
arXiv Detail & Related papers (2024-04-12T20:16:03Z) - On the Origins of Linear Representations in Large Language Models [51.88404605700344]
We introduce a simple latent variable model to formalize concept dynamics in next-token prediction.
Experiments show that linear representations emerge when learning from data matching the latent variable model.
We additionally confirm some predictions of the theory using the LLaMA-2 large language model.
arXiv Detail & Related papers (2024-03-06T17:17:36Z) - CLIP-QDA: An Explainable Concept Bottleneck Model [3.570403495760109]
We introduce an explainable algorithm built on a multi-modal foundation model that performs fast and explainable image classification.
Our explanations compete with existing XAI methods while being faster to compute.
arXiv Detail & Related papers (2023-11-30T18:19:47Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal
Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
arXiv Detail & Related papers (2020-12-30T20:40:48Z) - Awareness Logic: A Kripke-based Rendition of the Heifetz-Meier-Schipper
Model [0.0]
We present a model based on a lattice of Kripke models, induced by atom subset inclusion, in which uncertainty and unawareness are separate.
We show the models to be equivalent by defining transformations between them which preserve formula satisfaction.
arXiv Detail & Related papers (2020-12-23T21:24:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.