Fuzzy Propositional Formulas under the Stable Model Semantics
- URL: http://arxiv.org/abs/2506.12804v1
- Date: Sun, 15 Jun 2025 10:38:56 GMT
- Title: Fuzzy Propositional Formulas under the Stable Model Semantics
- Authors: Joohyung Lee, Yi Wang
- Abstract summary: We define a stable model semantics for fuzzy propositional formulas, which generalizes both fuzzy propositional logic and the stable model semantics of classical propositional formulas. The syntax of the language is the same as the syntax of fuzzy propositional logic, but its semantics distinguishes stable models from non-stable models.
- Score: 7.052794276153519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We define a stable model semantics for fuzzy propositional formulas, which generalizes both fuzzy propositional logic and the stable model semantics of classical propositional formulas. The syntax of the language is the same as the syntax of fuzzy propositional logic, but its semantics distinguishes stable models from non-stable models. The generality of the language allows for highly configurable nonmonotonic reasoning for dynamic domains involving graded truth degrees. We show that several properties of Boolean stable models are naturally extended to this many-valued setting, and discuss how it is related to other approaches to combining fuzzy logic and the stable model semantics.
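The abstract describes the semantics only at a high level, so the following is a minimal, illustrative sketch of the intuition rather than the paper's actual definitions: it covers only normal rules (not arbitrary fuzzy propositional formulas), uses the Gödel conjunction (min) and the negation 1 - x, and tests stability by freezing negation at a candidate interpretation and recomputing a least fixpoint, in the spirit of fuzzy answer set programming. All names (`Rule`, `is_stable`, the example program) are assumptions made for illustration and are not taken from the paper.

```python
# Toy sketch of graded stable models (not the paper's general definition):
# normal rules, min-conjunction for bodies, "not a" evaluated as 1 - a.
from dataclasses import dataclass, field

@dataclass
class Rule:
    head: str
    pos: list = field(default_factory=list)    # positive body atoms
    neg: list = field(default_factory=list)    # negated body atoms
    weight: float = 1.0                        # constant degree in the body

def body_degree(rule, pos_vals, fixed):
    """Min-conjunction of the body; 'not a' is frozen at 1 - fixed[a]."""
    parts = [rule.weight]
    parts += [pos_vals.get(a, 0.0) for a in rule.pos]
    parts += [1.0 - fixed.get(a, 0.0) for a in rule.neg]
    return min(parts)

def least_model_of_reduct(rules, atoms, candidate):
    """Iterate rule applications from 0 with negation frozen at `candidate`."""
    vals = {a: 0.0 for a in atoms}
    changed = True
    while changed:
        changed = False
        for r in rules:
            d = body_degree(r, vals, candidate)
            if d > vals[r.head] + 1e-9:
                vals[r.head] = d
                changed = True
    return vals

def is_stable(rules, atoms, candidate, eps=1e-9):
    """A candidate is stable if it reproduces itself as the least model of its reduct."""
    lm = least_model_of_reduct(rules, atoms, candidate)
    return all(abs(lm[a] - candidate[a]) < eps for a in atoms)

# Example program:  p <- not q,   q <- (fact with degree 0.3)
program = [Rule("p", neg=["q"]), Rule("q", weight=0.3)]
atoms = ["p", "q"]

print(is_stable(program, atoms, {"p": 0.7, "q": 0.3}))  # True
print(is_stable(program, atoms, {"p": 1.0, "q": 0.3}))  # False
```

In this toy run, the candidate with p = 0.7 is accepted because its degree is justified by "not q" at 1 - 0.3, while p = 1.0 is rejected even though it classically satisfies the rule: the extra 0.3 of truth has no justification, which is the graded analogue of the minimality built into Boolean stable models.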
Related papers
- Explaining Datasets in Words: Statistical Models with Natural Language Parameters [66.69456696878842]
We introduce a family of statistical models -- including clustering, time series, and classification models -- parameterized by natural language predicates. We apply our framework to a wide range of problems: taxonomizing user chat dialogues, characterizing how they evolve across time, and finding categories where one language model outperforms another.
arXiv Detail & Related papers (2024-09-13T01:40:20Z) - The Stable Model Semantics for Higher-Order Logic Programming [4.106754434769354]
We propose a stable model semantics for higher-order logic programs.
Our semantics is developed using Approximation Fixpoint Theory (AFT), a powerful formalism.
We provide examples in different application domains, which demonstrate that higher-order logic programming under the stable model semantics is a powerful and versatile formalism.
arXiv Detail & Related papers (2024-08-20T06:03:52Z) - A Primer for Preferential Non-Monotonic Propositional Team Logics [0.0]
We show that team-based propositional logics naturally give rise to cumulative non-monotonic entailment relations.
Motivated by the non-classical interpretation of disjunction in team semantics, we give a precise characterization for preferential models for propositional dependence logic.
arXiv Detail & Related papers (2024-05-11T09:53:15Z) - Shape Arithmetic Expressions: Advancing Scientific Discovery Beyond Closed-Form Equations [56.78271181959529]
Generalized Additive Models (GAMs) can capture non-linear relationships between variables and targets, but they cannot capture intricate feature interactions.
We propose Shape Arithmetic Expressions (SHAREs), which fuse GAMs' flexible shape functions with the complex feature interactions found in mathematical expressions.
We also design a set of rules for constructing SHAREs that guarantee transparency of the found expressions beyond the standard constraints.
arXiv Detail & Related papers (2024-04-15T13:44:01Z) - Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
arXiv Detail & Related papers (2023-10-23T04:35:58Z) - On Loop Formulas with Variables [2.1955512452222696]
Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding.
We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang.
We extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models.
arXiv Detail & Related papers (2023-07-15T06:20:43Z) - Language Model Cascades [72.18809575261498]
Repeated interactions at test-time with a single model, or the composition of multiple models together, further expands capabilities.
Cases with control flow and dynamic structure require techniques from probabilistic programming.
We formalize several existing techniques from this perspective, including scratchpads / chain of thought, verifiers, STaR, selection-inference, and tool use.
arXiv Detail & Related papers (2022-07-21T07:35:18Z) - Awareness Logic: Kripke Lattices as a Middle Ground between Syntactic and Semantic Models [0.0]
We provide a lattice of Kripke models, induced by atom subset inclusion, in which uncertainty and unawareness are separate.
We show our model equivalent to both HMS and FH models by defining transformations which preserve satisfaction of formulas of a language for explicit knowledge.
arXiv Detail & Related papers (2021-06-24T10:04:44Z) - Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
arXiv Detail & Related papers (2020-12-30T20:40:48Z) - Exploring End-to-End Differentiable Natural Logic Modeling [21.994060519995855]
We explore end-to-end trained differentiable models that integrate natural logic with neural networks.
The proposed model adapts module networks to model natural logic operations, which is enhanced with a memory component to model contextual information.
arXiv Detail & Related papers (2020-11-08T18:18:15Z) - Measuring Association Between Labels and Free-Text Rationales [60.58672852655487]
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
We demonstrate that pipelines, the existing models for faithful extractive rationalization on information-extraction-style tasks, do not extend as reliably to "reasoning" tasks that require free-text rationales.
We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established.
arXiv Detail & Related papers (2020-10-24T03:40:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.