Refining Gelfond Rationality Principle Towards More Comprehensive Foundational Principles for Answer Set Semantics
- URL: http://arxiv.org/abs/2507.01833v1
- Date: Wed, 02 Jul 2025 15:47:54 GMT
- Title: Refining Gelfond Rationality Principle Towards More Comprehensive Foundational Principles for Answer Set Semantics
- Authors: Yi-Dong Shen, Thomas Eiter
- Abstract summary: Non-monotonic logic programming is the basis for a declarative problem-solving paradigm known as answer set programming (ASP). We evolve the Gelfond answer set (GAS) principles for answer set construction by refining Gelfond's rationality principle to well-supportedness, and we define new answer set semantics in terms of the refined GAS principles.
- Score: 21.386181640954725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-monotonic logic programming is the basis for a declarative problem-solving paradigm known as answer set programming (ASP). Departing from the seminal definition by Gelfond and Lifschitz in 1988 for simple normal logic programs, various answer set semantics have been proposed for extensions. We consider two important questions: (1) Should the minimal model property, constraint monotonicity and foundedness as defined in the literature be mandatory conditions for an answer set semantics in general? (2) If not, what other properties could be considered as general principles for answer set semantics? We address the two questions. First, it seems that the three aforementioned conditions may sometimes be too strong, and we illustrate with examples that enforcing them may exclude expected answer sets. Second, we evolve the Gelfond answer set (GAS) principles for answer set construction by refining Gelfond's rationality principle to well-supportedness, minimality w.r.t. negation by default and minimality w.r.t. epistemic negation. The principle of well-supportedness guarantees that every answer set is constructible from if-then rules obeying a level mapping and is thus free of circular justification, while the two minimality principles ensure that the formalism minimizes knowledge both at the level of answer sets and of world views. Third, to embody the refined GAS principles, we extend the notion of well-supportedness substantially to answer sets and world views, respectively. Fourth, we define new answer set semantics in terms of the refined GAS principles. Fifth, we use the refined GAS principles as an alternative baseline to intuitively assess the existing answer set semantics. Finally, we analyze the computational complexity.
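For context on the baseline the paper departs from, here is a minimal sketch (our illustration, not part of the paper) of the 1988 Gelfond-Lifschitz definition for ground normal logic programs: a set of atoms is an answer set iff it equals the least model of the program's reduct with respect to that set. The brute-force enumeration and the (head, positive body, negative body) rule encoding are assumptions made purely for illustration; the two toy programs hint at why well-supportedness, as refined in the paper, excludes circularly justified sets.

```python
from itertools import chain, combinations

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop every rule whose negative body
    intersects the candidate set, then strip negative bodies from the rest."""
    return [(head, pos) for (head, pos, neg) in program if not (neg & candidate)]

def least_model(positive_program):
    """Least model of a negation-free program, computed by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program):
    """Enumerate answer sets (stable models) of a ground normal program
    given as (head, positive_body, negative_body) triples of atom names."""
    atoms = sorted({head for (head, _, _) in program}
                   | set(chain.from_iterable(pos | neg for (_, pos, neg) in program)))
    for size in range(len(atoms) + 1):
        for cand in map(set, combinations(atoms, size)):
            if least_model(reduct(program, cand)) == cand:
                yield cand

# Circular justification: {p :- q.  q :- p.} has the empty set as its only
# answer set; {p, q} is a classical model but not well-supported.
circular = [("p", frozenset({"q"}), frozenset()),
            ("q", frozenset({"p"}), frozenset())]
print(list(answer_sets(circular)))   # -> [set()]

# Default negation: {p :- not q.  q :- not p.} has two answer sets, {p} and
# {q}; each is constructible bottom-up, consistent with a level mapping.
choice = [("p", frozenset(), frozenset({"q"})),
          ("q", frozenset(), frozenset({"p"}))]
print(list(answer_sets(choice)))     # -> [{'p'}, {'q'}]
```

In practice one would use a dedicated grounder and solver such as clingo; the sketch only isolates the reduct-and-least-model check that the refined GAS principles generalize.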
Related papers
- Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles [24.448749292993234]
The Helpful, Honest, and Harmless (HHH) principle is a framework for aligning AI systems with human values. We argue for an adaptive interpretation of the HHH principle and propose a reference framework for its adaptation to diverse scenarios. This work offers practical insights for improving AI alignment, ensuring that HHH principles remain both grounded and operationally effective in real-world AI deployment.
arXiv Detail & Related papers (2025-02-09T22:41:24Z)
- Revisiting Vacuous Reduct Semantics for Abstract Argumentation (Extended Version) [8.010966370223985]
We consider the notion of a vacuous reduct semantics for abstract argumentation frameworks.
We give a systematic overview on vacuous reduct semantics resulting from combining different admissibility-based and conflict-free semantics.
arXiv Detail & Related papers (2024-08-26T07:50:49Z)
- Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain of thought finetuning (cot-finetuning) aims to endow small language models (SLM) with reasoning ability to improve their performance towards specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, allowing the SLM to generate a rationale before providing an answer.
This mechanism enables the SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in the rationale.
We propose a robust post-thinking mechanism to generate answers before rationale.
arXiv Detail & Related papers (2024-04-14T07:19:27Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- A Unified View on Forgetting and Strong Equivalence Notions in Answer Set Programming [14.342696862884704]
We introduce a novel relativized equivalence notion, which is able to capture all related notions from the literature.
We then introduce an operator that combines projection and a relaxation of (SP)-forgetting to obtain the relativized simplifications.
arXiv Detail & Related papers (2023-12-13T09:05:48Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- Unifying different notions of quantum incompatibility into a strict hierarchy of resource theories of communication [60.18814584837969]
We introduce the notion of q-compatibility, which unifies different notions of incompatibility for POVMs, channels, and instruments.
We are able to pinpoint exactly what each notion of incompatibility consists of, in terms of information-theoretic resources.
arXiv Detail & Related papers (2022-11-16T21:33:31Z)
- Rediscovering Argumentation Principles Utilizing Collective Attacks [26.186171927678874]
We extend the principle-based approach to Argumentation Frameworks with Collective Attacks (SETAFs).
Our analysis shows that investigating principles based on decomposing the given SETAF (e.g. directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs.
arXiv Detail & Related papers (2022-05-06T11:41:23Z)
- Generalized Inverse Planning: Learning Lifted non-Markovian Utility for Generalizable Task Representation [83.55414555337154]
In this work, we study learning such utility from human demonstrations.
We propose a new quest, Generalized Inverse Planning, for utility learning in this domain.
We outline a computational framework, Maximum Entropy Inverse Planning (MEIP), that learns non-Markovian utility and associated concepts in a generative manner.
arXiv Detail & Related papers (2020-11-12T21:06:26Z)
- Constraint Monotonicity, Epistemic Splitting and Foundedness Could in General Be Too Strong in Answer Set Programming [32.60523531309687]
We consider the notions of subjective constraint monotonicity, epistemic splitting, and foundedness as the main criteria, or intuitions, used to compare different answer set semantics.
In this note, we demonstrate on some examples that they may be too strong in general and may exclude some desired answer sets and world views, respectively.
arXiv Detail & Related papers (2020-10-01T04:03:11Z)
- Recursive Rules with Aggregation: A Simple Unified Semantics [0.6662800021628273]
This paper describes a unified semantics for recursion with aggregation.
We present a formal definition of the semantics, prove important properties of the semantics, and compare with prior semantics.
We show that our semantics is simple and matches the desired results in all cases.
arXiv Detail & Related papers (2020-07-26T04:42:44Z)
- Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation [49.671882751569534]
We develop SynQG, a set of transparent syntactic rules which transform declarative sentences into question-answer pairs.
We utilize PropBank argument descriptions and VerbNet state predicates to incorporate shallow semantic content.
In order to improve syntactic fluency and eliminate grammatically incorrect questions, we employ back-translation over the output of these syntactic rules.
arXiv Detail & Related papers (2020-04-18T19:57:39Z)