The Stable Model Semantics for Higher-Order Logic Programming
- URL: http://arxiv.org/abs/2408.10563v1
- Date: Tue, 20 Aug 2024 06:03:52 GMT
- Title: The Stable Model Semantics for Higher-Order Logic Programming
- Authors: Bart Bogaerts, Angelos Charalambidis, Giannos Chatziagapis, Babis Kostopoulos, Samuele Pollaci, Panos Rondogiannis
- Abstract summary: We propose a stable model semantics for higher-order logic programs.
Our semantics is developed using Approximation Fixpoint Theory (AFT), a powerful formalism that has successfully been used to give meaning to diverse non-monotonic formalisms.
We provide examples in different application domains, which demonstrate that higher-order logic programming under the stable model semantics is a powerful and versatile formalism.
- Score: 4.106754434769354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a stable model semantics for higher-order logic programs. Our semantics is developed using Approximation Fixpoint Theory (AFT), a powerful formalism that has successfully been used to give meaning to diverse non-monotonic formalisms. The proposed semantics generalizes the classical two-valued stable model semantics of (Gelfond and Lifschitz 1988) as well as the three-valued one of (Przymusinski 1990), retaining their desirable properties. Due to the use of AFT, we also get for free alternative semantics for higher-order logic programs, namely supported model, Kripke-Kleene, and well-founded. Additionally, we define a broad class of stratified higher-order logic programs and demonstrate that they have a unique two-valued higher-order stable model which coincides with the well-founded semantics of such programs. We provide a number of examples in different application domains, which demonstrate that higher-order logic programming under the stable model semantics is a powerful and versatile formalism, which can potentially form the basis of novel ASP systems.
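As a concrete reference point for the classical first-order case that the paper generalizes, here is a minimal sketch (our illustration, not code from the paper) that brute-forces the two-valued stable models of (Gelfond and Lifschitz 1988) for a tiny propositional program via the Gelfond-Lifschitz reduct; the example program is hypothetical:

    from itertools import chain, combinations

    def reduct(program, candidate):
        # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
        # the candidate set; delete negative literals from the remaining rules.
        return [(h, pos) for (h, pos, neg) in program if not (set(neg) & candidate)]

    def least_model(positive_program):
        # Least Herbrand model of a negation-free program via fixpoint iteration.
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in positive_program:
                if set(pos) <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def stable_models(program, atoms):
        # M is stable iff M equals the least model of the reduct w.r.t. M.
        subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
        return [set(m) for m in subsets if least_model(reduct(program, set(m))) == set(m)]

    # p <- not q.   q <- not p.   Rules are (head, positive body, negative body).
    prog = [("p", [], ["q"]), ("q", [], ["p"])]
    print(stable_models(prog, ["p", "q"]))  # [{'p'}, {'q'}]

The program has the two expected stable models {p} and {q}; the higher-order semantics of the paper conservatively extends this construction.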
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
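As a toy illustration of what a formal tokenizer model can look like (our sketch, not the paper's framework), the snippet below treats a tokenizer as a pair of maps encode/decode over a hypothetical vocabulary and checks the round-trip property decode(encode(s)) == s:

    vocab = ["a", "b", "ab"]  # hypothetical toy vocabulary

    def encode(text: str) -> list[int]:
        # Greedy longest-match segmentation over the toy vocabulary.
        ids, i = [], 0
        while i < len(text):
            match = max((t for t in vocab if text.startswith(t, i)), key=len, default=None)
            if match is None:
                raise ValueError(f"untokenizable input at position {i}")
            ids.append(vocab.index(match))
            i += len(match)
        return ids

    def decode(ids: list[int]) -> str:
        return "".join(vocab[i] for i in ids)

    for s in ["ab", "aab", "ba"]:
        assert decode(encode(s)) == s  # exact round trip for this toy model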
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Faster Cascades via Speculative Decoding [66.16909847419198]
Cascades and speculative decoding are approaches to improving language models' inference efficiency.
We propose new speculative cascading techniques that implement their deferral rule through speculative execution.
We show that our approach yields better cost quality trade-offs than cascading and speculative decoding baselines.
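The paper's exact deferral rule is not reproduced here; the following hedged sketch only shows the general shape of a speculative cascade, where `small` and `large` are hypothetical callables returning a (token, confidence) pair:

    def speculative_cascade(prompt, small, large, steps, threshold=0.9):
        # Hypothetical illustration: a small model drafts tokens speculatively,
        # and a per-token deferral rule decides whether to keep the draft or
        # defer to the large model's prediction.
        out = list(prompt)
        for _ in range(steps):
            token, conf = small(out)      # cheap speculative draft
            if conf < threshold:          # deferral rule: low confidence -> defer
                token, _ = large(out)     # expensive model overrides the draft
            out.append(token)
        return out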
arXiv Detail & Related papers (2024-05-29T16:55:08Z)
- Parameterized Dynamic Logic -- Towards A Cyclic Logical Framework for General Program Specification and Verification [0.174048653626208]
We propose a parameterized 'dynamic-logic-style' formalism, namely $DL_p$, for specifying and reasoning about general program models.
$DL_p$ provides a flexible verification framework to encompass different dynamic logic theories.
Case studies show how $DL_p$ works for reasoning about different types of program models.
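For orientation, a textbook dynamic-logic assertion (not one of the paper's case studies) is $[x := x + 1]\,(x > 0)$: after every execution of the assignment $x := x + 1$, the postcondition $x > 0$ holds, so the formula is true exactly in states where $x \geq 0$.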
arXiv Detail & Related papers (2024-04-28T07:08:44Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
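The HOL encoding itself is not shown here; as a minimal sketch of the underlying semantics (using a hypothetical two-statement framework), an interpretation v is a two-valued model of an abstract dialectical framework iff v(s) agrees with the acceptance condition of every statement s:

    from itertools import product

    statements = ["a", "b"]
    acceptance = {
        "a": lambda v: not v["b"],  # a is accepted iff b is not
        "b": lambda v: not v["a"],  # b is accepted iff a is not
    }

    models = []
    for vals in product([True, False], repeat=len(statements)):
        v = dict(zip(statements, vals))
        if all(v[s] == acceptance[s](v) for s in statements):
            models.append(v)
    print(models)  # two models: {a: True, b: False} and {a: False, b: True}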
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
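A hedged sketch of the general recipe (function and parameter names are our assumptions, not the paper's API): represent a text by the model's log-likelihoods over a shared set of sampled continuations, then compare texts by the similarity of those score vectors; `logprob(model, prefix, continuation)` is a hypothetical stub.

    import math

    def trajectory_representation(model, text, continuations, logprob):
        # Score vector: one log-likelihood per candidate trajectory extension.
        return [logprob(model, text, c) for c in continuations]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    # Texts whose continuation distributions agree score high, e.g.:
    # cosine(trajectory_representation(m, "a couch", C, lp),
    #        trajectory_representation(m, "a sofa",  C, lp))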
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- On Loop Formulas with Variables [2.1955512452222696]
Recently, Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding.
We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang.
We extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models.
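A classic ground illustration of why loop formulas matter (the paper's subject is their lifting to variables): for the program $\{p \leftarrow q,\ q \leftarrow p\}$, the atoms $p, q$ form a loop with no external support, giving the loop formula $(p \lor q) \rightarrow \bot$; the program's completion alone also admits $\{p, q\}$, but adding the loop formula leaves only the stable model $\emptyset$.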
arXiv Detail & Related papers (2023-07-15T06:20:43Z)
- Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to combine current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z)
- Training and Inference on Any-Order Autoregressive Models the Right Way [97.39464776373902]
A family of Any-Order Autoregressive Models (AO-ARMs) has shown breakthrough performance in arbitrary conditional tasks.
We identify significant improvements to be made to previous formulations of AO-ARMs.
Our method leads to improved performance with no compromises on tractability.
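The paper's improved formulation is not reproduced here; the toy sketch below only shows how a basic AO-ARM training instance is assembled, by sampling a random generation order, revealing a prefix of it, and targeting the next position under that order:

    import random

    def ao_arm_instance(sequence, mask_token="_"):
        order = list(range(len(sequence)))
        random.shuffle(order)                   # random generation order sigma
        t = random.randrange(len(sequence))     # how much of sigma is revealed
        visible = set(order[:t])
        context = [x if i in visible else mask_token for i, x in enumerate(sequence)]
        target_pos = order[t]                   # next position under sigma
        return context, target_pos, sequence[target_pos]

    print(ao_arm_instance(list("cat")))  # e.g. (['c', '_', '_'], 2, 't')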
arXiv Detail & Related papers (2022-05-26T18:00:02Z)
- First-Order Context-Specific Likelihood Weighting in Hybrid Probabilistic Logic Programs [24.503581751619787]
Three types of independencies are important to represent and exploit for scalable inference in hybrid models.
This paper introduces a hybrid probabilistic logic programming language, DC#, which integrates the syntax and semantics of distributional clauses with the principles of Bayesian logic programs.
We also introduce the scalable inference algorithm FO-CS-LW for DC#.
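FO-CS-LW itself is not reproduced here; the sketch below shows plain likelihood weighting on a hypothetical two-node network (rain -> wet), the baseline that FO-CS-LW extends with first-order and context-specific structure:

    import random

    def likelihood_weighting(n_samples=100_000, evidence_wet=True):
        weighted, total = 0.0, 0.0
        for _ in range(n_samples):
            rain = random.random() < 0.2               # prior P(rain) = 0.2
            p_wet = 0.9 if rain else 0.1               # CPT P(wet | rain)
            w = p_wet if evidence_wet else 1 - p_wet   # weight by evidence likelihood
            weighted += w * rain
            total += w
        return weighted / total                        # estimate of P(rain | wet)

    print(likelihood_weighting())  # approx 0.18 / (0.18 + 0.08) = 0.692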
arXiv Detail & Related papers (2022-01-26T20:06:02Z)
- ORCHARD: A Benchmark For Measuring Systematic Generalization of Multi-Hierarchical Reasoning [8.004425059996963]
We show that Transformer and LSTM models surprisingly fail at systematic generalization.
We also show that with increased references between hierarchies, Transformer performs no better than random.
arXiv Detail & Related papers (2021-11-28T03:11:37Z)
- A Logical Characterization of the Preferred Models of Logic Programs with Ordered Disjunction [1.7403133838762446]
We provide a novel, model-theoretic semantics for Logic Programs with Ordered Disjunction (LPODs).
We demonstrate that the proposed approach overcomes the shortcomings of the traditional semantics of LPODs.
The new approach can be used to define the semantics of a natural class of logic programs that can have both ordered and classical disjunctions in the heads of clauses (for example, an ordered disjunction a × b in a head states that a is preferred, but b is acceptable when a cannot be obtained).
arXiv Detail & Related papers (2021-08-07T05:36:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.