Aggregate Semantics for Propositional Answer Set Programs
- URL: http://arxiv.org/abs/2109.08662v1
- Date: Fri, 17 Sep 2021 17:38:55 GMT
- Title: Aggregate Semantics for Propositional Answer Set Programs
- Authors: Mario Alviano, Wolfgang Faber, Martin Gebser
- Abstract summary: We present and compare the main aggregate semantics that have been proposed for propositional ASP programs.
We highlight crucial properties such as computational complexity and expressive power, and outline the capabilities and limitations of different approaches.
- Score: 14.135212040150389
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Answer Set Programming (ASP) emerged in the late 1990s as a paradigm for
Knowledge Representation and Reasoning. The attractiveness of ASP builds on an
expressive high-level modeling language along with the availability of powerful
off-the-shelf solving systems. While the utility of incorporating aggregate
expressions in the modeling language has been realized almost simultaneously
with the inception of the first ASP solving systems, a general semantics of
aggregates and its efficient implementation have been long-standing challenges.
Aggregates have been proposed and widely used in database systems, and also in
the deductive database language Datalog, which is one of the main precursors of
ASP. The use of aggregates was, however, still restricted in Datalog (by either
disallowing recursion or only allowing monotone aggregates), while several ways
to integrate unrestricted aggregates evolved in the context of ASP. In this
survey, we pick up at this point of development by presenting and comparing the
main aggregate semantics that have been proposed for propositional ASP
programs. We highlight crucial properties such as computational complexity and
expressive power, and outline the capabilities and limitations of different
approaches by illustrative examples.
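For readers unfamiliar with aggregates in ASP, the following minimal sketch (not taken from the survey; the atoms a, b, c and the use of the clingo Python package are assumptions) runs a small propositional program with a #sum aggregate and enumerates its answer sets. The main semantics compared in the survey coincide on such non-recursive aggregate occurrences; the differences discussed in the paper arise when aggregates participate in recursion.
```python
# Minimal sketch, assuming the clingo Python package is installed
# (pip install clingo); the program and atom names are illustrative only.
import clingo

PROGRAM = """
{ a; b; c }.                                % freely choose a subset of a, b, c
:- #sum{ 1,a : a; 1,b : b; 1,c : c } < 2.   % require at least two of them
"""

def on_model(model: clingo.Model) -> None:
    # Print the atoms of each answer set reported by the solver.
    print("Answer set:", [str(atom) for atom in model.symbols(shown=True)])

ctl = clingo.Control(["0"])      # "0" asks the solver to enumerate all answer sets
ctl.add("base", [], PROGRAM)     # load the propositional program
ctl.ground([("base", [])])       # instantiation is trivial here: the program is ground
ctl.solve(on_model=on_model)     # expected: the four subsets containing >= 2 atoms
```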
Related papers
- ACE: A Generative Cross-Modal Retrieval Framework with Coarse-To-Fine Semantic Modeling [53.97609687516371]
We propose a pioneering generAtive Cross-modal rEtrieval framework (ACE) for end-to-end cross-modal retrieval.
ACE achieves state-of-the-art performance in cross-modal retrieval and outperforms the strong baselines on Recall@1 by 15.27% on average.
arXiv Detail & Related papers (2024-06-25T12:47:04Z) - LangSuitE: Planning, Controlling and Interacting with Large Language Models in Embodied Text Environments [70.91258869156353]
We introduce LangSuitE, a versatile and simulation-free testbed featuring 6 representative embodied tasks in textual embodied worlds.
Compared with previous LLM-based testbeds, LangSuitE offers adaptability to diverse environments without multiple simulation engines.
We devise a novel chain-of-thought (CoT) schema, EmMem, which summarizes embodied states w.r.t. history information.
arXiv Detail & Related papers (2024-06-24T03:36:29Z) - UQE: A Query Engine for Unstructured Databases [71.49289088592842]
We investigate the potential of Large Language Models to enable unstructured data analytics.
We propose a new Universal Query Engine (UQE) that directly interrogates and draws insights from unstructured data collections.
arXiv Detail & Related papers (2024-06-23T06:58:55Z) - Towards Compositionally Generalizable Semantic Parsing in Large Language Models: A Survey [0.0]
We present a literature survey geared at recent advances in analysis, methods, and evaluation schemes for compositional generalization.
This type of generalization is particularly relevant to the semantic parsing community for applications such as task-oriented dialogue.
arXiv Detail & Related papers (2024-04-15T10:44:58Z) - A Large-Scale Evaluation of Speech Foundation Models [110.95827399522204]
We establish the Speech processing Universal PERformance Benchmark (SUPERB) to study the effectiveness of the foundation model paradigm for speech.
We propose a unified multi-tasking framework to address speech processing tasks in SUPERB using a frozen foundation model followed by task-specialized, lightweight prediction heads.
arXiv Detail & Related papers (2024-04-15T00:03:16Z) - Extending Answer Set Programming with Rational Numbers [0.6526824510982802]
This paper proposes an extension of ASP in which non-integers are approximated to rational numbers, fully granting reproducibility and declarativity.
We provide a well-defined semantics for the ASP-Core-2 standard extended with rational numbers and an implementation thereof.
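Only as a loose illustration of the idea behind that extension (exact rational arithmetic instead of machine floats, so results stay reproducible and declarative), here is a small Python sketch using the standard fractions module; it does not reflect the paper's actual ASP-Core-2 semantics or implementation.
```python
# Loose illustration (not the cited implementation): approximating non-integer
# constants by rationals keeps arithmetic exact and machine-independent.
from fractions import Fraction

def to_rational(x: float, max_denominator: int = 10**6) -> Fraction:
    """Approximate a float by a nearby rational with a bounded denominator."""
    return Fraction(x).limit_denominator(max_denominator)

a, b = to_rational(0.1), to_rational(0.2)
print(a + b)                       # 3/10, exactly
print(0.1 + 0.2 == 0.3)            # False: floating point drifts
print(a + b == Fraction(3, 10))    # True: rational arithmetic is exact
```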
arXiv Detail & Related papers (2023-12-07T12:11:25Z) - Generalizing Level Ranking Constraints for Monotone and Convex Aggregates [0.0]
In answer set programming (ASP), answer sets capture solutions to search problems of interest.
One viable implementation strategy is provided by translation-based ASP.
We take level ranking constraints into reconsideration, aiming at their generalizations to cover aggregate-based extensions of ASP.
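To make the notion of level rankings concrete (a rough sketch under simplifications of my own, not the generalization proposed in the paper), the Python snippet below checks stability of a candidate model of a plain normal program by deriving atoms from the reduct in rounds; the round numbers are exactly the level ranks that translation-based ASP encodes as constraints for back-end solvers. Aggregates, the actual subject of the paper, are not handled here.
```python
# Rough sketch (not the paper's encoding): level ranks for a normal program.
# An atom's level is the round in which it is derived from the reduct w.r.t.
# a candidate model M; M is stable iff exactly the atoms of M get a level.
from typing import Dict, List, Optional, Set, Tuple

Rule = Tuple[str, List[str], List[str]]  # (head, positive body, negative body)

def level_ranking(rules: List[Rule], m: Set[str]) -> Optional[Dict[str, int]]:
    # Reduct w.r.t. m: drop rules whose negative body intersects m.
    reduct = [(head, pos) for (head, pos, neg) in rules if not set(neg) & m]
    levels: Dict[str, int] = {}
    rnd = 0
    while True:
        rnd += 1
        new = {head for (head, pos) in reduct
               if head not in levels and all(p in levels for p in pos)}
        if not new:
            break
        for head in new:
            levels[head] = rnd   # every positive body atom has a smaller level
    return levels if set(levels) == m else None

# p :- not q.    q :- not p.
prog: List[Rule] = [("p", [], ["q"]), ("q", [], ["p"])]
print(level_ranking(prog, {"p"}))        # {'p': 1} -> stable model
print(level_ranking(prog, {"p", "q"}))   # None    -> not a stable model
```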
arXiv Detail & Related papers (2023-08-30T09:04:39Z) - Optimization Techniques for Unsupervised Complex Table Reasoning via Self-Training Framework [5.351873055148804]
The self-training framework generates diverse synthetic data with complex logic.
We optimize the procedure using a "Table-Text Manipulator" to handle joint table-text reasoning scenarios.
UCTRST achieves above 90% of the supervised model performance on different tasks and domains.
arXiv Detail & Related papers (2022-12-20T09:15:03Z) - On the Foundations of Grounding in Answer Set Programming [4.389457090443418]
We provide a comprehensive elaboration of the theoretical foundations of variable instantiation, or grounding, in Answer Set Programming (ASP).
We introduce a formal characterization of grounding algorithms in terms of (fixed point) operators.
A major role is played by dedicated well-founded operators whose associated models provide semantic guidance for delineating the result of grounding.
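As a toy illustration of the fixed-point view (the data structures and the naive substitution enumeration below are my own simplifications, not the well-founded operators defined in the paper), grounding a positive program can be phrased as iterating an immediate-consequence-style operator: derive ground head atoms from already derived atoms until nothing new appears, which delimits the relevant ground instances.
```python
# Toy sketch (not the cited operators): naive grounding of a positive program
# as a fixed-point computation over the ground atoms derived so far.
from itertools import product
from typing import Dict, List, Set, Tuple

Atom = Tuple[str, Tuple[str, ...]]   # (predicate, argument terms)
Rule = Tuple[Atom, List[Atom]]       # (head, body); uppercase terms are variables

def is_var(term: str) -> bool:
    return term[:1].isupper()

def subst(args: Tuple[str, ...], sub: Dict[str, str]) -> Tuple[str, ...]:
    return tuple(sub.get(t, t) for t in args)

def ground(rules: List[Rule], facts: Set[Atom]) -> Set[Atom]:
    atoms = set(facts)
    while True:
        consts = sorted({t for (_, args) in atoms for t in args})
        new: Set[Atom] = set()
        for head, body in rules:
            variables = sorted({t for (_, args) in [head, *body]
                                for t in args if is_var(t)})
            # Naive: try every substitution over constants seen so far.
            for combo in product(consts, repeat=len(variables)):
                sub = dict(zip(variables, combo))
                if all((p, subst(a, sub)) in atoms for (p, a) in body):
                    new.add((head[0], subst(head[1], sub)))
        if new <= atoms:        # fixed point reached
            return atoms
        atoms |= new

# reach(X,Y) :- edge(X,Y).    reach(X,Z) :- reach(X,Y), edge(Y,Z).
rules: List[Rule] = [
    (("reach", ("X", "Y")), [("edge", ("X", "Y"))]),
    (("reach", ("X", "Z")), [("reach", ("X", "Y")), ("edge", ("Y", "Z"))]),
]
facts: Set[Atom] = {("edge", ("a", "b")), ("edge", ("b", "c"))}
print(sorted(ground(rules, facts)))   # includes ('reach', ('a', 'c'))
```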
arXiv Detail & Related papers (2021-08-10T16:23:49Z) - Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z) - SDA: Improving Text Generation with Self Data Augmentation [88.24594090105899]
We propose to improve the standard maximum likelihood estimation (MLE) paradigm by incorporating a self-imitation-learning phase for automatic data augmentation.
Unlike most existing sentence-level augmentation strategies, our method is more general and could be easily adapted to any MLE-based training procedure.
arXiv Detail & Related papers (2021-01-02T01:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.