An efficient solver for ASP(Q)
- URL: http://arxiv.org/abs/2305.10021v1
- Date: Wed, 17 May 2023 07:59:53 GMT
- Title: An efficient solver for ASP(Q)
- Authors: Wolfgang Faber, Giuseppe Mazzotta, and Francesco Ricca
- Abstract summary: ASP(Q) extends Answer Set Programming (ASP) to allow for declarative and modular modeling of problems from the polynomial hierarchy.
We present a new implementation that builds on the ideas of qasp and features both a more efficient encoding procedure and new optimized encodings of ASP(Q) programs in QBF.
- Score: 4.198045959496482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Answer Set Programming with Quantifiers ASP(Q) extends Answer Set Programming
(ASP) to allow for declarative and modular modeling of problems from the entire
polynomial hierarchy. The first implementation of ASP(Q), called qasp, was
based on a translation to Quantified Boolean Formulae (QBF) with the aim of
exploiting the well-developed and mature QBF-solving technology. However, the
implementation of the QBF encoding employed in qasp is very general and might
produce formulas that are hard to evaluate for existing QBF solvers because of
the large number of symbols and sub-clauses. In this paper, we present a new
implementation that builds on the ideas of qasp and features both a more
efficient encoding procedure and new optimized encodings of ASP(Q) programs in
QBF. The new encodings produce smaller formulas (in terms of the number of
quantifiers, variables, and clauses) and result in a more efficient evaluation
process. An algorithm selection strategy automatically combines several
QBF-solving back-ends to further increase performance. An experimental
analysis, conducted on known benchmarks, shows that the new system outperforms
qasp.
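As background for the abstract above, the general shape of an ASP(Q) program, following the standard presentation in the ASP(Q) literature (this is not text taken from the paper itself), is:

    \Pi = \Box_1 P_1 \ \Box_2 P_2 \ \cdots \ \Box_n P_n : C,
    \qquad \Box_i \in \{\exists^{st}, \forall^{st}\}

Each P_i is an ordinary ASP program, C is a concluding subprogram (a stratified program, possibly with constraints), and the quantifiers range over the answer sets of the corresponding subprograms. Alternating existential and universal answer-set quantifiers is what lets ASP(Q) capture every level of the polynomial hierarchy, and it is this quantifier structure that the translation to QBF mirrors.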
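The abstract also mentions an algorithm selection strategy that automatically combines several QBF-solving back-ends. The snippet below is only a minimal Python sketch of that general idea (a feature-based portfolio dispatching a QDIMACS formula to one of several external solvers); the solver set, the features, the thresholds, and the exit-code handling are illustrative assumptions and do not reflect the actual strategy or interfaces of the system described in the paper.

    # Illustrative feature-based selection of a QBF back-end (assumptions only).
    import subprocess

    # DepQBF, RAReQS, and CAQE are real QBF solvers, but their use and
    # invocation here are purely illustrative assumptions.
    BACKENDS = {
        "depqbf": ["depqbf"],
        "rareqs": ["rareqs"],
        "caqe": ["caqe"],
    }

    def qdimacs_features(path):
        """Count quantifier blocks, variables, and clauses in a QDIMACS file."""
        blocks, n_vars, n_clauses = 0, 0, 0
        with open(path) as f:
            for line in f:
                if line.startswith("p cnf"):
                    _, _, n_vars, n_clauses = line.split()
                elif line.startswith(("a ", "e ")):
                    blocks += 1
        return {"blocks": blocks, "vars": int(n_vars), "clauses": int(n_clauses)}

    def select_backend(feats):
        # Toy rule: many quantifier alternations -> expansion-based solver,
        # otherwise a search-based one; not the real selection strategy.
        if feats["blocks"] >= 3:
            return "rareqs"
        return "depqbf" if feats["clauses"] < 100_000 else "caqe"

    def solve(path, timeout=600):
        name = select_backend(qdimacs_features(path))
        proc = subprocess.run(BACKENDS[name] + [path], capture_output=True,
                              text=True, timeout=timeout)
        # Many QBF solvers follow the SAT-solver convention of exit code 10
        # (true) and 20 (false); treat anything else as unknown.
        return name, {10: "TRUE", 20: "FALSE"}.get(proc.returncode, "UNKNOWN")

A real portfolio would typically learn or tune the selection rule on benchmark features rather than hard-code thresholds, and it would run on the optimized QBF encodings produced by the new translation; this sketch only conveys the portfolio idea.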
Related papers
- Enhancing Question Answering Precision with Optimized Vector Retrieval and Instructions [1.2425910171551517]
Question-answering (QA) is an important application of Information Retrieval (IR) and language models.
We propose an innovative approach to improve QA task performance by integrating optimized vector retrieval and instruction methodologies.
arXiv Detail & Related papers (2024-11-01T21:14:04Z)
- Effective Instruction Parsing Plugin for Complex Logical Query Answering on Knowledge Graphs [51.33342412699939]
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs.
Recent studies integrate various external information (such as entity types and relation context) to better capture the logical semantics of FOL queries.
We propose an effective Query Instruction Parsing Plugin (QIPP) that captures latent query patterns from code-like query instructions.
arXiv Detail & Related papers (2024-10-27T03:18:52Z)
- Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z)
- Quantifying over Optimum Answer Sets [6.390468088226495]
ASP(Q) lacks a way of encoding problems involving optimization in an elegant and compact way.
We propose an extension of ASP(Q) in which component programs may contain weak constraints.
We showcase the modeling capabilities of the new formalism through various application scenarios.
arXiv Detail & Related papers (2024-08-14T17:53:13Z)
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [83.90988015005934]
Uncertainty quantification (UQ) is a critical component of machine learning (ML) applications.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines.
We conduct a large-scale empirical investigation of UQ and normalization techniques across nine tasks, and identify the most promising approaches.
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- Recursive Visual Programming [53.76415744371285]
We propose Recursive Visual Programming (RVP), which simplifies generated routines, provides more efficient problem solving, and can manage more complex data structures.
We show RVP's efficacy through extensive experiments on benchmarks including VSR, COVR, GQA, and NextQA.
arXiv Detail & Related papers (2023-12-04T17:27:24Z)
- Sequential Query Encoding For Complex Query Answering on Knowledge Graphs [31.40820604209387]
We propose sequential query encoding (SQE) as an alternative way to encode queries for knowledge graph (KG) reasoning.
SQE first uses a search-based algorithm to linearize the computational graph to a sequence of tokens and then uses a sequence encoder to compute its vector representation.
Despite its simplicity, SQE demonstrates state-of-the-art neural query encoding performance on FB15k, FB15k-237, and NELL.
arXiv Detail & Related papers (2023-02-25T16:33:53Z)
- Efficient Multiple Constraint Acquisition [1.3706331473063877]
Constraint acquisition systems such as QuAcq and MultiAcq can assist non-expert users to model their problems as constraint networks.
We present a technique that boosts the performance of constraint acquisition by reducing the number of queries significantly.
We then turn our attention to query generation, which is a significant but rather overlooked part of the acquisition process.
arXiv Detail & Related papers (2021-09-13T12:42:16Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs [62.71505254770827]
We propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts.
Our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training.
arXiv Detail & Related papers (2020-05-28T08:26:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.