Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming
- URL: http://arxiv.org/abs/2110.12053v1
- Date: Fri, 22 Oct 2021 20:38:48 GMT
- Title: Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming
- Authors: Joaquín Arias, Manuel Carro, Gopal Gupta
- Abstract summary: We present a variation of the top-down evaluation strategy, termed Dynamic Consistency checking.
This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program.
We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Goal-directed evaluation of Answer Set Programs is gaining traction thanks to
its amenability to create AI systems that can, due to the evaluation mechanism
used, generate explanations and justifications. s(CASP) is one of these systems
and has been already used to write reasoning systems in several fields. It
provides enhanced expressiveness w.r.t. other ASP systems due to its ability to
use constraints, data structures, and unbound variables natively. However, the
performance of existing s(CASP) implementations is not on par with other ASP
systems: model consistency is checked once models have been generated, in
keeping with the generate-and-test paradigm. In this work, we present a
variation of the top-down evaluation strategy, termed Dynamic Consistency
Checking, which interleaves model generation and consistency checking. This
makes it possible to determine when a literal is not compatible with the
denials associated with the global constraints in the program, prune the current
execution branch, and choose a different alternative. This strategy is
especially (but not exclusively) relevant in problems with a high combinatorial
component. We have experimentally observed speedups of up to 90x w.r.t. the
standard versions of s(CASP).
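The difference between the two strategies can be illustrated with a toy search, sketched below in Python rather than s(CASP). This is a minimal sketch under stated assumptions: `DENIALS`, `generate_and_test`, and `dynamic_check` are illustrative names, and a pairwise check over Boolean literals only stands in for the denials of a real ASP program.

```python
from itertools import product

# Toy "denials": pairs of literals that may not both be true, standing in
# for the global constraints of an ASP program (illustrative only).
DENIALS = [(0, 1), (2, 3)]

def violates(assignment):
    """True if some denial is already violated by the (partial) assignment."""
    return any(len(assignment) > b and assignment[a] and assignment[b]
               for a, b in DENIALS)

def generate_and_test(n):
    """Build every complete candidate model first, then check consistency."""
    built, models = 0, []
    for bits in product([False, True], repeat=n):
        built += 1
        if not violates(bits):
            models.append(bits)
    return models, built

def dynamic_check(n):
    """Interleave generation and checking: prune a branch as soon as the
    partial assignment is incompatible with a denial (the DCC idea)."""
    built, models = 0, []
    def extend(partial):
        nonlocal built
        if len(partial) == n:
            built += 1
            if not violates(partial):
                models.append(tuple(partial))
            return
        if violates(partial):      # early pruning of the current branch
            return
        for v in (False, True):
            extend(partial + [v])
    extend([])
    return models, built
```

Both strategies enumerate the same consistent models, but the interleaved version constructs fewer complete candidates (12 instead of 16 for four literals here), and on problems with a large combinatorial component the pruned subtrees grow exponentially, which is where speedups of the reported magnitude come from.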
Related papers
- CoGS: Model Agnostic Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
CoGS is a model-agnostic framework capable of generating counterfactual explanations for classification models.
CoGS offers interpretable and actionable explanations of the changes required to achieve the desired outcome.
arXiv Detail & Related papers (2024-10-30T00:43:01Z)
- COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z)
- CoGS: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the CoGS (Counterfactual Generation with s(CASP)) framework to generate counterfactuals from rule-based machine learning models.
CoGS computes realistic and causally consistent changes to attribute values taking causal dependencies between them into account.
It finds a path from an undesired outcome to a desired one using counterfactuals.
arXiv Detail & Related papers (2024-07-11T04:50:51Z)
- From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification [70.08146540745877]
We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments.
We propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
arXiv Detail & Related papers (2024-03-10T22:14:54Z)
- Conformance Checking for Pushdown Reactive Systems based on Visibly Pushdown Languages [0.0]
Testing pushdown reactive systems is deemed important to guarantee a precise and robust software development process.
We show that test suites with a complete fault coverage can be generated using this conformance relation for pushdown reactive systems.
arXiv Detail & Related papers (2023-08-14T14:37:43Z)
- Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC)
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z)
- Conditional independence by typing [30.194205448457385]
A central goal of probabilistic programming languages (PPLs) is to separate modelling from inference.
Conditional independence (CI) relationships among parameters are a crucial aspect of probabilistic models.
We show that for a well-typed program in our system, the distribution it implements is guaranteed to have certain CI-relationships.
arXiv Detail & Related papers (2020-10-22T17:27:22Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving 'cases' that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge-bases (KBs)
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- Robust Question Answering Through Sub-part Alignment [53.94003466761305]
We model question answering as an alignment problem.
We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.
arXiv Detail & Related papers (2020-04-30T09:10:57Z)
- Conditional Self-Attention for Query-based Summarization [49.616774159367516]
We propose conditional self-attention (CSA), a neural network module designed for conditional dependency modeling.
Experiments on Debatepedia and HotpotQA benchmark datasets show CSA consistently outperforms vanilla Transformer.
arXiv Detail & Related papers (2020-02-18T02:22:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.