Determining Action Reversibility in STRIPS Using Answer Set and Epistemic
Logic Programming
- URL: http://arxiv.org/abs/2108.05428v1
- Date: Wed, 11 Aug 2021 20:00:34 GMT
- Title: Determining Action Reversibility in STRIPS Using Answer Set and Epistemic
Logic Programming
- Authors: Wolfgang Faber, Michael Morak, and Lukáš Chrpa
- Abstract summary: We call an action reversible when its effects can be reverted by applying other actions, returning to the original state.
We propose several solutions to the computational problem of deciding the reversibility of an action.
- Score: 8.585348089298133
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In the context of planning and reasoning about actions and change, we call an
action reversible when its effects can be reverted by applying other actions,
returning to the original state. Renewed interest in this area has led to
several results in the context of the PDDL language, widely used for describing
planning tasks.
In this paper, we propose several solutions to the computational problem of
deciding the reversibility of an action. In particular, we leverage an existing
translation from PDDL to Answer Set Programming (ASP), and then use several
different encodings to tackle the problem of action reversibility for the
STRIPS fragment of PDDL. For these, we use ASP, as well as Epistemic Logic
Programming (ELP), an extension of ASP with epistemic operators, and compare
and contrast their strengths and weaknesses.
Under consideration for acceptance in TPLP.
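The paper itself tackles reversibility via ASP and ELP encodings; as an informal illustration of the underlying notion only (not the paper's actual encodings), the following Python sketch checks whether a STRIPS action applied in a given state can be reverted by some bounded sequence of other actions. All class and function names here are hypothetical.

```python
from collections import deque

# Minimal STRIPS model: an action has a precondition set,
# an add list, and a delete list, all over ground atoms.
class Action:
    def __init__(self, name, pre, add, dele):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.dele = frozenset(dele)

def applicable(state, a):
    # An action is applicable when its preconditions hold in the state.
    return a.pre <= state

def apply_action(state, a):
    # STRIPS successor: remove deletes, then add the add effects.
    return (state - a.dele) | a.add

def reversible_in(state, a, actions, max_depth=10):
    """Return True if, after applying `a` in `state`, some action
    sequence (up to max_depth steps) leads back to `state`.
    This is a brute-force bounded BFS over reachable states."""
    if not applicable(state, a):
        return False
    start = apply_action(state, a)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        s, depth = frontier.popleft()
        if s == state:
            return True
        if depth >= max_depth:
            continue
        for b in actions:
            if applicable(s, b):
                t = apply_action(s, b)
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, depth + 1))
    return False

# Toy example: "open" is reverted by "close", so it is reversible here.
open_door = Action("open", pre={"closed"}, add={"open"}, dele={"closed"})
close_door = Action("close", pre={"open"}, add={"closed"}, dele={"open"})
state = frozenset({"closed"})
print(reversible_in(state, open_door, [open_door, close_door]))  # True
```

Note that this checks reversibility in one particular state; the paper's ASP/ELP encodings address the harder problem of deciding reversibility across sets of states, which a per-state search like this does not capture.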
Related papers
- PGPO: Enhancing Agent Reasoning via Pseudocode-style Planning Guided Preference Optimization [58.465778756331574]
We propose a pseudocode-style Planning Guided Preference Optimization method called PGPO for effective agent learning.
With two planning-oriented rewards, PGPO further enhances LLM agents' ability to generate high-quality P-code Plans.
Experiments show that PGPO achieves superior performance on representative agent benchmarks and outperforms the current leading baselines.
arXiv Detail & Related papers (2025-06-02T09:35:07Z) - Automated Refactoring of Non-Idiomatic Python Code: A Differentiated Replication with LLMs [54.309127753635366]
We present the results of a replication study in which we investigate GPT-4 effectiveness in recommending and suggesting idiomatic actions.
Our findings underscore the potential of LLMs to achieve tasks where, in the past, implementing recommenders based on complex code analyses was required.
arXiv Detail & Related papers (2025-01-28T15:41:54Z) - Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
arXiv Detail & Related papers (2024-11-21T04:23:17Z) - Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making [85.24399869971236]
We aim to evaluate Large Language Models (LLMs) for embodied decision making.
Existing evaluations tend to rely solely on a final success rate.
We propose a generalized interface (Embodied Agent Interface) that supports the formalization of various types of tasks.
arXiv Detail & Related papers (2024-10-09T17:59:00Z) - Planning with OWL-DL Ontologies (Extended Version) [6.767885381740952]
We present a black-box approach that supports the full expressive power of the description logic.
Our main algorithm relies on rewritings of the OWL-mediated planning specifications into PDDL.
We evaluate our implementation on benchmark sets from several domains.
arXiv Detail & Related papers (2024-08-14T13:27:02Z) - Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z) - Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning [79.38140606606126]
We propose an algorithmic framework that fine-tunes vision-language models (VLMs) with reinforcement learning (RL).
Our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning.
We demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks.
arXiv Detail & Related papers (2024-05-16T17:50:19Z) - Learning Logic Specifications for Policy Guidance in POMDPs: an
Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, within lower computational time.
arXiv Detail & Related papers (2024-02-29T15:36:01Z) - DisasterResponseGPT: Large Language Models for Accelerated Plan of
Action Development in Disaster Response Scenarios [3.42658286826597]
This study presents DisasterResponseGPT, an algorithm that leverages Large Language Models (LLMs) to generate valid plans of action quickly.
The proposed method generates multiple plans within seconds, which can be further refined following the user's feedback.
Preliminary results indicate that the plans of action developed by DisasterResponseGPT are comparable to human-generated ones while offering greater ease of modification in real-time.
arXiv Detail & Related papers (2023-06-29T19:24:19Z) - Harnessing Incremental Answer Set Solving for Reasoning in
Assumption-Based Argumentation [1.5469452301122177]
Assumption-based argumentation (ABA) is a central structured argumentation formalism.
Recent advances in answer set programming (ASP) enable efficiently solving NP-hard reasoning tasks of ABA in practice.
arXiv Detail & Related papers (2021-08-09T17:34:05Z) - SML: a new Semantic Embedding Alignment Transformer for efficient
cross-lingual Natural Language Inference [71.57324258813674]
The ability of Transformers to perform a variety of tasks with precision, such as question answering, Natural Language Inference (NLI), or summarisation, has enabled them to be ranked among the best paradigms for addressing this kind of task at present.
NLI is one of the best scenarios to test these architectures, due to the knowledge required to understand complex sentences and establish a relation between a hypothesis and a premise.
In this paper, we propose a new architecture, siamese multilingual transformer, to efficiently align multilingual embeddings for Natural Language Inference.
arXiv Detail & Related papers (2021-03-17T13:23:53Z) - selp: A Single-Shot Epistemic Logic Program Solver [19.562205966997947]
Epistemic Logic Programs (ELPs) are an extension of Answer Set Programming (ASP).
We show that there also exists a direct translation from ELPs into non-ground ASP with bounded arity.
We then implement this encoding method, using recently proposed techniques to handle large, non-ground ASP rules, in the prototype ELP solving system "selp".
arXiv Detail & Related papers (2020-01-04T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.