Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive
Synthesis using Large Language Models and Satisfiability Solving
- URL: http://arxiv.org/abs/2309.16436v1
- Date: Thu, 28 Sep 2023 13:40:50 GMT
- Title: Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive
Synthesis using Large Language Models and Satisfiability Solving
- Authors: Sumit Kumar Jha, Susmit Jha, Patrick Lincoln, Nathaniel D. Bastian,
Alvaro Velasquez, Rickard Ewetz, Sandeep Neema
- Abstract summary: Generative large language models (LLMs) with instruct training can generate human-like responses to prompts.
Despite their improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results.
This limitation makes it difficult to use these models to synthesize formal artifacts that are used in safety-critical applications.
- Score: 23.426866969743525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative large language models (LLMs) with instruct training such as GPT-4
can follow human-provided instruction prompts and generate human-like responses
to these prompts. Apart from natural language responses, they have also been
found to be effective at generating formal artifacts such as code, plans, and
logical specifications from natural language prompts. Despite their remarkably
improved accuracy, these models are still known to produce factually incorrect
or contextually inappropriate results even when syntactically coherent, a
phenomenon often referred to as hallucination. This limitation makes it
difficult to use these models to synthesize formal artifacts that are used in
safety-critical applications. Unlike tasks such as text summarization and
question-answering, bugs in code, plan, and other formal artifacts produced by
LLMs can be catastrophic. We posit that we can use satisfiability modulo
theories (SMT) solvers as deductive reasoning engines to analyze the generated
solutions from the LLMs, produce counterexamples when the solutions are
incorrect, and provide that feedback to the LLMs, exploiting the dialog
capability of instruct-trained LLMs. This interaction between inductive LLMs
and deductive SMT solvers can iteratively steer the LLM to generate the correct
response. In our experiments, we use planning in the blocks-world domain as
our synthesis task for evaluating our approach. We use GPT-4, GPT-3.5 Turbo,
Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our
method allows the user to communicate the planning problem in natural language;
even the formulation of queries to SMT solvers is automatically generated from
natural language. Thus, the proposed technique can enable non-expert users to
describe their problems in natural language, and the combination of LLMs and
SMT solvers can produce provably correct solutions.
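As a concrete illustration of the loop the abstract describes, here is a minimal sketch pairing a stand-in LLM call with Z3 on a three-block instance. It is a reconstruction under stated assumptions, not the authors' implementation: `llm_propose_plan` is a hypothetical stub for a GPT-4 chat turn, and the instance, move semantics, and feedback wording are our own choices.

```python
# Minimal sketch of the inductive-LLM / deductive-SMT loop, assuming a
# hypothetical `llm_propose_plan` stub in place of a real GPT-4 call.
# Requires the z3-solver package (pip install z3-solver).
from z3 import And, Bool, Not, Solver, sat

BLOCKS = ["a", "b", "c"]
GOAL = [("a", "b"), ("b", "c")]        # goal: a on b, b on c

def check_plan(plan):
    """Simulate `plan`, a list of (block, destination) moves where the
    destination is another block or "table", then ask Z3 whether the
    final state can violate the goal; unsat means the plan is correct."""
    on = {b: "table" for b in BLOCKS}  # initial state: all blocks on table

    def clear(x):
        return all(on[b] != x for b in BLOCKS)

    for i, (blk, dst) in enumerate(plan):
        if not clear(blk):
            return False, f"step {i}: block {blk} is not clear"
        if dst != "table" and not clear(dst):
            return False, f"step {i}: destination {dst} is not clear"
        on[blk] = dst

    # Encode the final state as Boolean facts and assert the negated goal;
    # a model of these constraints is exactly a counterexample.
    on_var = {(x, y): Bool(f"on_{x}_{y}")
              for x in BLOCKS for y in BLOCKS + ["table"]}
    s = Solver()
    for x in BLOCKS:
        for y in BLOCKS + ["table"]:
            s.add(on_var[(x, y)] if on[x] == y else Not(on_var[(x, y)]))
    s.add(Not(And([on_var[g] for g in GOAL])))
    if s.check() == sat:
        missing = [f"{x} should be on {y}" for (x, y) in GOAL if on[x] != y]
        return False, "goal not reached: " + "; ".join(missing)
    return True, None

def cegis_plan(llm_propose_plan, max_rounds=5):
    """Counterexample-guided loop: the LLM proposes, the SMT solver checks."""
    prompt = "All blocks start on the table; stack a on b and b on c."
    for _ in range(max_rounds):
        plan = llm_propose_plan(prompt)        # inductive step (LLM)
        ok, counterexample = check_plan(plan)  # deductive step (Z3)
        if ok:
            return plan
        prompt += f" Your last plan failed: {counterexample}. Please revise."
    return None
```

With a trivial stub, `cegis_plan(lambda prompt: [("b", "c"), ("a", "b")])` returns the plan on the first round because it verifies; a faulty stub would instead see Z3's counterexample appended to its next prompt.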
Related papers
- Language Agents Meet Causality -- Bridging LLMs and Causal World Models [50.79984529172807]
We propose a framework that integrates causal representation learning with large language models.
This framework learns a causal world model, with causal variables linked to natural language expressions.
We evaluate the framework on causal inference and planning tasks across temporal scales and environmental complexities.
arXiv Detail & Related papers (2024-10-25T18:36:37Z)
- Logic-Enhanced Language Model Agents for Trustworthy Social Simulations [3.5083201638203154]
This study focuses on decision-making in game-theoretic scenarios as a model of human interaction.
We introduce the Logic-Enhanced Language Model Agents (LELMA) framework, a novel approach to enhance the trustworthiness of social simulations.
arXiv Detail & Related papers (2024-08-28T18:25:35Z)
- Case2Code: Learning Inductive Reasoning with Synthetic Data [105.89741089673575]
We propose a Case2Code task by exploiting the expressiveness and correctness of programs.
We first evaluate representative LLMs on the synthesized Case2Code task and demonstrate that the Case-to-code induction is challenging for LLMs.
Experimental results show that such induction training not only benefits in-distribution Case2Code performance but also enhances various coding abilities of trained LLMs.
arXiv Detail & Related papers (2024-07-17T11:35:00Z)
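A toy instance of the Case2Code setup as we read the summary above; the task format here is our guess, not the paper's dataset schema. A program is held out, only its input-output cases are shown, and any induced candidate can be checked by execution.

```python
# Toy Case2Code-style instance (our reading of the task, not the
# paper's schema): the model sees only the cases and must induce a
# program; execution checks the candidate against the cases.
cases = [((2,), 4), ((3,), 9), ((5,), 25)]   # hidden program: x -> x*x

def induced_candidate(x):                    # what the model proposes
    return x * x

assert all(induced_candidate(*args) == out for args, out in cases)
print("candidate consistent with all cases")
```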
- Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning [36.8749786658624]
Large Language Models (LLMs) exhibit zero-shot mathematical reasoning capacity as a behavior emergent with scale.
We show that small LMs can achieve reasonable arithmetic reasoning if arithmetic word problems are posed as a formalize-then-solve task.
arXiv Detail & Related papers (2023-12-09T13:20:49Z)
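To make the formalize-then-solve framing above concrete, here is a toy sketch in the same spirit (our illustration, not the paper's pipeline): the language model only has to translate the word problem into constraints, and the solver does the arithmetic.

```python
# Toy formalize-then-solve step (our illustration, not the paper's
# system): the LM's only job is to emit the constraints; the solver,
# not the model, does the arithmetic.
from z3 import Int, Solver, sat

# "Alice has 3 times as many apples as Bob; together they have 24."
alice, bob = Int("alice"), Int("bob")
s = Solver()
s.add(alice == 3 * bob, alice + bob == 24)  # what the LM would emit
if s.check() == sat:
    m = s.model()
    print(m[alice], m[bob])                 # prints: 18 6
```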
- LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs).
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores.
arXiv Detail & Related papers (2023-11-13T15:08:59Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Leveraging Large Language Models to Generate Answer Set Programs [5.532477732693001]
Large language models (LLMs) have demonstrated exceptional performance in various natural language processing tasks.
This paper proposes a neuro-symbolic method that combines the strengths of large language models and answer set programming.
arXiv Detail & Related papers (2023-07-15T03:40:55Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes an LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
- Translating Natural Language to Planning Goals with Large-Language Models [19.738395237639136]
Recent large language models (LLMs) have demonstrated remarkable performance on a variety of natural language processing (NLP) tasks.
Our central question is whether LLMs are able to translate goals specified in natural language to a structured planning language.
Our empirical results on GPT-3.5 variants show that LLMs are much better suited to translation than to planning.
arXiv Detail & Related papers (2023-02-10T09:17:52Z)
- PAL: Program-aided Language Models [112.94785609781503]
We present Program-Aided Language models (PAL), which read natural language problems and generate programs as intermediate reasoning steps.
PAL offloads the solution step to a programmatic runtime such as a Python interpreter.
We set new state-of-the-art results in all 12 benchmarks.
arXiv Detail & Related papers (2022-11-18T18:56:13Z)
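To sketch the offloading idea in the PAL entry above (a schematic example, not the PAL release): the model writes executable Python for the reasoning steps, and the interpreter, rather than the model, computes the answer.

```python
# Schematic PAL-style step (our sketch, not the PAL codebase): the LM
# emits Python for the reasoning steps; the interpreter computes the
# final answer, so the model never does the arithmetic itself.
lm_generated = """
# Roger has 5 tennis balls. He buys 2 cans with 3 balls each.
initial = 5
bought = 2 * 3
answer = initial + bought
"""
scope = {}
exec(lm_generated, scope)   # offload the solution step to the runtime
print(scope["answer"])      # prints: 11
```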