Combining Constraint Programming Reasoning with Large Language Model Predictions
- URL: http://arxiv.org/abs/2407.13490v1
- Date: Thu, 18 Jul 2024 13:15:55 GMT
- Title: Combining Constraint Programming Reasoning with Large Language Model Predictions
- Authors: Florian Régin, Elisabetta De Maria, Alexandre Bonlarron
- Abstract summary: Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation.
This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP's struggle with implementing "meaning'' and ML's difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) using LLM-generated domains. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
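The abstract describes embedding an LLM inside a CP search so that variable domains are generated on the fly by the model while structural constraints prune them. The following is a minimal, hypothetical sketch of that idea, not the paper's actual GenCP implementation: `toy_next_words` is a stand-in for a real LLM's top-k next-word predictions, and the constraint (a maximum word length) stands in for the paper's structural constraints.

```python
def toy_next_words(prefix):
    """Stand-in for an LLM: propose candidate next words for a prefix.
    A real LLM would rank and filter these by likelihood."""
    return ["the", "cat", "sat", "on", "a", "mat", "quietly"]

def satisfies(words, max_len):
    """Structural constraint: every word has at most `max_len` letters."""
    return all(len(w) <= max_len for w in words)

def gencp_search(prefix, n_words, max_len):
    """Depth-first search: the LLM proposes each variable's domain
    on the fly, and constraint checks prune violating branches."""
    if len(prefix) == n_words:
        return prefix
    for word in toy_next_words(prefix):
        candidate = prefix + [word]
        if satisfies(candidate, max_len):  # prune early
            result = gencp_search(candidate, n_words, max_len)
            if result is not None:
                return result
    return None

sentence = gencp_search([], n_words=4, max_len=3)
print(" ".join(sentence))
```

Unlike beam search, which keeps only a fixed number of high-likelihood candidates and may end with none satisfying the constraints, this search backtracks until every constraint holds, matching the guarantee the abstract claims for GenCP.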
Related papers
- Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
arXiv Detail & Related papers (2024-11-21T04:23:17Z)
- Realtime Generation of Streamliners with Large Language Models [20.580584407211486]
This paper presents a novel method for generating streamliners in constraint programming using Large Language Models (LLMs).
StreamLLM generates streamliners for problems specified in the MiniZinc constraint programming language.
arXiv Detail & Related papers (2024-08-16T14:17:26Z)
- Intertwining CP and NLP: The Generation of Unreasonably Constrained Sentences [49.86129209397701]
This paper presents the Constraints First Framework to remedy this issue.
It is solved by a constraint programming method that combines linguistic properties with more classical constraints.
The effectiveness of this approach is demonstrated by tackling a new, more tediously constrained text generation problem.
arXiv Detail & Related papers (2024-06-15T17:40:49Z)
- From LLMs to Actions: Latent Codes as Bridges in Hierarchical Robot Control [58.72492647570062]
We introduce our method -- Learnable Latent Codes as Bridges (LCB) -- as an alternate architecture to overcome limitations.
We find that LCB outperforms baselines that leverage pure language as the interface layer on tasks that require reasoning and multi-step behaviors.
arXiv Detail & Related papers (2024-05-08T04:14:06Z)
- Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models [49.74036826946397]
This study investigates constrained text generation for large language models (LLMs).
Our research mainly focuses on mainstream open-source LLMs, categorizing constraints into lexical, structural, and relation-based types.
Results illuminate LLMs' capacity and deficiency to incorporate constraints and provide insights for future developments in constrained text generation.
arXiv Detail & Related papers (2023-10-25T03:58:49Z)
- Tractable Control for Autoregressive Language Generation [82.79160918147852]
We propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models.
We show that GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation.
Our work opens up new avenues for controlling large language models and also motivates the development of more expressive TPMs.
arXiv Detail & Related papers (2023-04-15T00:19:44Z)
- On Codex Prompt Engineering for OCL Generation: An Empirical Study [10.184056098238765]
The Object Constraint Language (OCL) is a declarative language that adds constraints and object query expressions to MOF models.
Recent advancements in LLMs, such as GPT-3, have shown their capability in many NLP tasks.
We investigate the reliability of OCL constraints generated by Codex from natural language specifications.
arXiv Detail & Related papers (2023-03-28T18:50:51Z)
- Language Generation via Combinatorial Constraint Satisfaction: A Tree Search Enhanced Monte-Carlo Approach [24.897552102098324]
We present a framework to allow specification of constraints for sentence generation.
We propose TSMH, an efficient method to generate high likelihood sentences with respect to a pre-trained language model.
Our approach is highly flexible, requires no task-specific training, and leverages efficient constraint satisfaction solving techniques.
arXiv Detail & Related papers (2020-11-24T19:21:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.