Combining Constraint Programming Reasoning with Large Language Model Predictions
- URL: http://arxiv.org/abs/2407.13490v1
- Date: Thu, 18 Jul 2024 13:15:55 GMT
- Title: Combining Constraint Programming Reasoning with Large Language Model Predictions
- Authors: Florian Régin, Elisabetta De Maria, Alexandre Bonlarron
- Abstract summary: Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation.
This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP's struggle with implementing "meaning" and ML's difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) using LLM-generated domains. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
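The abstract's core mechanism can be sketched as a backtracking search in which the domain of each word variable is generated on the fly by a language model and pruned by CP constraints. This is a minimal illustrative sketch, not the paper's actual GenCP implementation: the "LLM" is a toy lookup table, and names like `top_k_predictions` and the length constraint are assumptions for the demo.

```python
# Hedged sketch of the GenCP idea: CP-style backtracking where each word
# variable's domain is produced on the fly by a language model, and a
# structural constraint prunes partial sentences. The "LLM" is a toy stub.

# Toy stand-in for an LLM: maps a prefix (tuple of words) to candidate next words.
TOY_LM = {
    (): ["the", "a"],
    ("the",): ["antelope", "cat"],
    ("a",): ["dog"],
    ("the", "cat"): ["sleeps", "runs"],
    ("a", "dog"): ["barks"],
}

def top_k_predictions(prefix):
    """On-the-fly domain generation: ask the (toy) LM for candidate next words."""
    return TOY_LM.get(tuple(prefix), [])

def gencp_search(n_words, constraint):
    """Depth-first search: the LM proposes domains, CP prunes with `constraint`."""
    def extend(prefix):
        if len(prefix) == n_words:
            return prefix  # every extension was already checked, so it is valid
        for word in top_k_predictions(prefix):
            candidate = prefix + [word]
            if constraint(candidate):       # prune violating partial sentences
                result = extend(candidate)
                if result is not None:
                    return result
        return None                          # dead end: backtrack
    return extend([])

# Structural constraint handled by CP: every word has at most 6 letters,
# so "antelope" is pruned before the search ever extends it.
short_words = lambda words: all(len(w) <= 6 for w in words)
print(gencp_search(3, short_words))  # -> ['the', 'cat', 'sleeps']
```

Beam Search, by contrast, keeps only the highest-probability prefixes and can end with no constraint-satisfying sentence at all; the backtracking search above returns a satisfying sentence whenever the generated domains contain one.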
Related papers
- "Don't Do That!": Guiding Embodied Systems through Large Language Model-based Constraint Generation [40.61171036032532]
Large language models (LLMs) have spurred interest in robotic navigation that incorporates complex constraints from natural language into the planning problem. In this paper, we propose a constraint generation framework that uses LLMs to translate constraints into Python functions. We show that these LLM-generated functions accurately describe even complex mathematical constraints, and apply them to point cloud representations with traditional search algorithms.
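The pipeline this abstract describes can be illustrated with a tiny example. This is my assumption of the shape of such a generated function, not code from the paper: the constraint function is written by hand here (in the paper, by an LLM), and the coordinates and threshold are invented for the demo.

```python
# Illustrative sketch: a natural-language constraint such as
# "keep at least 1.0 m from the table at (2, 3)" is translated into a
# Python function, then applied to a toy point cloud of candidate waypoints.
import math

# The kind of function an LLM might generate from the instruction above.
def satisfies_constraint(point, table_xy=(2.0, 3.0), min_dist=1.0):
    dx, dy = point[0] - table_xy[0], point[1] - table_xy[1]
    return math.hypot(dx, dy) >= min_dist

# Filter candidate waypoints with the generated constraint, as a traditional
# search algorithm would before expanding nodes.
point_cloud = [(0.0, 0.0), (2.2, 3.1), (5.0, 5.0)]
valid = [p for p in point_cloud if satisfies_constraint(p)]
print(valid)  # (2.2, 3.1) is ~0.22 m from the table, so it is rejected
```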
arXiv Detail & Related papers (2025-06-04T22:47:53Z) - Large Language Model Meets Constraint Propagation [42.54350568915404]
GenCP combines LLM predictions with Constraint Programming (CP) reasoning to generate fluent text. We improve GenCP by integrating Masked Language Models (MLMs) for domain generation, which allows constraint propagation. Our evaluation on COLLIE benchmarks demonstrates that incorporating domain preview via bidirectional calls significantly improves GenCP's performance.
arXiv Detail & Related papers (2025-05-29T21:18:12Z) - Cost-Effective Text Clustering with Large Language Models [15.179854529085544]
This paper proposes TECL, a cost-effective framework that taps into the feedback from large language models for accurate text clustering.
Under the hood, TECL adopts our EdgeLLM or TriangleLLM to construct must-link/cannot-link constraints for text pairs.
Our experiments on multiple benchmark datasets exhibit that TECL consistently and considerably outperforms existing solutions in unsupervised text clustering.
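The must-link/cannot-link mechanism mentioned above has a standard constrained-clustering shape, sketched here. This is my assumption of the mechanism, not TECL's actual code: the pairwise constraints stand in for the kind of feedback an LLM oracle could supply for text pairs, and they are merged with a union-find structure.

```python
# Minimal sketch: pairwise must-link / cannot-link constraints (the kind an
# LLM could supply for text pairs) merged into clusters via union-find.

def cluster_with_constraints(items, must_link, cannot_link):
    parent = {x: x for x in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Merge every must-link pair into one cluster.
    for a, b in must_link:
        parent[find(a)] = find(b)

    # Sanity check: no cannot-link pair may end up in the same cluster.
    for a, b in cannot_link:
        if find(a) == find(b):
            raise ValueError(f"inconsistent constraints on {a!r} and {b!r}")

    groups = {}
    for x in items:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

docs = ["apple pie", "fruit tart", "linux kernel", "unix shell"]
clusters = cluster_with_constraints(
    docs,
    must_link=[("apple pie", "fruit tart"), ("linux kernel", "unix shell")],
    cannot_link=[("apple pie", "linux kernel")],
)
print(clusters)  # two clusters: the desserts and the operating-system docs
```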
arXiv Detail & Related papers (2025-04-22T06:57:49Z) - Pel, A Programming Language for Orchestrating AI Agents [1.223779595809275]
Pel is a novel programming language designed to bridge the gap between function/tool calling and direct code generation. Inspired by the strengths of Lisp, Elixir, Gleam, and Haskell, Pel provides a syntactically simple, homoiconic, and semantically rich platform. Key features include a powerful piping mechanism for linear composition, first-class closures enabling easy partial application and functional patterns, built-in support for natural language conditions evaluated by LLMs, and an advanced Read-Eval-Print-Loop (REPeL) with Common Lisp-style restarts and LLM-powered helper agents for automated
arXiv Detail & Related papers (2025-04-03T18:46:53Z) - Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
arXiv Detail & Related papers (2024-11-21T04:23:17Z) - Realtime Generation of Streamliners with Large Language Models [20.580584407211486]
This paper presents a novel method for generating streamliners in constraint programming using Large Language Models (LLMs).
StreamLLM generates streamliners for problems specified in the MiniZinc constraint programming language.
arXiv Detail & Related papers (2024-08-16T14:17:26Z) - Intertwining CP and NLP: The Generation of Unreasonably Constrained Sentences [49.86129209397701]
This paper presents the Constraints First Framework to remedy this issue.
It is solved by a constraint programming method that combines linguistic properties with more classical constraints.
The effectiveness of this approach is demonstrated by tackling a new more tediously constrained text generation problem.
arXiv Detail & Related papers (2024-06-15T17:40:49Z) - From LLMs to Actions: Latent Codes as Bridges in Hierarchical Robot Control [58.72492647570062]
We introduce our method -- Learnable Latent Codes as Bridges (LCB) -- as an alternate architecture to overcome limitations.
We find that LCB outperforms baselines that leverage pure language as the interface layer on tasks that require reasoning and multi-step behaviors.
arXiv Detail & Related papers (2024-05-08T04:14:06Z) - Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models [49.74036826946397]
This study investigates constrained text generation for large language models (LLMs).
Our research mainly focuses on mainstream open-source LLMs, categorizing constraints into lexical, structural, and relation-based types.
Results illuminate LLMs' capacity and deficiency to incorporate constraints and provide insights for future developments in constrained text generation.
arXiv Detail & Related papers (2023-10-25T03:58:49Z) - Tractable Control for Autoregressive Language Generation [82.79160918147852]
We propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models.
We show that GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation.
Our work opens up new avenues for controlling large language models and also motivates the development of more expressive TPMs.
arXiv Detail & Related papers (2023-04-15T00:19:44Z) - On Codex Prompt Engineering for OCL Generation: An Empirical Study [10.184056098238765]
The Object Constraint Language (OCL) is a declarative language that adds constraints and object query expressions to MOF models.
Recent advancements in LLMs, such as GPT-3, have shown their capability in many NLP tasks.
We investigate the reliability of OCL constraints generated by Codex from natural language specifications.
arXiv Detail & Related papers (2023-03-28T18:50:51Z) - Language Generation via Combinatorial Constraint Satisfaction: A Tree Search Enhanced Monte-Carlo Approach [24.897552102098324]
We present a framework to allow specification of constraints for sentence generation.
We propose TSMH, an efficient method to generate high likelihood sentences with respect to a pre-trained language model.
Our approach is highly flexible, requires no task-specific training, and leverages efficient constraint satisfaction solving techniques.
arXiv Detail & Related papers (2020-11-24T19:21:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.