Controlled Agentic Planning & Reasoning for Mechanism Synthesis
- URL: http://arxiv.org/abs/2505.17607v2
- Date: Wed, 08 Oct 2025 13:58:13 GMT
- Title: Controlled Agentic Planning & Reasoning for Mechanism Synthesis
- Authors: João Pedro Gandarela, Thiago Rios, Stefan Menzel, André Freitas,
- Abstract summary: This work presents a dual-agent LLM-based reasoning framework for automated planar mechanism synthesis. From a natural-language task description, the system composes symbolic constraints and equations, generates and parametrises simulation code, and iteratively refines designs via critic-driven feedback.
- Score: 18.8323743697237
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work presents a dual-agent LLM-based reasoning framework for automated planar mechanism synthesis that tightly couples linguistic specification with symbolic representation and simulation. From a natural-language task description, the system composes symbolic constraints and equations, generates and parametrises simulation code, and iteratively refines designs via critic-driven feedback, including symbolic regression and geometric distance metrics, closing an actionable linguistic/symbolic optimisation loop. To evaluate the approach, we introduce MSynth, a benchmark of analytically defined planar trajectories. Empirically, critic feedback and iterative refinement yield large improvements (up to 90% on individual tasks) and statistically significant gains per the Wilcoxon signed-rank test. Symbolic-regression prompts provide deeper mechanistic insight primarily when paired with larger models or architectures with appropriate inductive biases (e.g., LRM).
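The critic-driven refinement loop described in the abstract can be illustrated with a minimal sketch. All names here (`propose`, `critic`, `refine`) are hypothetical stand-ins: the actual system uses an LLM designer agent and simulation-backed metrics, while this toy replaces both with random perturbation and a mean-squared-error critic.

```python
import random

def propose(params):
    # Hypothetical designer step: perturb the current mechanism parameters.
    return [p + random.uniform(-0.1, 0.1) for p in params]

def critic(params, target):
    # Hypothetical critic: a geometric distance metric (mean squared error)
    # between the candidate parameters and a target descriptor.
    return sum((p - t) ** 2 for p, t in zip(params, target)) / len(params)

def refine(target, iterations=200, seed=0):
    random.seed(seed)
    best = [0.0] * len(target)
    best_score = critic(best, target)
    for _ in range(iterations):
        candidate = propose(best)
        score = critic(candidate, target)
        if score < best_score:  # accept only designs the critic prefers
            best, best_score = candidate, score
    return best, best_score

final_params, final_error = refine([0.5, -0.3, 0.8])
```

The essential structure matches the paper's loop: generate a design, score it with critic feedback, and keep refinements that reduce the distance metric.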
Related papers
- SIGMA: Scalable Spectral Insights for LLM Collapse [51.863164847253366]
We introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework for model collapse. By deriving deterministic bounds on the Gram matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. We demonstrate that SIGMA effectively captures the transition towards collapsed states, offering theoretical insights into the mechanics of collapse.
arXiv Detail & Related papers (2026-01-06T19:47:11Z) - Bayesian Symbolic Regression via Posterior Sampling [0.0]
Symbolic regression is a powerful tool for discovering governing equations directly from data, but its sensitivity to noise hinders its broader application. This paper introduces a Sequential Monte Carlo framework for Bayesian symbolic regression that approximates the posterior distribution over symbolic expressions.
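The Sequential Monte Carlo idea can be sketched in miniature: maintain a population of candidate expressions, reweight by likelihood, resample, and mutate. This is an illustrative toy, not the paper's method: the four-expression hypothesis space, the Gaussian noise model, and the mutation rate are all assumptions chosen for brevity.

```python
import math
import random

random.seed(1)

# Tiny hypothesis space standing in for the space of symbolic expressions.
EXPRS = {
    "x": lambda x: x,
    "x^2": lambda x: x * x,
    "sin(x)": math.sin,
    "exp(x)": math.exp,
}

def log_likelihood(name, xs, ys, sigma=0.1):
    # Gaussian noise model around the candidate expression's predictions.
    f = EXPRS[name]
    return sum(-((f(x) - y) ** 2) / (2 * sigma ** 2) for x, y in zip(xs, ys))

def smc_posterior(xs, ys, n_particles=200, steps=5):
    names = list(EXPRS)
    particles = [random.choice(names) for _ in range(n_particles)]
    for _ in range(steps):
        # Reweight each particle by its likelihood, then resample.
        weights = [math.exp(log_likelihood(p, xs, ys)) for p in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
        # Simple mutation kernel: occasionally jump to a random expression.
        particles = [random.choice(names) if random.random() < 0.1 else p
                     for p in particles]
    return {e: particles.count(e) / n_particles for e in EXPRS}

xs = [i / 10 for i in range(10)]
ys = [x * x for x in xs]       # data generated by x^2
posterior = smc_posterior(xs, ys)
```

After a few reweight-resample-mutate rounds, the particle population concentrates on the expression that generated the data, approximating a posterior over expressions rather than a single point estimate.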
arXiv Detail & Related papers (2025-12-11T17:38:20Z) - URDF-Anything: Constructing Articulated Objects with 3D Multimodal Language Model [76.08429266631823]
We propose an end-to-end automatic reconstruction framework based on a 3D multimodal large language model (MLLM). URDF-Anything utilizes an autoregressive prediction framework based on point-cloud and text multimodal input to jointly optimize geometric segmentation and kinematic parameter prediction. Experiments on both simulated and real-world datasets demonstrate that our method significantly outperforms existing approaches.
arXiv Detail & Related papers (2025-11-02T13:45:51Z) - CircuitSense: A Hierarchical Circuit System Benchmark Bridging Visual Comprehension and Symbolic Reasoning in Engineering Design Process [29.38618453695266]
Engineering design operates through hierarchical abstraction from system specifications to component implementations. While Multi-modal Large Language Models (MLLMs) excel at natural image tasks, their ability to extract mathematical models from technical diagrams remains unexplored. We present CircuitSense, a benchmark evaluating circuit understanding across this hierarchy through 8,006+ problems spanning component-level schematics to system-level block diagrams.
arXiv Detail & Related papers (2025-09-26T13:32:14Z) - Hierarchical Bayesian Operator-induced Symbolic Regression Trees for Structural Learning of Scientific Expressions [3.8545239266455185]
We develop a hierarchical Bayesian framework for symbolic regression that represents scientific laws as ensembles of tree-structured symbolic expressions with a regularized tree prior. We establish near-minimax rates of Bayesian posterior concentration, providing the first rigorous guarantee in the context of symbolic regression. Empirical evaluation demonstrates robust performance of our proposed methodology against state-of-the-art competing methods.
arXiv Detail & Related papers (2025-09-24T02:42:25Z) - Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations [33.04242471060053]
Large Language Models (LLMs) exhibit a robust mastery of syntax when processing and generating text. No comprehensive study has yet established whether a model's probing accuracy reliably predicts its downstream syntactic performance.
arXiv Detail & Related papers (2025-06-20T01:46:50Z) - Sparse Interpretable Deep Learning with LIES Networks for Symbolic Regression [22.345828337550575]
Symbolic regression aims to discover closed-form mathematical expressions that accurately describe data. Existing SR methods often rely on population-based search or autoregressive modeling. We introduce LIES (Logarithm, Identity, Exponential, Sine), a fixed neural network architecture with interpretable primitive activations that are optimized to model symbolic expressions.
arXiv Detail & Related papers (2025-06-09T22:05:53Z) - Exploiting Symbolic Heuristics for the Synthesis of Domain-Specific Temporal Planning Guidance using Reinforcement Learning [51.54559117314768]
Recent work investigated the use of Reinforcement Learning (RL) for the synthesis of guidance to improve the performance of temporal planners. We propose an evolution of this learning and planning framework that focuses on exploiting the information provided by symbolic heuristics during both the RL and planning phases.
arXiv Detail & Related papers (2025-05-19T17:19:13Z) - Neurosymbolic artificial intelligence via large language models and coherence-driven inference [3.522062800701924]
We generate sets of propositions that objectively instantiate graphs that support coherence-driven inference. We benchmark the ability of large language models to reconstruct coherence graphs from propositions expressed in natural language.
arXiv Detail & Related papers (2025-02-19T18:53:16Z) - VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations. We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z) - Procedural Synthesis of Synthesizable Molecules [22.905205379063148]
Design of synthetically accessible molecules and recommending analogs to unsynthesizable molecules are important problems for accelerating molecular discovery. We reconceptualize both problems using ideas from program synthesis. We create a bilevel framework for reasoning about the space of synthesis pathways.
arXiv Detail & Related papers (2024-08-24T04:32:36Z) - Autoregressive Speech Synthesis without Vector Quantization [135.4776759536272]
We present MELLE, a novel continuous-valued token based language modeling approach for text-to-speech synthesis (TTS). MELLE autoregressively generates continuous mel-spectrogram frames directly from text condition. MELLE mitigates robustness issues by avoiding the inherent flaws of sampling vector-quantized codes.
arXiv Detail & Related papers (2024-07-11T14:36:53Z) - The Buffer Mechanism for Multi-Step Information Reasoning in Language Models [52.77133661679439]
Investigating internal reasoning mechanisms of large language models can help us design better model architectures and training strategies.
In this study, we constructed a symbolic dataset to investigate the mechanisms by which Transformer models employ vertical thinking strategy.
We proposed a random matrix-based algorithm to enhance the model's reasoning ability, resulting in a 75% reduction in the training time required for the GPT-2 model.
arXiv Detail & Related papers (2024-05-24T07:41:26Z) - Class Symbolic Regression: Gotta Fit 'Em All [0.0]
We introduce 'Class Symbolic Regression' (Class SR), the first framework for automatically finding a single analytical functional form that accurately fits multiple datasets.
This hierarchical framework leverages the common constraint that all the members of a single class of physical phenomena follow a common governing law.
We introduce the first Class SR benchmark, comprising a series of synthetic physical challenges specifically designed to evaluate such algorithms.
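The Class SR idea of one functional form governing many datasets can be sketched with a toy fit. This example assumes a hypothetical power-law family y = a_i * x**b with a class-wide exponent b and per-dataset scales a_i; the function name and the grid-search strategy are illustrative, not the paper's algorithm.

```python
def fit_shared_law(datasets, exponents=(1, 2, 3)):
    """Grid-search a shared exponent b for the class-wide law y = a_i * x**b,
    fitting each dataset's scale a_i in closed form by least squares."""
    best_b, best_scales, best_err = None, None, float("inf")
    for b in exponents:
        total_err, scales = 0.0, []
        for xs, ys in datasets:
            phi = [x ** b for x in xs]
            # Closed-form least-squares scale for this dataset.
            a = sum(p * y for p, y in zip(phi, ys)) / sum(p * p for p in phi)
            scales.append(a)
            total_err += sum((a * p - y) ** 2 for p, y in zip(phi, ys))
        if total_err < best_err:
            best_b, best_scales, best_err = b, scales, total_err
    return best_b, best_scales, best_err

# Two datasets generated by the same law y = a * x**2 with different scales.
data = [([1.0, 2.0, 3.0], [1.0, 4.0, 9.0]),
        ([1.0, 2.0, 3.0], [3.0, 12.0, 27.0])]
b, scales, err = fit_shared_law(data)
```

The shared exponent is selected by the pooled error across all datasets, which is exactly the hierarchical constraint Class SR exploits: members of one class share a governing law while keeping their own parameter values.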
arXiv Detail & Related papers (2023-12-04T11:45:44Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Composition, Attention, or Both? [8.22379888383833]
We propose a novel architecture called Composition Attention Grammars (CAGs).
We investigate whether composition function and self-attention mechanism can both induce human-like syntactic generalization.
arXiv Detail & Related papers (2022-10-24T05:30:02Z) - Abstract Interpretation for Generalized Heuristic Search in Model-Based Planning [50.96320003643406]
Domain-general model-based planners often derive their generality by constructing search heuristics through the relaxation of symbolic world models.
We illustrate how abstract interpretation can serve as a unifying framework for these abstractions, extending the reach of search to richer world models.
These abstractions can also be integrated with learning, allowing agents to jumpstart planning in novel world models via abstraction-derived information.
arXiv Detail & Related papers (2022-08-05T00:22:11Z) - Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.