Intermediate Languages Matter: Formal Languages and LLMs affect Neurosymbolic Reasoning
- URL: http://arxiv.org/abs/2509.04083v1
- Date: Thu, 04 Sep 2025 10:25:50 GMT
- Title: Intermediate Languages Matter: Formal Languages and LLMs affect Neurosymbolic Reasoning
- Authors: Alexander Beiser, David Penz, Nysret Musliu
- Abstract summary: Large language models (LLMs) achieve astonishing results on a wide range of tasks, but their formal reasoning ability still lags behind. A promising approach is Neurosymbolic LLM reasoning. This paper demonstrates that one previously overlooked factor is the choice of the formal language.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) achieve astonishing results on a wide range of tasks. However, their formal reasoning ability still lags behind. A promising approach is Neurosymbolic LLM reasoning, which uses an LLM to translate a natural-language problem into a formal language and a symbolic solver to derive correct results from that encoding. Still, the factors contributing to the success of Neurosymbolic LLM reasoning remain unclear. This paper demonstrates that one previously overlooked factor is the choice of the formal language. We introduce the intermediate language challenge: selecting a suitable formal language for neurosymbolic reasoning. By comparing four formal languages across three datasets and seven LLMs, we show that the choice of formal language affects both syntactic and semantic reasoning capabilities. We also discuss the varying effects across different LLMs.
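To make the described pipeline concrete, below is a minimal, self-contained sketch, assuming a hypothetical `llm_translate` stub in place of a real LLM call and a brute-force truth-table check in place of a real symbolic solver; the paper's actual systems, prompts, and formal languages may differ. The sketch illustrates the division of labor: the LLM only produces the formal encoding, and the symbolic component alone is responsible for the correctness of the derived answer.

```python
import re
from itertools import product

def llm_translate(nl_problem: str) -> tuple[list[str], str]:
    """Hypothetical stand-in for the LLM translation step. A real system
    would prompt a model to emit the formal encoding; here it is
    hard-coded so the sketch runs without any API access."""
    # "If it rains, the street is wet. It rains." -> does wetness follow?
    premises = ["(not rains) or wet",  # rains implies wet
                "rains"]
    conjecture = "wet"
    return premises, conjecture

def atoms_of(formulas: list[str]) -> list[str]:
    """Collect the propositional atoms appearing in the encoding."""
    keywords = {"not", "and", "or"}
    found = set()
    for f in formulas:
        found |= {t for t in re.findall(r"[a-z_]+", f) if t not in keywords}
    return sorted(found)

def entails(premises: list[str], conjecture: str) -> bool:
    """Stand-in for the symbolic solver: a truth-table check that every
    assignment satisfying all premises also satisfies the conjecture."""
    atoms = atoms_of(premises + [conjecture])
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(eval(p, {}, env) for p in premises) and not eval(conjecture, {}, env):
            return False
    return True

if __name__ == "__main__":
    premises, conjecture = llm_translate("If it rains, the street is wet. It rains.")
    print("Entailed:", entails(premises, conjecture))  # Entailed: True
```

Note that the intermediate language challenge from the abstract lives entirely in the target encoding that `llm_translate` produces: swapping propositional formulas for, say, answer set programming or first-order logic changes the translation target and the solver, but not the overall architecture.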
Related papers
- Understanding Syllogistic Reasoning in LLMs from Formal and Natural Language Perspectives
We study syllogistic reasoning in LLMs from the logical and natural language perspectives. We use 14 large language models and investigate their syllogistic reasoning capabilities in terms of symbolic inferences as well as natural language understanding.
arXiv Detail & Related papers (2025-12-14T09:50:10Z)
- Do Large Language Models Excel in Complex Logical Reasoning with Formal Language?
Large Language Models (LLMs) have been shown to achieve breakthrough performance on complex logical reasoning tasks. This paper aims to conduct a comprehensive evaluation of LLMs across various logical reasoning problems utilizing formal languages.
arXiv Detail & Related papers (2025-05-22T17:57:23Z)
- When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners
We show that language-specific ablation consistently boosts multilingual reasoning performance. Compared to post-training, our training-free ablation achieves comparable or superior results with minimal computational overhead.
arXiv Detail & Related papers (2025-05-21T08:35:05Z)
- On the Thinking-Language Modeling Gap in Large Language Models
We show that there is a significant gap between the modeling of languages and thoughts. We propose a new prompt technique termed Language-of-Thoughts (LoT) to demonstrate and alleviate this gap.
arXiv Detail & Related papers (2025-05-19T09:31:52Z)
- Intermediate Languages Matter: Formal Choice Drives Neurosymbolic LLM Reasoning
We show that the choice of formal language affects both the syntactic and the semantic reasoning capability. We conclude that on average, context-aware encodings help LLMs to reason, while there is no apparent effect of using comments or markdown syntax.
arXiv Detail & Related papers (2025-02-24T14:49:52Z)
- The LLM Language Network: A Neuroscientific Approach for Identifying Causally Task-Relevant Units
Large language models (LLMs) exhibit remarkable capabilities on not just language tasks, but also various tasks that are not linguistic in nature. In the human brain, neuroscience has identified a core language system that selectively and causally supports language processing. We identify language-selective units within 18 popular LLMs, using the same localization approach that is used in neuroscience.
arXiv Detail & Related papers (2024-11-04T17:09:10Z)
- How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z)
- Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models
We propose Meta-Reasoning to broaden the applicability and adaptability of symbolic methods in the real world.
This method empowers LLMs to deconstruct reasoning-independent semantic information into generic symbolic representations.
We conduct extensive experiments on more than ten datasets, encompassing conventional reasoning tasks such as arithmetic, symbolic, and logical reasoning, as well as more complex interactive reasoning tasks such as theory-of-mind reasoning.
arXiv Detail & Related papers (2023-06-30T17:38:10Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners
Large Language Models (LLMs) have excited the natural language processing and machine learning communities in recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning
Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems.
This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving.
arXiv Detail & Related papers (2023-05-20T22:25:38Z)