RLAD: Training LLMs to Discover Abstractions for Solving Reasoning Problems
- URL: http://arxiv.org/abs/2510.02263v1
- Date: Thu, 02 Oct 2025 17:44:23 GMT
- Title: RLAD: Training LLMs to Discover Abstractions for Solving Reasoning Problems
- Authors: Yuxiao Qu, Anikait Singh, Yoonho Lee, Amrith Setlur, Ruslan Salakhutdinov, Chelsea Finn, Aviral Kumar
- Abstract summary: We train models to be capable of proposing multiple abstractions given a problem, followed by RL that incentivizes building a solution. This results in a two-player RL training paradigm, abbreviated as RLAD, that jointly trains an abstraction generator and a solution generator. We show that allocating more test-time compute to generating abstractions is more beneficial for performance than generating more solutions at large test budgets.
- Score: 98.98963933669751
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reasoning requires going beyond pattern matching or memorization of solutions to identify and implement "algorithmic procedures" that can be used to deduce answers to hard problems. Doing so requires identifying the most relevant primitives, intermediate results, or shared procedures, and building upon them. While RL post-training on long chains of thought ultimately aims to uncover this kind of algorithmic behavior, most reasoning traces learned by large models fail to consistently capture or reuse procedures, instead drifting into verbose and degenerate exploration. To enable more effective reasoning, we introduce reasoning abstractions: concise natural language descriptions of procedural and factual knowledge that guide the model toward learning successful reasoning. We train models to be capable of proposing multiple abstractions given a problem, followed by RL that incentivizes building a solution while using the information provided by these abstractions. This results in a two-player RL training paradigm, abbreviated as RLAD, that jointly trains an abstraction generator and a solution generator. This setup effectively enables structured exploration, decouples learning signals of abstraction proposal and solution generation, and improves generalization to harder problems. We also show that allocating more test-time compute to generating abstractions is more beneficial for performance than generating more solutions at large test budgets, illustrating the role of abstractions in guiding meaningful exploration.
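The decoupled learning signal the abstract describes can be sketched in a few lines: the abstraction generator is rewarded by the average success of the solution generator when conditioned on each proposed abstraction, rather than by any single solution trace. The sketch below is purely illustrative; the function names and toy "generators" are hypothetical stand-ins, not the paper's actual LLM training setup.

```python
import random

# Hypothetical sketch of the two-player reward decoupling: an abstraction
# generator proposes candidate hints, and each hint is scored by the mean
# success rate of a (stand-in) solution generator conditioned on it.

def propose_abstractions(problem, k=3):
    """Stand-in abstraction generator: emit k candidate hints."""
    return [f"hint-{i} for {problem}" for i in range(k)]

def solve(problem, abstraction):
    """Stand-in solution generator: a 'good' hint guarantees success;
    otherwise the solver succeeds only 20% of the time."""
    good = abstraction.startswith("hint-0")
    return 1.0 if good or random.random() < 0.2 else 0.0

def abstraction_reward(problem, abstraction, n_solutions=8):
    """Abstraction-generator reward: mean solver success conditioned on
    this abstraction, decoupled from any single solution trace."""
    return sum(solve(problem, abstraction) for _ in range(n_solutions)) / n_solutions

random.seed(0)
problem = "toy-problem"
scores = {a: abstraction_reward(problem, a) for a in propose_abstractions(problem)}
best = max(scores, key=scores.get)
```

Scoring abstractions by aggregate downstream success, rather than by individual solutions, is what lets the two learning signals be trained separately, as the abstract notes.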
Related papers
- Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents [54.73952501784257]
Humans learn abstractions and use them to plan efficiently and generalize quickly across tasks. We introduce TheoryCoder-2, a new large language model (LLM) agent that actively learns reusable abstractions. We conduct experiments on diverse environments, including BabyAI, Minihack, and VGDL games like Sokoban.
arXiv Detail & Related papers (2026-01-31T23:01:51Z) - An Introduction to Deep Reinforcement and Imitation Learning [0.0]
This document introduces DRL and DIL in the context of embodied agents. It is self-contained, presenting all necessary mathematical and machine learning concepts as they are needed.
arXiv Detail & Related papers (2025-12-08T21:21:01Z) - Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction [57.67217258741752]
Thinker is a hierarchical thinking model for deep search through multi-turn interaction. It decomposes complex problems into independently solvable sub-problems. Dependencies between sub-problems are passed as parameters via logical functions.
arXiv Detail & Related papers (2025-11-11T07:48:45Z) - AR$^2$: Adversarial Reinforcement Learning for Abstract Reasoning in Large Language Models [12.484537674896908]
We propose AR$^2$ (Adversarial Reinforcement Learning for Abstract Reasoning), a novel framework explicitly designed to enhance the abstraction abilities of large language models (LLMs). AR$^2$ employs a teacher model to transform kernel problems into narrative-rich, challenging descriptions without changing their fundamental logic. A student coding model is trained to solve these complex narrative problems by extracting their underlying computational kernels.
arXiv Detail & Related papers (2025-08-27T17:26:44Z) - AbstRaL: Augmenting LLMs' Reasoning by Reinforcing Abstract Thinking [38.8730008545358]
Large language models (LLMs) often lack robustness in their reasoning. Our approach focuses on "abstracting" reasoning problems. We find that this abstraction process is better acquired through reinforcement learning (RL) than just supervised fine-tuning.
arXiv Detail & Related papers (2025-06-09T13:34:50Z) - Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning [93.00629872970364]
Reinforcement learning (RL) has become the dominant paradigm for improving the performance of language models on complex reasoning tasks. We introduce SPARKLE, a fine-grained analytic framework to dissect the effects of RL across three key dimensions. We study whether difficult problems -- those yielding no RL signals and mixed-quality reasoning traces -- can still be effectively used for training.
arXiv Detail & Related papers (2025-06-05T07:53:59Z) - Disentangling Memory and Reasoning Ability in Large Language Models [97.26827060106581]
We propose a new inference paradigm that decomposes the complex inference process into two distinct and clear actions. Our experiment results show that this decomposition improves model performance and enhances the interpretability of the inference process.
arXiv Detail & Related papers (2024-11-20T17:55:38Z) - Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z) - Exploiting Multiple Abstractions in Episodic RL via Reward Shaping [23.61187560936501]
We consider a linear hierarchy of abstraction layers of the Markov Decision Process (MDP) underlying the target domain.
We propose a novel form of Reward Shaping where the solution obtained at the abstract level is used to offer rewards to the more concrete MDP.
arXiv Detail & Related papers (2023-02-28T13:22:29Z) - A Theory of Abstraction in Reinforcement Learning [18.976500531441346]
In this dissertation, I present a theory of abstraction in reinforcement learning.
I first offer three desiderata for functions that carry out the process of abstraction.
I then present a suite of new algorithms and analysis that clarify how agents can learn to abstract according to these desiderata.
arXiv Detail & Related papers (2022-03-01T12:46:28Z) - Learning Abstract Models for Strategic Exploration and Fast Reward Transfer [85.19766065886422]
We learn an accurate Markov Decision Process (MDP) over abstract states to avoid compounding errors.
Our approach achieves strong results on three of the hardest Arcade Learning Environment games.
We can reuse the learned abstract MDP for new reward functions, achieving higher reward in 1000x fewer samples than model-free methods trained from scratch.
arXiv Detail & Related papers (2020-07-12T03:33:50Z)