Inference-time Scaling of Diffusion Models through Classical Search
- URL: http://arxiv.org/abs/2505.23614v2
- Date: Sun, 05 Oct 2025 15:58:19 GMT
- Title: Inference-time Scaling of Diffusion Models through Classical Search
- Authors: Xiangcheng Zhang, Haowei Lin, Haotian Ye, James Zou, Jianzhu Ma, Yitao Liang, Yilun Du
- Abstract summary: We propose a general framework that orchestrates local and global search to efficiently navigate the generative space. We evaluate our approach on a range of challenging domains, including planning, offline reinforcement learning, and image generation. These results show that classical search provides a principled and practical foundation for inference-time scaling in diffusion models.
- Score: 90.77272206228946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical search algorithms have long underpinned modern artificial intelligence. In this work, we tackle the challenge of inference-time control in diffusion models -- adapting generated outputs to meet diverse test-time objectives -- using principles from classical search. We propose a general framework that orchestrates local and global search to efficiently navigate the generative space. It employs a theoretically grounded local search via annealed Langevin MCMC and performs compute-efficient global exploration using breadth-first and depth-first tree search. We evaluate our approach on a range of challenging domains, including planning, offline reinforcement learning, and image generation. Across all tasks, we observe significant gains in both performance and efficiency. These results show that classical search provides a principled and practical foundation for inference-time scaling in diffusion models. Project page at https://diffusion-inference-scaling.github.io/.
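As a rough illustration of the local-search component, the annealed Langevin MCMC idea can be sketched as follows. This is a minimal sketch under assumed interfaces, not the paper's implementation: `score_fn`, the noise schedule, and the step-size rule are all illustrative choices.

```python
import numpy as np

def annealed_langevin_search(score_fn, x0, sigmas, steps_per_level=20,
                             step_scale=0.1, seed=0):
    """Sketch of annealed Langevin MCMC local search (illustrative only).

    score_fn(x, sigma) is assumed to return grad_x log p_sigma(x), possibly
    tilted by a test-time objective; sigmas is a decreasing noise schedule.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for sigma in sigmas:
        eps = step_scale * sigma ** 2  # anneal the step size with the noise level
        for _ in range(steps_per_level):
            noise = rng.standard_normal(x.shape)
            # Langevin update: drift along the score plus injected Gaussian noise.
            x = x + eps * score_fn(x, sigma) + np.sqrt(2.0 * eps) * noise
    return x
```

With the score of a standard Gaussian, `score_fn = lambda x, s: -x`, the chain drifts toward the mode; in the paper's setting the score would instead come from the diffusion model (and the global tree search would branch over such local refinements).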
Related papers
- Learning-Based Hashing for ANN Search: Foundations and Early Advances [0.5279475826661642]
Hashing-based methods provide an efficient solution by mapping high-dimensional data into compact binary codes. Over the past two decades, a substantial body of work has explored learning to hash, where projection and quantisation functions are optimised from data. This article offers a foundational survey of early learning-based hashing methods, with an emphasis on the core ideas that shaped the field.
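To make the projection-and-quantisation idea concrete, here is a toy sketch that uses random hyperplanes in place of a learned hash; all names are illustrative, and a real learning-to-hash method would optimise the projection from data.

```python
import numpy as np

def fit_projection(data, n_bits, seed=0):
    """Toy stand-in for a learned hash function: random hyperplanes.

    Learning-to-hash methods would instead optimise these directions
    from data (e.g. to preserve neighbourhood structure).
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal((data.shape[1], n_bits))

def binary_codes(data, projection):
    # Quantise each projection to one bit via the sign function,
    # mapping high-dimensional vectors to compact binary codes.
    return (data @ projection > 0).astype(np.uint8)

def hamming_distance(a, b):
    # ANN search then compares codes with the cheap Hamming distance.
    return int(np.count_nonzero(a != b))
```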
arXiv Detail & Related papers (2025-10-05T09:59:56Z) - DeepSearch: Overcome the Bottleneck of Reinforcement Learning with Verifiable Rewards via Monte Carlo Tree Search [53.27052683356095]
We present DeepSearch, a framework that integrates Monte Carlo Tree Search directly into RLVR training. In contrast to existing methods that rely on tree search only at inference, DeepSearch embeds structured search into the training loop. Our contributions include: (1) a global frontier selection strategy that prioritizes promising nodes across the search tree, (2) selection with entropy-based guidance that identifies confident paths for supervision, and (3) adaptive replay buffer training with solution caching for efficiency.
arXiv Detail & Related papers (2025-09-29T20:00:29Z) - LLM-First Search: Self-Guided Exploration of the Solution Space [29.780554400938335]
Large Language Models (LLMs) have demonstrated remarkable improvements in reasoning and planning through increased test-time compute. We propose LLM-First Search (LFS), a novel LLM Self-Guided Search method.
arXiv Detail & Related papers (2025-06-05T16:27:49Z) - AXIOM: Learning to Play Games in Minutes with Expanding Object-Centric Models [41.429595107023125]
AXIOM is a novel architecture that integrates a minimal yet expressive set of core priors about object-centric dynamics and interactions. It combines the usual data efficiency and interpretability of Bayesian approaches with the across-task generalization usually associated with DRL. AXIOM masters various games within only 10,000 interaction steps, with both a small number of parameters compared to DRL, and without the computational expense of gradient-based optimization.
arXiv Detail & Related papers (2025-05-30T16:46:20Z) - Dynamic Search for Inference-Time Alignment in Diffusion Models [87.35944312589424]
We frame inference-time alignment in diffusion as a search problem and propose Dynamic Search for Diffusion (DSearch). DSearch subsamples from denoising processes and approximates intermediate node rewards. It also dynamically adjusts beam width and tree expansion to efficiently explore high-reward generations.
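The search idea can be sketched, under assumed interfaces, as a reward-pruned beam search over denoising steps; `denoise_step` and `reward_fn` below are hypothetical stand-ins for the paper's components, not its API.

```python
def beam_denoise(init_candidates, denoise_step, reward_fn, n_steps, beam_width):
    """Illustrative reward-guided beam search over a denoising process.

    denoise_step(x, t) is assumed to yield a few stochastic continuations
    of candidate x at step t; reward_fn approximates the reward of
    intermediate states, as DSearch does for intermediate nodes.
    """
    beam = list(init_candidates)
    for t in range(n_steps):
        # Expand every candidate, then prune back to the beam width
        # by the approximate intermediate reward.
        expanded = [y for x in beam for y in denoise_step(x, t)]
        beam = sorted(expanded, key=reward_fn, reverse=True)[:beam_width]
    return max(beam, key=reward_fn)
```

A dynamic variant would also adjust `beam_width` (and the branching inside `denoise_step`) across steps, concentrating compute on the high-uncertainty portions of the denoising trajectory.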
arXiv Detail & Related papers (2025-03-03T20:32:05Z) - Enhancing LLM Reasoning with Reward-guided Tree Search [95.06503095273395]
Developing o1-like reasoning approaches is challenging, and researchers have been making various attempts to advance this open area of research. We present a preliminary exploration into enhancing the reasoning abilities of LLMs through reward-guided tree search algorithms.
arXiv Detail & Related papers (2024-11-18T16:15:17Z) - Self-Supervised Learning for Covariance Estimation [3.04585143845864]
We propose to globally learn a neural network that will then be applied locally at inference time.
The architecture is based on the popular attention mechanism.
It can be pre-trained as a foundation model and then be repurposed for various downstream tasks, e.g., adaptive target detection in radar or hyperspectral imagery.
arXiv Detail & Related papers (2024-03-13T16:16:20Z) - Differentiable Tree Search Network [14.972768001402898]
Differentiable Tree Search Network (D-TSN) is a novel neural network architecture that significantly strengthens the inductive bias.
D-TSN employs a learned world model to conduct a fully differentiable online search.
We demonstrate that D-TSN outperforms popular model-free and model-based baselines.
arXiv Detail & Related papers (2024-01-22T02:33:38Z) - Lightweight Diffusion Models with Distillation-Based Block Neural Architecture Search [55.41583104734349]
We propose to automatically remove structural redundancy in diffusion models with our proposed Diffusion Distillation-based Block-wise Neural Architecture Search (NAS).
Given a larger pretrained teacher, we leverage DiffNAS to search for the smallest architecture which can achieve on-par or even better performance than the teacher.
Different from previous block-wise NAS methods, DiffNAS contains a block-wise local search strategy and a retraining strategy with a joint dynamic loss.
arXiv Detail & Related papers (2023-11-08T12:56:59Z) - Neural Algorithmic Reasoning Without Intermediate Supervision [21.852775399735005]
We focus on learning neural algorithmic reasoning only from the input-output pairs without appealing to the intermediate supervision.
We build a self-supervised objective that can regularise intermediate computations of the model without access to the algorithm trajectory.
We demonstrate that our approach is competitive with its trajectory-supervised counterpart on tasks from the CLRS Algorithmic Reasoning Benchmark.
arXiv Detail & Related papers (2023-06-23T09:57:44Z) - $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple-but-efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS searching process.
Experimental results on NAS-Bench-201 show that our proposed method can help to stabilize the searching process and makes the searched network more transferable across different datasets.
arXiv Detail & Related papers (2022-03-03T11:47:14Z) - An Approach for Combining Multimodal Fusion and Neural Architecture Search Applied to Knowledge Tracing [6.540879944736641]
We propose a sequential model-based optimization approach that combines multimodal fusion and neural architecture search within one framework.
We evaluate our methods on two public real-world datasets, showing that the discovered model achieves superior performance.
arXiv Detail & Related papers (2021-11-08T13:43:46Z) - Efficient Model Performance Estimation via Feature Histories [27.008927077173553]
An important step in the task of neural network design is the evaluation of a model's performance.
In this work, we use the evolution history of features of a network during the early stages of training to build a proxy classifier.
We show that our method can be combined with multiple search algorithms to find better solutions to a wide range of tasks.
arXiv Detail & Related papers (2021-03-07T20:41:57Z) - AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.