Training-Free Guidance Beyond Differentiability: Scalable Path Steering with Tree Search in Diffusion and Flow Models
- URL: http://arxiv.org/abs/2502.11420v1
- Date: Mon, 17 Feb 2025 04:20:39 GMT
- Title: Training-Free Guidance Beyond Differentiability: Scalable Path Steering with Tree Search in Diffusion and Flow Models
- Authors: Yingqing Guo, Yukang Yang, Hui Yuan, Mengdi Wang
- Abstract summary: This work focuses on training-free guidance addressing challenges from non-differentiable objectives and discrete data distributions. We propose an algorithmic framework TreeG: Tree Search-Based Path Steering Guidance. Our experiments show that TreeG consistently outperforms the top guidance baselines in symbolic music generation, small molecule generation, and enhancer DNA design.
- Score: 39.13996838237359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training-free guidance enables controlled generation in diffusion and flow models, but most existing methods assume differentiable objectives and rely on gradients. This work focuses on training-free guidance addressing challenges from non-differentiable objectives and discrete data distributions. We propose an algorithmic framework TreeG: Tree Search-Based Path Steering Guidance, applicable to both continuous and discrete settings in diffusion and flow models. TreeG offers a unified perspective on training-free guidance: proposing candidates for the next step, evaluating candidates, and selecting the best to move forward, enhanced by a tree search mechanism over active paths or parallelizing exploration. We comprehensively investigate the design space of TreeG over the candidate proposal module and the evaluation function, instantiating TreeG into three novel algorithms. Our experiments show that TreeG consistently outperforms the top guidance baselines in symbolic music generation, small molecule generation, and enhancer DNA design, all of which involve non-differentiable challenges. Additionally, we identify an inference-time scaling law showing TreeG's scalability in inference-time computation.
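The abstract's unified view of training-free guidance reduces to a propose / evaluate / select loop run over multiple active paths. Below is a minimal Python sketch of that loop under stated assumptions: the helper names (tree_search_guidance, propose_fn, value_fn) and the 1-D toy usage are illustrative, not the authors' released implementation, and the objective may be any black-box scorer, including a non-differentiable one.

```python
# Minimal sketch of the propose / evaluate / select loop with multiple active
# paths described in the abstract. Function names and the toy usage below are
# illustrative assumptions, not the authors' reference implementation.
import numpy as np

def tree_search_guidance(x_init, num_steps, propose_fn, value_fn,
                         num_candidates=4, num_active_paths=2, rng=None):
    """Steer a sampling path with tree search over active paths.

    propose_fn(x, t, k, rng) -> k candidate next states for path state x
                                (e.g. k stochastic denoising / flow steps)
    value_fn(x)              -> scalar score of a state; may be a
                                non-differentiable, black-box objective
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    paths = [x_init] * num_active_paths              # active tree paths
    for t in range(num_steps):
        # 1) Propose: expand every active path into several candidate next states.
        candidates = [c for x in paths for c in propose_fn(x, t, num_candidates, rng)]
        # 2) Evaluate: score every candidate with the guidance objective.
        scores = np.array([value_fn(c) for c in candidates])
        # 3) Select: keep the best-scoring candidates as the new active paths.
        keep = np.argsort(scores)[-num_active_paths:]
        paths = [candidates[i] for i in keep]
    return max(paths, key=value_fn)                  # best endpoint among active paths

# Toy 1-D usage: an unguided random walk steered toward the target value 3.0.
if __name__ == "__main__":
    propose = lambda x, t, k, rng: [x + 0.5 * rng.normal() for _ in range(k)]
    value = lambda x: -abs(x - 3.0)                  # non-differentiable objective
    print(tree_search_guidance(0.0, num_steps=50, propose_fn=propose, value_fn=value))
```

In this sketch, increasing num_candidates or num_active_paths spends more inference-time compute on exploration, which is the knob behind the inference-time scaling behavior the abstract reports.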
Related papers
- Decision Tree Induction Through LLMs via Semantically-Aware Evolution [53.0367886783772]
We propose an evolutionary optimization method for decision tree induction based on genetic programming (GP).
Our key innovation is the integration of semantic priors and domain-specific knowledge about the search space into the algorithm.
This is operationalized through novel genetic operators that work with structured natural language prompts.
arXiv Detail & Related papers (2025-03-18T12:52:03Z)
- Dynamic Search for Inference-Time Alignment in Diffusion Models [87.35944312589424]
We frame inference-time alignment in diffusion as a search problem and propose Dynamic Search for Diffusion (DSearch).
DSearch subsamples from denoising processes and approximates intermediate node rewards.
It also dynamically adjusts beam width and tree expansion to efficiently explore high-reward generations.
arXiv Detail & Related papers (2025-03-03T20:32:05Z)
- ViTree: Single-path Neural Tree for Step-wise Interpretable Fine-grained Visual Categorization [56.37520969273242]
We introduce ViTree, a novel approach for fine-grained visual categorization.
By traversing the tree paths, ViTree effectively selects patches from transformer-processed features to highlight informative local regions.
This patch and path selectivity enhances ViTree's interpretability, enabling better insight into the model's inner workings.
arXiv Detail & Related papers (2024-01-30T14:32:25Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938]
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
arXiv Detail & Related papers (2023-06-07T23:12:42Z)
- Latent Optimal Paths by Gumbel Propagation for Variational Bayesian Dynamic Programming [12.249274845167415]
We show the equivalence of the Gibbs distribution to a message-passing algorithm by the properties of the Gumbel distribution.
We propose the BDP-VAE which captures structured sparse optimal paths as latent variables.
arXiv Detail & Related papers (2023-06-05T03:47:59Z)
- GFlowCausal: Generative Flow Networks for Causal Discovery [27.51595081346858]
We propose a novel approach to learning a Directed Acyclic Graph (DAG) from observational data called GFlowCausal.
GFlowCausal aims to learn the best policy to generate high-reward DAGs by sequential actions with probabilities proportional to predefined rewards.
We conduct extensive experiments on both synthetic and real datasets; the results show that the proposed approach is superior and also performs well in large-scale settings.
arXiv Detail & Related papers (2022-10-15T04:07:39Z)
- AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning [43.06729402877713]
An important design component of GNN-based reasoning methods is called the propagation path.
We learn an adaptive propagation path in order to filter out irrelevant entities while preserving promising targets.
Our method is powerful, efficient, and semantic-aware.
arXiv Detail & Related papers (2022-05-30T14:00:59Z)
- Social Interpretable Tree for Pedestrian Trajectory Prediction [75.81745697967608]
We propose a tree-based method, termed as Social Interpretable Tree (SIT), to address this multi-modal prediction task.
A path in the tree from the root to leaf represents an individual possible future trajectory.
Despite relying on a hand-crafted tree, our method matches or exceeds the performance of state-of-the-art methods on the ETH-UCY and Stanford Drone datasets.
arXiv Detail & Related papers (2022-05-26T12:18:44Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.