A Unified Density Operator View of Flow Control and Merging
- URL: http://arxiv.org/abs/2602.08012v1
- Date: Sun, 08 Feb 2026 15:27:28 GMT
- Title: A Unified Density Operator View of Flow Control and Merging
- Authors: Riccardo De Santi, Malte Franke, Ya-Ping Hsieh, Andreas Krause
- Abstract summary: We introduce a unifying probability-space framework that subsumes both as limit cases, and enables reward-guided flow merging. We also introduce Reward-Guided Flow Merging (RFM), a mirror-descent scheme that reduces reward-guided flow merging to a sequence of standard fine-tuning problems. We provide first-of-their-kind theoretical guarantees for reward-guided and pure flow merging via RFM.
- Score: 37.902481322917396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent progress in large-scale flow and diffusion models has raised two fundamental algorithmic challenges: (i) control-based reward adaptation of pre-trained flows, and (ii) integration of multiple models, i.e., flow merging. While current approaches address them separately, we introduce a unifying probability-space framework that subsumes both as limit cases and enables reward-guided flow merging, allowing principled, task-aware combination of multiple pre-trained flows (e.g., merging priors while maximizing drug-discovery utilities). Our formulation makes it possible to express a rich family of operators over generative model densities, including intersection (e.g., to enforce safety), union (e.g., to compose diverse models), interpolation (e.g., for discovery), their reward-guided counterparts, as well as complex logical expressions via generative circuits. Next, we introduce Reward-Guided Flow Merging (RFM), a mirror-descent scheme that reduces reward-guided flow merging to a sequence of standard fine-tuning problems. Then, we provide first-of-their-kind theoretical guarantees for reward-guided and pure flow merging via RFM. Ultimately, we showcase the capabilities of the proposed method on illustrative settings providing visually interpretable insights, and apply our method to high-dimensional de-novo molecular design and low-energy conformer generation.
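The density operators named in the abstract (intersection, union, interpolation, and reward-guided tilting) can be illustrated on a discrete toy distribution. This is a minimal sketch under common conventions (product of densities for intersection, mixture for union, geometric mean for interpolation, exponential tilting as one entropy-regularized mirror-descent step); all function names and the discretization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normalize(w):
    """Renormalize nonnegative weights into a probability vector."""
    return w / w.sum()

def intersection(p, q):
    """Product-of-experts style AND: mass only where both models agree."""
    return normalize(p * q)

def union(p, q, alpha=0.5):
    """Mixture-style OR: compose the supports of the two models."""
    return normalize(alpha * p + (1 - alpha) * q)

def interpolation(p, q, t=0.5):
    """Geometric interpolation between the two densities, t in [0, 1]."""
    return normalize(p ** (1 - t) * q ** t)

def reward_tilt(p, reward, eta=1.0):
    """One entropy-regularized mirror-descent step: exponentially tilt
    the density toward high-reward regions."""
    return normalize(p * np.exp(eta * reward))

# Toy example on a 5-point support.
p = normalize(np.array([1.0, 2.0, 3.0, 2.0, 1.0]))
q = normalize(np.array([3.0, 2.0, 1.0, 2.0, 3.0]))
r = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # reward peaks at the center

# Reward-guided merging in this sketch: merge first, then tilt.
merged = reward_tilt(intersection(p, q), r, eta=2.0)
```

On continuous, high-dimensional densities these operators are intractable to evaluate directly, which is why a scheme like RFM that reduces them to fine-tuning problems is needed; the sketch only conveys the probability-space semantics.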
Related papers
- Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning [59.11663802446183]
Flow and diffusion generative models can be adapted to optimize task-specific objectives while preserving prior information. We introduce Flow Density Control (FDC), a simple algorithm that reduces this complex problem to a specific sequence of simpler fine-tuning tasks. We derive convergence guarantees for the proposed scheme under realistic assumptions by leveraging recent understanding of mirror flows.
arXiv Detail & Related papers (2025-11-27T17:19:01Z)
- Optimal Control Meets Flow Matching: A Principled Route to Multi-Subject Fidelity [35.95129874095729]
Text-to-image (T2I) models excel on single-entity prompts but struggle with multi-subject descriptions. We introduce the first theoretical framework with a principled, optimizable objective for steering sampling dynamics toward multi-subject fidelity.
arXiv Detail & Related papers (2025-10-02T17:59:58Z)
- A Theory of Multi-Agent Generative Flow Networks [65.53605277612444]
We propose a theoretical framework for multi-agent generative flow networks (MA-GFlowNets). MA-GFlowNets can be applied to multiple agents to generate objects collaboratively through a series of joint actions. Joint Flow training is based on a local-global principle, allowing a collection of (local) GFNs to be trained as a single (global) GFN.
arXiv Detail & Related papers (2025-09-24T04:01:21Z)
- FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities [76.46448367752944]
Multimodal large language models (MLLMs) unify visual understanding and image generation within a single framework. Most existing MLLMs rely on autoregressive (AR) architectures, which impose inherent limitations on future development. We introduce FUDOKI, a unified multimodal model purely based on discrete flow matching.
arXiv Detail & Related papers (2025-05-26T15:46:53Z)
- FlowDPS: Flow-Driven Posterior Sampling for Inverse Problems [51.99765487172328]
Posterior sampling for inverse problem solving can be effectively achieved using flows. Flow-Driven Posterior Sampling (FlowDPS) outperforms state-of-the-art alternatives.
arXiv Detail & Related papers (2025-03-11T07:56:14Z)
- Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization [14.320131946691268]
We propose an easy-to-use and theoretically sound fine-tuning method for flow-based generative models. By introducing an online reward-weighting mechanism, our approach guides the model to prioritize high-reward regions in the data manifold. Our method achieves optimal policy convergence while allowing controllable trade-offs between reward and diversity.
arXiv Detail & Related papers (2025-02-09T22:45:15Z)
- Multi-Agent Continuous Control with Generative Flow Networks [23.07260731600958]
Generative Flow Networks (GFlowNets) aim to generate diverse trajectories from a distribution in which the final states of the trajectories are proportional to the reward.
We propose a novel Multi-Agent generative Continuous Flow Networks (MACFN) method to enable multiple agents to perform cooperative exploration.
arXiv Detail & Related papers (2024-08-13T14:12:03Z)
- Combining Wasserstein-1 and Wasserstein-2 proximals: robust manifold learning via well-posed generative flows [6.799748192975493]
We formulate well-posed continuous-time generative flows for learning distributions supported on low-dimensional manifolds.
We show that the Wasserstein-1 proximal operator regularizes $f$-divergences so that singular distributions can be compared.
We also show that the Wasserstein-2 proximal operator regularizes the paths of the generative flows by adding an optimal transport cost.
arXiv Detail & Related papers (2024-07-16T16:34:31Z)
- Generative Flows with Invertible Attentions [135.23766216657745]
We introduce two types of invertible attention mechanisms for generative flow models.
We exploit split-based attention mechanisms to learn the attention weights and input representations on every two splits of flow feature maps.
Our method provides invertible attention modules with tractable Jacobian determinants, enabling seamless integration at any position of flow-based models.
arXiv Detail & Related papers (2021-06-07T20:43:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.