A theory of continuous generative flow networks
- URL: http://arxiv.org/abs/2301.12594v2
- Date: Thu, 25 May 2023 08:44:58 GMT
- Title: A theory of continuous generative flow networks
- Authors: Salem Lahlou, Tristan Deleu, Pablo Lemos, Dinghuai Zhang, Alexandra
Volokhova, Alex Hernández-García, Léna Néhale Ezzine, Yoshua Bengio,
Nikolay Malkin
- Abstract summary: Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions.
We present a theory for generalized GFlowNets, which encompasses both existing discrete GFlowNets and ones with continuous or hybrid state spaces.
- Score: 104.93913776866195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative flow networks (GFlowNets) are amortized variational inference
algorithms that are trained to sample from unnormalized target distributions
over compositional objects. A key limitation of GFlowNets to date has been
that they are restricted to discrete spaces. We present a theory for
generalized GFlowNets, which encompasses both existing discrete GFlowNets and
ones with continuous or hybrid state spaces, and perform experiments with two
goals in mind. First, we illustrate critical points of the theory and the
importance of various assumptions. Second, we empirically demonstrate how
observations about discrete GFlowNets transfer to the continuous case and show
strong results compared to non-GFlowNet baselines on several previously studied
tasks. This work greatly widens the perspectives for the application of
GFlowNets in probabilistic inference and various modeling settings.
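
To make the training setup concrete, the following is a minimal sketch of the trajectory balance (TB) objective in a continuous state space, one of the objectives the paper's theory generalizes: in the continuous setting, transition probabilities are replaced by transition densities, but the TB residual keeps the same form. Everything here (the names ContinuousGFN and log_reward, the toy reward, the Gaussian policies, the fixed trajectory length and backward policy) is an illustrative assumption, not code from the paper.

    # Hedged sketch: TB training of a continuous GFlowNet with Gaussian
    # forward policies over states in R^D and a fixed number of steps.
    import torch
    import torch.nn as nn

    D, T = 2, 5  # state dimension, number of generation steps

    def log_reward(x):
        # Toy unnormalized target: a mixture of two Gaussians on R^2.
        centers = torch.tensor([[-1.0, -1.0], [1.0, 1.0]])
        sq_dist = ((x[:, None, :] - centers[None]) ** 2).sum(-1)
        return torch.logsumexp(-0.5 * sq_dist / 0.3 ** 2, dim=1)

    class ContinuousGFN(nn.Module):
        def __init__(self):
            super().__init__()
            self.policy = nn.Sequential(
                nn.Linear(D, 64), nn.ELU(), nn.Linear(64, 2 * D))
            self.log_z = nn.Parameter(torch.zeros(()))  # learned log-partition

        def tb_loss(self, batch_size):
            s = torch.zeros(batch_size, D)    # trajectories start at s_0 = 0
            log_pf = torch.zeros(batch_size)  # summed forward log-densities
            log_pb = torch.zeros(batch_size)  # summed backward log-densities
            for t in range(T):
                mean, log_std = self.policy(s).chunk(2, dim=-1)
                pf_dist = torch.distributions.Normal(s + mean, log_std.exp())
                s_next = pf_dist.sample()
                log_pf = log_pf + pf_dist.log_prob(s_next).sum(-1)
                if t > 0:
                    # Fixed unit-Gaussian backward density; the backward step
                    # into the fixed s_0 is deterministic and contributes
                    # no density term.
                    pb_dist = torch.distributions.Normal(s_next, 1.0)
                    log_pb = log_pb + pb_dist.log_prob(s).sum(-1)
                s = s_next
            # TB: (log Z + log P_F(tau) - log R(x) - log P_B(tau | x))^2
            return ((self.log_z + log_pf - log_reward(s) - log_pb) ** 2).mean()

    model = ContinuousGFN()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        optim.zero_grad()
        model.tb_loss(batch_size=64).backward()
        optim.step()
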
Related papers
- Improving GFlowNets with Monte Carlo Tree Search [6.497027864860203]
Recent studies have revealed strong connections between GFlowNets and entropy-regularized reinforcement learning.
We propose to enhance the planning capabilities of GFlowNets by applying Monte Carlo Tree Search (MCTS).
Our experiments demonstrate that this approach improves the sample efficiency of GFlowNet training and the generation fidelity of pre-trained GFlowNet models.
arXiv Detail & Related papers (2024-06-19T15:58:35Z)
- Investigating Generalization Behaviours of Generative Flow Networks [3.4642376250601017]
We empirically verify some of the hypothesized mechanisms of generalization of GFlowNets.
We find that the functions GFlowNets learn to approximate have an implicit underlying structure that facilitates generalization.
We also find that GFlowNets are sensitive to being trained offline and off-policy; however, the reward implicitly learned by GFlowNets is robust to changes in the training distribution.
arXiv Detail & Related papers (2024-02-07T23:02:53Z)
- Expected flow networks in stochastic environments and two-player zero-sum games [63.98522423072093]
Generative flow networks (GFlowNets) are sequential sampling models trained to match a given distribution.
We propose expected flow networks (EFlowNets), which extend GFlowNets to stochastic environments.
We show that EFlowNets outperform other GFlowNet formulations in tasks such as protein design.
We then extend the concept of EFlowNets to adversarial environments, proposing adversarial flow networks (AFlowNets) for two-player zero-sum games.
arXiv Detail & Related papers (2023-10-04T12:50:29Z)
- CFlowNets: Continuous Control with Generative Flow Networks [23.093316128475564]
Generative flow networks (GFlowNets) can be used as an alternative to reinforcement learning for exploratory control tasks.
We propose generative continuous flow networks (CFlowNets) that can be applied to continuous control tasks.
arXiv Detail & Related papers (2023-03-04T14:37:47Z)
- Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z)
- GFlowNets and variational inference [64.22223306224903]
This paper builds bridges between two families of probabilistic algorithms: hierarchical variational inference (VI) and generative flow networks (GFlowNets).
We demonstrate that, in certain cases, VI algorithms are equivalent to special cases of GFlowNets in the sense of equality of expected gradients of their learning objectives.
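As an informal reminder of what this equivalence says (a paraphrase, not the paper's exact statement): when the backward policy is fixed and trajectories are sampled on-policy from the forward policy $P_F$, the expected gradient of the trajectory balance objective matches, up to a positive constant, the gradient of the reverse KL divergence minimized by hierarchical VI:

    \nabla_\theta \, \mathbb{E}_{\tau \sim P_F(\tau;\theta)}
      \left[ \mathcal{L}_{\mathrm{TB}}(\tau;\theta) \right]
    \;\propto\;
    \nabla_\theta \, D_{\mathrm{KL}}\!\left( P_F(\tau;\theta) \;\Big\|\;
      \frac{R(x)\,P_B(\tau \mid x)}{Z} \right)
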
arXiv Detail & Related papers (2022-10-02T17:41:01Z)
- Learning GFlowNets from partial episodes for improved convergence and stability [56.99229746004125]
Generative flow networks (GFlowNets) are algorithms for training a sequential sampler of discrete objects under an unnormalized target density.
Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory.
Inspired by the TD($\lambda$) algorithm in reinforcement learning, we introduce subtrajectory balance, or SubTB($\lambda$), a GFlowNet training objective that can learn from partial action subsequences of varying lengths.
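For reference, the SubTB($\lambda$) objective averages squared subtrajectory balance residuals over all subtrajectories of a sampled trajectory $\tau = (s_0, \dots, s_n)$, weighted geometrically by their length (written here from memory of that paper's definition; $F$ denotes the learned state flow):

    \mathcal{L}_{\mathrm{SubTB}(\lambda)}(\tau) =
    \frac{\sum_{0 \le i < j \le n} \lambda^{j-i}
      \left( \log \frac{F(s_i) \prod_{k=i}^{j-1} P_F(s_{k+1} \mid s_k)}
                       {F(s_j) \prod_{k=i}^{j-1} P_B(s_k \mid s_{k+1})} \right)^{\!2}}
         {\sum_{0 \le i < j \le n} \lambda^{j-i}}
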
arXiv Detail & Related papers (2022-09-26T15:44:24Z)
- Generative Flow Networks for Discrete Probabilistic Modeling [118.81967600750428]
We present energy-based generative flow networks (EB-GFN), a novel probabilistic modeling algorithm for high-dimensional discrete data.
We show how GFlowNets can approximately perform large-block Gibbs sampling to mix between modes.
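To illustrate the large-block move, here is a hedged sketch of a back-and-forth proposal with a Metropolis-Hastings correction: erase k generation steps with the backward policy, rebuild them with the forward policy, then accept or reject. All callables are placeholders for a trained model, the erased path is treated as the proposal's auxiliary variable, and this is a sketch of the idea, not the paper's implementation.

    import math
    import random

    def back_and_forth_step(x, k, sample_back, sample_fwd,
                            logp_back, logp_fwd, log_reward):
        # Propose: un-generate k steps of x, then re-generate k steps.
        partial = sample_back(x, k)
        x_new = sample_fwd(partial, k)
        # MH log-acceptance for the proposal x -> x_new through `partial`:
        # alpha = R(x') P_B(x' -> partial) P_F(partial -> x)
        #       / R(x)  P_B(x  -> partial) P_F(partial -> x')
        log_alpha = (log_reward(x_new)
                     + logp_back(x_new, partial, k)  # erase x_new -> partial
                     + logp_fwd(partial, x, k)       # rebuild partial -> x
                     - log_reward(x)
                     - logp_back(x, partial, k)      # erase x -> partial
                     - logp_fwd(partial, x_new, k))  # rebuild partial -> x_new
        if math.log(random.random()) < min(0.0, log_alpha):
            return x_new  # accept: one block move can jump between modes
        return x          # reject: keep the current sample
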
arXiv Detail & Related papers (2022-02-03T01:27:11Z)