Investigating Generalization Behaviours of Generative Flow Networks
- URL: http://arxiv.org/abs/2402.05309v1
- Date: Wed, 7 Feb 2024 23:02:53 GMT
- Title: Investigating Generalization Behaviours of Generative Flow Networks
- Authors: Lazar Atanackovic, Emmanuel Bengio
- Abstract summary: We empirically verify some of the hypothesized mechanisms of generalization of GFlowNets.
We find that the functions that GFlowNets learn to approximate have an implicit underlying structure that facilitates generalization.
We also find that GFlowNets are sensitive to being trained offline and off-policy; however, the reward implicitly learned by GFlowNets is robust to changes in the training distribution.
- Score: 3.4642376250601017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Flow Networks (GFlowNets, GFNs) are a generative framework for
learning unnormalized probability mass functions over discrete spaces. Since
their inception, GFlowNets have proven to be useful for learning generative
models in applications where the majority of the discrete space is unvisited
during training. This has inspired some to hypothesize that GFlowNets, when
paired with deep neural networks (DNNs), have favourable generalization
properties. In this work, we empirically verify some of the hypothesized
mechanisms of generalization of GFlowNets. In particular, we find that the
functions that GFlowNets learn to approximate have an implicit underlying
structure that facilitates generalization. We also find that GFlowNets are
sensitive to being trained offline and off-policy; however, the reward
implicitly learned by GFlowNets is robust to changes in the training
distribution.
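As a toy illustration of the object a GFlowNet is trained to approximate (all names and the reward function below are hypothetical, chosen only for the sketch), the following enumerates a small discrete space, defines an unnormalized reward R(x), and samples from the target distribution p(x) = R(x)/Z that a perfectly trained GFlowNet would match:

```python
import random
from itertools import product

# Toy discrete space: binary strings of length 4 (16 objects).
space = ["".join(bits) for bits in product("01", repeat=4)]

def reward(x: str) -> float:
    # Arbitrary unnormalized reward, made up for this sketch:
    # favour strings containing more 1s.
    return 1.0 + x.count("1") ** 2

# A perfectly trained GFlowNet samples x with probability R(x) / Z.
Z = sum(reward(x) for x in space)
target = {x: reward(x) / Z for x in space}

def sample(rng: random.Random) -> str:
    # Direct sampling from the target distribution; an actual GFlowNet
    # instead builds x through a sequence of constructive actions and
    # only matches this distribution once training has converged.
    return rng.choices(space, weights=[target[x] for x in space], k=1)[0]

rng = random.Random(0)
draws = [sample(rng) for _ in range(10_000)]
```

Note that this sketch sidesteps the hard part, which is the subject of the papers below: learning a sequential sampler that achieves p(x) ∝ R(x) without ever computing Z, and generalizing to the majority of the space never visited during training.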
Related papers
- On Generalization for Generative Flow Networks [54.20924253330039]
Generative Flow Networks (GFlowNets) have emerged as an innovative learning paradigm designed to address the challenge of sampling from an unnormalized probability distribution.
This paper attempts to formalize generalization in the context of GFlowNets, to link generalization with stability, and also to design experiments that assess the capacity of these models to uncover unseen parts of the reward function.
arXiv Detail & Related papers (2024-07-03T13:42:21Z)
- Improving GFlowNets with Monte Carlo Tree Search [6.497027864860203]
Recent studies have revealed strong connections between GFlowNets and entropy-regularized reinforcement learning.
We propose to enhance the planning capabilities of GFlowNets by applying Monte Carlo Tree Search (MCTS).
Our experiments demonstrate that this approach improves the sample efficiency of GFlowNet training and the generation fidelity of pre-trained GFlowNet models.
arXiv Detail & Related papers (2024-06-19T15:58:35Z)
- Evolution Guided Generative Flow Networks [11.609895436955242]
Generative Flow Networks (GFlowNets) learn to sample compositional objects proportional to their rewards.
One big challenge of GFlowNets is training them effectively when dealing with long time horizons and sparse rewards.
We propose Evolution guided generative flow networks (EGFN), a simple but powerful augmentation to GFlowNet training using evolutionary algorithms (EA).
arXiv Detail & Related papers (2024-02-03T15:28:53Z)
- Expected flow networks in stochastic environments and two-player zero-sum games [63.98522423072093]
Generative flow networks (GFlowNets) are sequential sampling models trained to match a given distribution.
We propose expected flow networks (EFlowNets), which extend GFlowNets to stochastic environments.
We show that EFlowNets outperform other GFlowNet formulations in tasks such as protein design.
We then extend the concept of EFlowNets to adversarial environments, proposing adversarial flow networks (AFlowNets) for two-player zero-sum games.
arXiv Detail & Related papers (2023-10-04T12:50:29Z)
- Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z)
- Distributional GFlowNets with Quantile Flows [73.73721901056662]
Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers where an agent learns a policy for generating complex structure through a series of decision-making steps.
In this work, we adopt a distributional paradigm for GFlowNets, turning each flow function into a distribution, thus providing more informative learning signals during training.
Our proposed quantile matching GFlowNet learning algorithm is able to learn a risk-sensitive policy, an essential component for handling scenarios with risk uncertainty.
arXiv Detail & Related papers (2023-02-11T22:06:17Z)
- A theory of continuous generative flow networks [104.93913776866195]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions.
We present a theory for generalized GFlowNets, which encompasses both existing discrete GFlowNets and ones with continuous or hybrid state spaces.
arXiv Detail & Related papers (2023-01-30T00:37:56Z)
- Learning GFlowNets from partial episodes for improved convergence and stability [56.99229746004125]
Generative flow networks (GFlowNets) are algorithms for training a sequential sampler of discrete objects under an unnormalized target density.
Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory.
Inspired by the TD(λ) algorithm in reinforcement learning, we introduce subtrajectory balance or SubTB(λ), a GFlowNet training objective that can learn from partial action subsequences of varying lengths.
arXiv Detail & Related papers (2022-09-26T15:44:24Z)
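For reference, the subtrajectory-balance objective summarized in the entry above takes roughly the following form over a trajectory τ = (s_0, …, s_n), where F is a learned state flow function and P_F, P_B are the forward and backward policies; this is a sketch reconstructed from the description, and the exact weighting may differ from the paper:

```latex
\mathcal{L}_{\mathrm{SubTB}(\lambda)}(\tau) =
  \frac{\displaystyle\sum_{0 \le i < j \le n} \lambda^{\,j-i}
        \left( \log
          \frac{F(s_i) \prod_{k=i}^{j-1} P_F(s_{k+1} \mid s_k)}
               {F(s_j) \prod_{k=i}^{j-1} P_B(s_k \mid s_{k+1})}
        \right)^{2}}
       {\displaystyle\sum_{0 \le i < j \le n} \lambda^{\,j-i}}
```

Each term enforces balance over one subtrajectory s_i → s_j, and λ interpolates between purely local (transition-level) and full-trajectory objectives, analogously to TD(λ).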
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.