Learning GFlowNets from partial episodes for improved convergence and
stability
- URL: http://arxiv.org/abs/2209.12782v3
- Date: Sat, 3 Jun 2023 17:39:46 GMT
- Title: Learning GFlowNets from partial episodes for improved convergence and
stability
- Authors: Kanika Madan, Jarrid Rector-Brooks, Maksym Korablyov, Emmanuel Bengio,
Moksh Jain, Andrei Nica, Tom Bosc, Yoshua Bengio, Nikolay Malkin
- Abstract summary: Generative flow networks (GFlowNets) are algorithms for training a sequential sampler of discrete objects under an unnormalized target density.
Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory.
Inspired by the TD($\lambda$) algorithm in reinforcement learning, we introduce subtrajectory balance or SubTB($\lambda$), a GFlowNet training objective that can learn from partial action subsequences of varying lengths.
- Score: 56.99229746004125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative flow networks (GFlowNets) are a family of algorithms for training
a sequential sampler of discrete objects under an unnormalized target density
and have been successfully used for various probabilistic modeling tasks.
Existing training objectives for GFlowNets are either local to states or
transitions, or propagate a reward signal over an entire sampling trajectory.
We argue that these alternatives represent opposite ends of a gradient
bias-variance tradeoff and propose a way to exploit this tradeoff to mitigate
its harmful effects. Inspired by the TD($\lambda$) algorithm in reinforcement
learning, we introduce subtrajectory balance or SubTB($\lambda$), a GFlowNet
training objective that can learn from partial action subsequences of varying
lengths. We show that SubTB($\lambda$) accelerates sampler convergence in
previously studied and new environments and enables training GFlowNets in
environments with longer action sequences and sparser reward landscapes than
what was possible before. We also perform a comparative analysis of stochastic
gradient dynamics, shedding light on the bias-variance tradeoff in GFlowNet
training and the advantages of subtrajectory balance.
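To make the objective concrete, here is a minimal PyTorch sketch of the SubTB($\lambda$) loss for one complete trajectory, assuming log-space estimates of the state flows and both policies are already computed; the function and argument names are illustrative, not taken from the paper's code.

```python
import torch

def subtb_lambda_loss(log_F, log_PF, log_PB, lam=0.9):
    """SubTB(lambda) loss for one complete trajectory s_0 -> ... -> s_n.

    log_F  : (n+1,) log state-flow estimates along the trajectory; the
             terminal entry log_F[n] is taken to be log R(x).
    log_PF : (n,) log forward-policy probabilities of the actions taken.
    log_PB : (n,) log backward-policy probabilities of the reverse steps.
    lam    : geometric weight on subtrajectory length (value illustrative).
    """
    n = log_PF.shape[0]
    # Prefix sums give the log-product over any step range in O(1).
    cum_PF = torch.cat([torch.zeros(1), log_PF.cumsum(0)])
    cum_PB = torch.cat([torch.zeros(1), log_PB.cumsum(0)])
    total, weight = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            # Balance condition on the subtrajectory s_i -> ... -> s_j:
            # log F(s_i) + sum log P_F = log F(s_j) + sum log P_B.
            delta = (log_F[i] + cum_PF[j] - cum_PF[i]
                     - log_F[j] - cum_PB[j] + cum_PB[i])
            w = lam ** (j - i)
            total = total + w * delta ** 2
            weight += w
    return total / weight
```

Trajectory balance and detailed balance emerge as the two extremes of this weighting: the single full-length subtrajectory on one end, and the set of length-1 subtrajectories on the other.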
Related papers
- On Divergence Measures for Training GFlowNets [3.7277730514654555]
Generative Flow Networks (GFlowNets) are amortized inference models designed to sample from unnormalized distributions over composable objects.
Traditionally, the training procedure for GFlowNets seeks to minimize the expected log-squared difference between a proposal (forward policy) and a target (backward policy) distribution.
We review four divergence measures, namely Rényi-$\alpha$, Tsallis-$\alpha$, and the reverse and forward KL divergences, and design statistically efficient estimators for their gradients in the context of training GFlowNets.
arXiv Detail & Related papers (2024-10-12T03:46:52Z)
- Looking Backward: Retrospective Backward Synthesis for Goal-Conditioned GFlowNets [27.33222647437964]
Generative Flow Networks (GFlowNets) are amortized sampling methods for learning a policy to sequentially generate objects with probabilities proportional to their rewards.
GFlowNets exhibit a remarkable ability to generate diverse sets of high-reward objects, in contrast to standard reinforcement learning approaches.
Recent work has explored goal-conditioned GFlowNets, aiming to train a single GFlowNet capable of achieving different goals as the task specifies.
We propose a novel method named Retrospective Backward Synthesis (RBS) to address the challenges of this setting. Specifically, RBS synthesizes new backward trajectories to enrich training trajectories with improved quality and diversity.
arXiv Detail & Related papers (2024-06-03T09:44:10Z)
- Evolution Guided Generative Flow Networks [11.609895436955242]
Generative Flow Networks (GFlowNets) learn to sample compositional objects proportional to their rewards.
A major challenge for GFlowNets is training them effectively over long time horizons and under sparse rewards.
We propose Evolution Guided Generative Flow Networks (EGFN), a simple but powerful augmentation of GFlowNet training using evolutionary algorithms (EA).
arXiv Detail & Related papers (2024-02-03T15:28:53Z)
- Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that Thompson sampling over GFlowNet policies (TS-GFN) yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
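The summary above does not spell out the mechanism, so the following sketch only illustrates the general Thompson-sampling pattern with an ensemble as an approximate posterior over policies; the class and all names here are hypothetical and may differ from the paper's actual parameterization.

```python
import torch
import torch.nn as nn

class EnsemblePolicy(nn.Module):
    """Shared trunk with K policy heads; Thompson-style exploration
    samples one head per episode and acts according to it."""

    def __init__(self, state_dim, n_actions, k=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, n_actions) for _ in range(k))

    def sample_head(self):
        # One posterior sample = one head, kept fixed for the whole episode.
        return int(torch.randint(len(self.heads), (1,)))

    def action_logits(self, state, head):
        return self.heads[head](self.trunk(state))
```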
arXiv Detail & Related papers (2023-06-30T14:19:44Z)
- Towards Understanding and Improving GFlowNet Training [71.85707593318297]
We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution.
We propose prioritized replay training of high-reward $x$, relative edge flow policy parametrization, and a novel guided trajectory balance objective.
arXiv Detail & Related papers (2023-05-11T22:50:41Z)
- Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z)
- Distributional GFlowNets with Quantile Flows [73.73721901056662]
Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers where an agent learns a policy for generating complex structure through a series of decision-making steps.
In this work, we adopt a distributional paradigm for GFlowNets, turning each flow function into a distribution, thus providing more informative learning signals during training.
Our proposed quantile matching GFlowNet learning algorithm is able to learn a risk-sensitive policy, an essential component for handling scenarios with risk uncertainty.
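Quantile-based distributional training typically rests on the standard quantile-regression (pinball) loss; below is a minimal sketch under that assumption, with illustrative names rather than the paper's code.

```python
import torch

def quantile_loss(pred_quantiles, target, taus):
    """Pinball loss rho_tau(u) = u * (tau - 1[u < 0]), averaged over
    quantile levels taus in (0, 1)."""
    u = target - pred_quantiles          # (K,) residuals
    return (u * (taus - (u < 0).float())).mean()
```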
arXiv Detail & Related papers (2023-02-11T22:06:17Z)
- Trajectory balance: Improved credit assignment in GFlowNets [63.687669765579585]
We find previously proposed learning objectives for GFlowNets, flow matching and detailed balance, to be prone to inefficient credit propagation across long action sequences.
We propose a new learning objective for GFlowNets, trajectory balance, as a more efficient alternative to previously used objectives.
In experiments on four distinct domains, we empirically demonstrate the benefits of the trajectory balance objective for GFlowNet convergence, diversity of generated samples, and robustness to long action sequences and large action spaces.
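For reference, here is a minimal sketch of the trajectory balance objective that SubTB($\lambda$) generalizes, with illustrative argument names; $Z$ is the learned total-flow (partition function) estimate.

```python
import torch

def tb_loss(log_Z, log_PF, log_PB, log_R):
    """Trajectory balance for one trajectory ending in object x:
    (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1}))^2."""
    return (log_Z + log_PF.sum() - log_R - log_PB.sum()) ** 2
```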
arXiv Detail & Related papers (2022-01-31T14:07:49Z)