Generative Augmented Flow Networks
- URL: http://arxiv.org/abs/2210.03308v1
- Date: Fri, 7 Oct 2022 03:33:56 GMT
- Title: Generative Augmented Flow Networks
- Authors: Ling Pan and Dinghuai Zhang and Aaron Courville and Longbo Huang and
Yoshua Bengio
- Abstract summary: We propose Generative Augmented Flow Networks (GAFlowNets) to incorporate intermediate rewards into GFlowNets.
GAFlowNets can leverage edge-based and state-based intrinsic rewards in a joint way to improve exploration.
- Score: 88.50647244459009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Generative Flow Network is a probabilistic framework where an agent
learns a stochastic policy for object generation, such that the probability of
generating an object is proportional to a given reward function. Its
effectiveness has been shown in discovering high-quality and diverse solutions,
compared to reward-maximizing reinforcement learning-based methods.
Nonetheless, GFlowNets learn only from the rewards of terminal states, which
can limit their applicability. Indeed, intermediate rewards play a critical
role in learning; intrinsic motivation, for example, can provide intermediate
feedback even in particularly challenging sparse-reward tasks. Inspired by
this, we propose Generative Augmented Flow Networks (GAFlowNets), a novel
learning framework to incorporate intermediate rewards into GFlowNets. We
specify intermediate rewards by intrinsic motivation to tackle the exploration
problem in sparse reward environments. GAFlowNets can leverage edge-based and
state-based intrinsic rewards in a joint way to improve exploration. Based on
extensive experiments on the GridWorld task, we demonstrate the effectiveness
and efficiency of GAFlowNet in terms of convergence, performance, and diversity
of solutions. We further show that GAFlowNet is scalable to a more complex and
large-scale molecule generation domain, where it achieves consistent and
significant performance improvement.
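The two core ideas above — that a GFlowNet samples each object with probability proportional to its reward, and that adding intrinsic intermediate rewards reshapes the policy toward exploration — can be sketched on a toy tree-structured state space. This is a minimal illustration under invented assumptions (the binary-string environment, the reward function, and the scalar `intrinsic` edge bonus are all made up for exposition), not the paper's actual training objective:

```python
import itertools

# Toy tree: states are binary prefixes; terminals are strings of length 3.
def reward(x):
    return x.count("1") + 0.1  # invented terminal reward, made sparse-ish

def children(s, depth=3):
    return [] if len(s) == depth else [s + "0", s + "1"]

def flow(s, intrinsic=0.0):
    """Exact state flow by backward induction on the tree:
    F(terminal) = R(terminal);  F(s) = sum_child [intrinsic + F(child)].
    The `intrinsic` term mimics an edge-based intermediate reward."""
    kids = children(s)
    if not kids:
        return reward(s)
    return sum(intrinsic + flow(c, intrinsic) for c in kids)

def terminal_prob(x, intrinsic=0.0):
    """Probability of sampling terminal x under the flow-induced policy
    P(child | s) = (intrinsic + F(child)) / F(s)."""
    p, s = 1.0, ""
    for ch in x:
        nxt = s + ch
        p *= (intrinsic + flow(nxt, intrinsic)) / flow(s, intrinsic)
        s = nxt
    return p

# With no intermediate reward, P(x) is exactly proportional to R(x):
terminals = ["".join(b) for b in itertools.product("01", repeat=3)]
Z = sum(reward(x) for x in terminals)
for x in terminals:
    assert abs(terminal_prob(x) - reward(x) / Z) < 1e-9

# A positive intrinsic edge reward pulls the policy toward uniform,
# i.e. toward exploration, which is the effect exploited during training.
print(terminal_prob("111"), terminal_prob("111", intrinsic=1.0))
```

Here the flows are computed exactly by enumeration, which is only possible on a toy tree; in practice a GFlowNet learns an approximation of these flows with a neural network.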
Related papers
- On Generalization for Generative Flow Networks [54.20924253330039]
Generative Flow Networks (GFlowNets) have emerged as an innovative learning paradigm designed to address the challenge of sampling from an unnormalized probability distribution.
This paper attempts to formalize generalization in the context of GFlowNets, to link generalization with stability, and also to design experiments that assess the capacity of these models to uncover unseen parts of the reward function.
arXiv Detail & Related papers (2024-07-03T13:42:21Z) - Baking Symmetry into GFlowNets [58.932776403471635]
GFlowNets have exhibited promising performance in generating diverse candidates with high rewards.
This study aims to integrate symmetries into GFlowNets by identifying equivalent actions during the generation process.
arXiv Detail & Related papers (2024-06-08T10:11:10Z) - Looking Backward: Retrospective Backward Synthesis for Goal-Conditioned GFlowNets [27.33222647437964]
Generative Flow Networks (GFlowNets) are amortized sampling methods for learning a policy to sequentially generate objects with probabilities proportional to their rewards.
GFlowNets exhibit a remarkable ability to generate diverse sets of high-reward objects, in contrast to standard reinforcement learning approaches.
Recent works have arisen for learning goal-conditioned GFlowNets to acquire various useful properties, aiming to train a single GFlowNet capable of achieving different goals as the task specifies.
We propose a novel method named Retrospective Backward Synthesis (RBS) to address these challenges. Specifically, RBS synthesizes new backward trajectories.
arXiv Detail & Related papers (2024-06-03T09:44:10Z) - Evolution Guided Generative Flow Networks [11.609895436955242]
Generative Flow Networks (GFlowNets) learn to sample compositional objects proportional to their rewards.
One big challenge of GFlowNets is training them effectively when dealing with long time horizons and sparse rewards.
We propose Evolution Guided Generative Flow Networks (EGFN), a simple but powerful augmentation to GFlowNet training using evolutionary algorithms (EA).
arXiv Detail & Related papers (2024-02-03T15:28:53Z) - Pre-Training and Fine-Tuning Generative Flow Networks [61.90529626590415]
We introduce a novel approach for reward-free pre-training of GFlowNets.
By framing the training as a self-supervised problem, we propose an outcome-conditioned GFlowNet that learns to explore the candidate space.
We show that the pre-trained OC-GFN model can allow for a direct extraction of a policy capable of sampling from any new reward functions in downstream tasks.
arXiv Detail & Related papers (2023-10-05T09:53:22Z) - Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z) - Distributional GFlowNets with Quantile Flows [73.73721901056662]
Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers where an agent learns a policy for generating complex structure through a series of decision-making steps.
In this work, we adopt a distributional paradigm for GFlowNets, turning each flow function into a distribution, thus providing more informative learning signals during training.
Our proposed quantile matching GFlowNet learning algorithm is able to learn a risk-sensitive policy, an essential component for handling scenarios with risk uncertainty.
arXiv Detail & Related papers (2023-02-11T22:06:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.