Distributional GFlowNets with Quantile Flows
- URL: http://arxiv.org/abs/2302.05793v3
- Date: Sat, 17 Feb 2024 16:11:17 GMT
- Title: Distributional GFlowNets with Quantile Flows
- Authors: Dinghuai Zhang, Ling Pan, Ricky T. Q. Chen, Aaron Courville, Yoshua
Bengio
- Abstract summary: Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers where an agent learns a policy for generating complex structure through a series of decision-making steps.
In this work, we adopt a distributional paradigm for GFlowNets, turning each flow function into a distribution, thus providing more informative learning signals during training.
Our proposed quantile matching GFlowNet learning algorithm is able to learn a risk-sensitive policy, an essential component for handling scenarios with risk uncertainty.
- Score: 73.73721901056662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Flow Networks (GFlowNets) are a new family of probabilistic
samplers where an agent learns a stochastic policy for generating complex
combinatorial structure through a series of decision-making steps. Despite
being inspired by reinforcement learning, the current GFlowNet framework is
relatively limited in its applicability and cannot handle stochasticity in the
reward function. In this work, we adopt a distributional paradigm for
GFlowNets, turning each flow function into a distribution, thus providing more
informative learning signals during training. By parameterizing each edge flow
through its quantile function, our proposed \textit{quantile matching}
GFlowNet learning algorithm is able to learn a risk-sensitive policy, an
essential component for handling scenarios with risk uncertainty. Moreover, we
find that the distributional approach can achieve substantial improvement on
existing benchmarks compared to prior methods due to our enhanced training
algorithm, even in settings with deterministic rewards.
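The abstract's central idea, representing each flow as a distribution via its quantile function, builds on quantile regression as used in distributional RL (e.g. QR-DQN). Below is a minimal numpy sketch of a quantile-regression Huber loss fitted by gradient descent; the flow network itself is omitted, the target here is a toy Gaussian reward, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def quantile_huber_loss(theta, samples, taus, kappa=1.0):
    """Quantile-regression Huber loss (QR-DQN style).

    theta:   (N,) current quantile estimates at levels taus
    samples: (M,) draws from the target distribution
    taus:    (N,) quantile levels in (0, 1)
    Returns the scalar loss and its gradient w.r.t. theta.
    """
    u = samples[None, :] - theta[:, None]            # (N, M) residuals
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    loss = (weight * huber).mean()
    grad = (weight * (-np.clip(u, -kappa, kappa))).mean(axis=1)
    return loss, grad

# Fit 5 quantiles of a noisy reward R ~ N(1.0, 0.5) by gradient descent.
rng = np.random.default_rng(0)
taus = (np.arange(5) + 0.5) / 5
theta = np.zeros(5)
for _ in range(2000):
    batch = rng.normal(loc=1.0, scale=0.5, size=64)
    _, g = quantile_huber_loss(theta, batch, taus)
    theta -= 0.05 * g
# theta now approximates the quantiles of N(1.0, 0.5), increasing in tau.
```

Once such quantile estimates are available, risk-sensitive behavior falls out naturally: acting on a low quantile is conservative, acting on a high quantile is optimistic.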
Related papers
- On Generalization for Generative Flow Networks [54.20924253330039]
Generative Flow Networks (GFlowNets) have emerged as an innovative learning paradigm designed to address the challenge of sampling from an unnormalized probability distribution.
This paper attempts to formalize generalization in the context of GFlowNets, to link generalization with stability, and also to design experiments that assess the capacity of these models to uncover unseen parts of the reward function.
arXiv Detail & Related papers (2024-07-03T13:42:21Z)
- Generative Flow Networks as Entropy-Regularized RL [4.857649518812728]
Generative flow networks (GFlowNets) are a method of training a policy to sample compositional objects, via a sequence of actions, with probabilities proportional to a given reward.
We demonstrate how the task of learning a generative flow network can be efficiently recast as an entropy-regularized reinforcement learning problem.
Contrary to previously reported results, we show that entropic RL approaches can be competitive against established GFlowNet training methods.
arXiv Detail & Related papers (2023-10-19T17:31:40Z)
- Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
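For intuition on the posterior-sampling idea that TS-GFN applies to a GFlowNet's action policy, here is the classic Bernoulli-bandit form of Thompson sampling in numpy; the arm probabilities are illustrative and this is not the paper's algorithm, which maintains an ensemble of sampling policies instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bernoulli bandit with unknown success rates (illustrative values).
true_p = np.array([0.2, 0.5, 0.8])
alpha = np.ones(3)   # Beta posterior parameters, one pair per arm
beta = np.ones(3)

for _ in range(3000):
    samples = rng.beta(alpha, beta)   # one posterior draw per arm
    a = int(np.argmax(samples))       # act greedily w.r.t. the draw
    r = rng.random() < true_p[a]      # observe a Bernoulli reward
    alpha[a] += r                     # conjugate posterior update
    beta[a] += 1 - r

best = int(np.argmax(alpha / (alpha + beta)))   # identified best arm
```

Exploration here comes entirely from posterior uncertainty: arms with few pulls produce high-variance draws and keep getting tried until the posterior concentrates.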
arXiv Detail & Related papers (2023-06-30T14:19:44Z)
- Generative Flow Networks for Precise Reward-Oriented Active Learning on Graphs [34.76241250013461]
We formulate the graph active learning problem as a generative process, named GFlowGNN, which generates various samples through sequential actions.
We show that the proposed approach has good exploration capability and transferability, outperforming various state-of-the-art methods.
arXiv Detail & Related papers (2023-04-24T10:47:08Z)
- Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z)
- Generative Augmented Flow Networks [88.50647244459009]
We propose Generative Augmented Flow Networks (GAFlowNets) to incorporate intermediate rewards into GFlowNets.
GAFlowNets can leverage edge-based and state-based intrinsic rewards in a joint way to improve exploration.
arXiv Detail & Related papers (2022-10-07T03:33:56Z)
- Learning GFlowNets from partial episodes for improved convergence and stability [56.99229746004125]
Generative flow networks (GFlowNets) are algorithms for training a sequential sampler of discrete objects under an unnormalized target density.
Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory.
Inspired by the TD($\lambda$) algorithm in reinforcement learning, we introduce subtrajectory balance or SubTB($\lambda$), a GFlowNet training objective that can learn from partial action subsequences of varying lengths.
arXiv Detail & Related papers (2022-09-26T15:44:24Z)
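The SubTB($\lambda$) objective averages squared balance violations over all subtrajectories of a sampled trajectory, weighting each by $\lambda$ raised to its length. A minimal numpy sketch for a single trajectory follows; the log-flow and log-policy inputs are assumed given, and the function name and shapes are illustrative rather than taken from the paper's code.

```python
import numpy as np

def subtb_lambda_loss(log_F, log_PF, log_PB, lam=0.9):
    """Lambda-weighted average of squared subtrajectory balance violations.

    log_F:  (T+1,) log state flows along the trajectory s_0 .. s_T
    log_PF: (T,)   log forward-policy probabilities per transition
    log_PB: (T,)   log backward-policy probabilities per transition
    """
    T = len(log_PF)
    cum_F = np.concatenate([[0.0], np.cumsum(log_PF)])
    cum_B = np.concatenate([[0.0], np.cumsum(log_PB)])
    num, den = 0.0, 0.0
    for i in range(T):
        for j in range(i + 1, T + 1):
            # Balance: log F(s_i) + sum log PF = log F(s_j) + sum log PB
            delta = (log_F[i] + (cum_F[j] - cum_F[i])
                     - log_F[j] - (cum_B[j] - cum_B[i]))
            w = lam ** (j - i)
            num += w * delta ** 2
            den += w
    return num / den

# A trajectory whose flows and policies satisfy balance exactly:
log_F = np.array([0.0, -1.0, -2.0])
log_PF = np.array([-1.0, -1.0])
log_PB = np.array([0.0, 0.0])
loss = subtb_lambda_loss(log_F, log_PF, log_PB)   # exactly 0 when balanced
```

Because every subtrajectory contributes, the loss propagates credit over both short local segments and the full trajectory, which is what the paper credits for improved convergence and stability.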
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.