Learning to Scale Logits for Temperature-Conditional GFlowNets
- URL: http://arxiv.org/abs/2310.02823v3
- Date: Sun, 2 Jun 2024 05:07:36 GMT
- Title: Learning to Scale Logits for Temperature-Conditional GFlowNets
- Authors: Minsu Kim, Joohwan Ko, Taeyoung Yun, Dinghuai Zhang, Ling Pan, Woochang Kim, Jinkyoo Park, Emmanuel Bengio, Yoshua Bengio
- Abstract summary: We propose Logit-scaling GFlowNets (Logit-GFN), a novel architectural design that greatly accelerates the training of temperature-conditional GFlowNets.
We find that the challenge is greatly reduced if a learned function of the temperature is used to scale the policy's logits directly.
- Score: 77.36931187299896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: GFlowNets are probabilistic models that sequentially generate compositional structures through a stochastic policy. Among GFlowNets, temperature-conditional GFlowNets can introduce temperature-based controllability for exploration and exploitation. We propose Logit-scaling GFlowNets (Logit-GFN), a novel architectural design that greatly accelerates the training of temperature-conditional GFlowNets. It is based on the idea that previously proposed approaches introduce numerical challenges into deep network training, since different temperatures may give rise to very different gradient profiles as well as magnitudes of the policy's logits. We find that the challenge is greatly reduced if a learned function of the temperature is used to scale the policy's logits directly. Also, using Logit-GFN, GFlowNets can be improved by having better generalization capabilities in offline learning and mode discovery capabilities in online learning, which is empirically verified in various biological and chemical tasks. Our code is available at https://github.com/dbsxodud-11/logit-gfn
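As a minimal sketch of the core idea, assuming a PyTorch setup: a temperature-independent trunk produces raw logits, and a small learned network maps the inverse temperature to a positive factor that rescales those logits. Module names, layer sizes, and the softplus scaler are illustrative assumptions, not the authors' exact architecture; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn

class LogitScaledPolicy(nn.Module):
    """Temperature-conditional GFlowNet policy in the spirit of Logit-GFN:
    a learned scalar function of the inverse temperature rescales the
    policy's logits directly, instead of feeding the temperature into
    the network trunk. (Illustrative sketch, not the authors' code.)"""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        # Temperature-independent trunk producing raw logits.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )
        # Small learned map from inverse temperature to a positive scale.
        self.scaler = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, state: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
        logits = self.trunk(state)               # (batch, num_actions)
        scale = self.scaler(beta.unsqueeze(-1))  # (batch, 1), positive
        return scale * logits                    # temperature-scaled logits

# Usage: action probabilities at a chosen inverse temperature.
policy = LogitScaledPolicy(state_dim=32, num_actions=8)
state = torch.randn(4, 32)
beta = torch.full((4,), 2.0)
probs = torch.softmax(policy(state, beta), dim=-1)
```

Because only the final scale depends on the temperature, the gradient profile of the trunk stays comparable across temperatures, which is the numerical benefit the abstract describes.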
Related papers
- Improving GFlowNets with Monte Carlo Tree Search [6.497027864860203]
Recent studies have revealed strong connections between GFlowNets and entropy-regularized reinforcement learning.
We propose to enhance the planning capabilities of GFlowNets by applying Monte Carlo Tree Search (MCTS).
Our experiments demonstrate that this approach improves the sample efficiency of GFlowNet training and the generation fidelity of pre-trained GFlowNet models.
arXiv Detail & Related papers (2024-06-19T15:58:35Z)
- Investigating Generalization Behaviours of Generative Flow Networks [3.4642376250601017]
We empirically verify some of the hypothesized mechanisms of generalization of GFlowNets.
We find that the functions GFlowNets learn to approximate have an implicit underlying structure which facilitates generalization.
We also find that GFlowNets are sensitive to being trained offline and off-policy; however, the reward implicitly learned by GFlowNets is robust to changes in the training distribution.
arXiv Detail & Related papers (2024-02-07T23:02:53Z)
- Learning Energy Decompositions for Partial Inference of GFlowNets [34.209530834968206]
We study generative flow networks (GFlowNets) to sample objects from the Boltzmann energy distribution via a sequence of actions.
In particular, we focus on improving GFlowNets with partial inference: training flow functions on evaluations of intermediate states or transitions.
arXiv Detail & Related papers (2023-10-05T04:02:36Z)
- Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z)
- Distributional GFlowNets with Quantile Flows [73.73721901056662]
Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers where an agent learns a policy for generating complex structures through a series of decision-making steps.
In this work, we adopt a distributional paradigm for GFlowNets, turning each flow function into a distribution, thus providing more informative learning signals during training.
Our proposed quantile matching GFlowNet learning algorithm is able to learn a risk-sensitive policy, an essential component for handling scenarios with risk uncertainty.
arXiv Detail & Related papers (2023-02-11T22:06:17Z)
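The distributional treatment described in the entry above belongs to the family of quantile-regression losses used in distributional RL. A hedged sketch of that style of objective, with shapes, names, and the Huber smoothing as assumptions rather than the paper's exact loss:

```python
import torch

def quantile_huber_loss(pred_quantiles, target, taus, kappa=1.0):
    """Quantile-regression Huber loss in the style of distributional RL;
    a quantile-matching GFlowNet applies this kind of loss to learn a
    distribution over flows rather than a point estimate.
    pred_quantiles: (batch, n_quantiles) predicted quantile values
    target:         (batch,) scalar regression targets
    taus:           (n_quantiles,) quantile fractions in (0, 1)
    """
    u = target.unsqueeze(-1) - pred_quantiles          # (batch, n_quantiles)
    huber = torch.where(u.abs() <= kappa,
                        0.5 * u.pow(2),
                        kappa * (u.abs() - 0.5 * kappa))
    # Asymmetric weighting drives each head toward its own quantile.
    weight = (taus - (u.detach() < 0).float()).abs()
    return (weight * huber / kappa).mean()

# Usage with 8 evenly spaced quantile fractions.
taus = (torch.arange(8) + 0.5) / 8
loss = quantile_huber_loss(torch.randn(4, 8), torch.randn(4), taus)
```

A risk-sensitive policy can then be obtained by aggregating only the lower (or upper) quantiles instead of the mean.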
- A theory of continuous generative flow networks [104.93913776866195]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions.
We present a theory for generalized GFlowNets, which encompasses both existing discrete GFlowNets and ones with continuous or hybrid state spaces.
arXiv Detail & Related papers (2023-01-30T00:37:56Z)
- Learning GFlowNets from partial episodes for improved convergence and stability [56.99229746004125]
Generative flow networks (GFlowNets) are algorithms for training a sequential sampler of discrete objects under an unnormalized target density.
Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory.
Inspired by the TD(λ) algorithm in reinforcement learning, we introduce subtrajectory balance or SubTB(λ), a GFlowNet training objective that can learn from partial action subsequences of varying lengths.
arXiv Detail & Related papers (2022-09-26T15:44:24Z)
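The SubTB(λ) objective from the entry above admits a compact sketch for a single complete trajectory, assuming log state flows and log transition probabilities are precomputed; the tensor layout and the convention that the terminal log-flow equals the log-reward are assumptions of this illustration:

```python
import torch

def subtb_lambda_loss(log_F, log_pf, log_pb, lam=0.9):
    """SubTB(lambda) over one trajectory s_0 -> ... -> s_n.
    log_F:  (n+1,) log state flows, with log_F[n] = log R(s_n)
    log_pf: (n,)   log P_F(s_{k+1} | s_k) along the trajectory
    log_pb: (n,)   log P_B(s_k | s_{k+1}) along the trajectory
    Each subtrajectory (i, j) contributes a squared balance residual,
    weighted by lam**(j - i) and normalized by the total weight.
    """
    n = log_pf.shape[0]
    # Prefix sums give each subtrajectory's log-ratio in O(1).
    cum = torch.cat([torch.zeros(1), torch.cumsum(log_pf - log_pb, dim=0)])
    total, weight_sum = torch.zeros(()), 0.0
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            resid = log_F[i] + (cum[j] - cum[i]) - log_F[j]
            w = lam ** (j - i)
            total = total + w * resid.pow(2)
            weight_sum += w
    return total / weight_sum

# Usage on a length-4 trajectory with random stand-in values.
loss = subtb_lambda_loss(torch.randn(5), torch.randn(4), torch.randn(4))
```

Small λ concentrates the weight on single transitions (a detailed-balance-like objective), while large λ emphasizes the full trajectory, mirroring the trade-off of TD(λ).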
- Generative Flow Networks for Discrete Probabilistic Modeling [118.81967600750428]
We present energy-based generative flow networks (EB-GFN), a novel probabilistic modeling algorithm for high-dimensional discrete data.
We show how GFlowNets can approximately perform large-block Gibbs sampling to mix between modes.
arXiv Detail & Related papers (2022-02-03T01:27:11Z)
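The large-block Gibbs idea in the last entry can be illustrated with a toy move on binary vectors: destroy a block of K coordinates, then let the forward policy regenerate them. Here `forward_fill` stands in for a trained GFlowNet sampler, and the uniform destruction step simplifies the learned backward policy; both are assumptions of this sketch, not the EB-GFN API.

```python
import torch

def block_gibbs_step(x, K, forward_fill):
    """One back-and-forth move approximating large-block Gibbs sampling:
    mask out K coordinates of a discrete object, then have the (ideally
    trained) forward policy re-generate them. If the sampler draws
    completions proportionally to exp(-energy), repeated moves mix
    between modes."""
    x = x.clone()
    idx = torch.randperm(x.numel())[:K]  # choose a block to resample
    x[idx] = -1                          # -1 marks "unfilled"
    return forward_fill(x)

# Toy usage: a uniform stand-in for the learned forward policy.
def uniform_fill(x):
    mask = x == -1
    x[mask] = torch.randint(0, 2, (int(mask.sum()),))
    return x

x = torch.randint(0, 2, (16,))
x_new = block_gibbs_step(x, K=4, forward_fill=uniform_fill)
```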