Learning Generative Models with Goal-conditioned Reinforcement Learning
- URL: http://arxiv.org/abs/2303.14811v1
- Date: Sun, 26 Mar 2023 20:33:44 GMT
- Title: Learning Generative Models with Goal-conditioned Reinforcement Learning
- Authors: Mariana Vargas Vieyra, Pierre Ménard
- Abstract summary: We present a novel framework for learning generative models with goal-conditioned reinforcement learning.
We empirically demonstrate that our method generates diverse and high-quality samples in the task of image synthesis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel, alternative framework for learning generative models with
goal-conditioned reinforcement learning. We define two agents, a goal
conditioned agent (GC-agent) and a supervised agent (S-agent). Given a
user-input initial state, the GC-agent learns to reconstruct the training set.
In this context, elements in the training set are the goals. During training,
the S-agent learns to imitate the GC-agent while remaining agnostic of the
goals. At inference we generate new samples with the S-agent. Following a
route similar to that of variational auto-encoders, we derive an upper bound
on the negative log-likelihood that consists of a reconstruction term and a
divergence between the GC-agent policy and the (goal-agnostic) S-agent
policy. We empirically demonstrate that our method generates diverse and
high-quality samples in the task of image synthesis.
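The bound mentioned above parallels the variational auto-encoder derivation. As a rough illustrative sketch (the notation here is assumed for exposition, not taken from the paper): writing x for a training sample treated as a goal, tau for a trajectory, pi_GC for the goal-conditioned policy and pi_S for the goal-agnostic policy, a bound of this shape would read

    -\log p_S(x) \le \mathbb{E}_{\tau \sim \pi_{GC}(\cdot\mid x)}[-\log p(x \mid \tau)] + D_{\mathrm{KL}}(\pi_{GC}(\cdot\mid x) \,\|\, \pi_S(\cdot))

with the first term acting as the reconstruction term and the KL term as the divergence between the two policies. A correspondingly hedged sketch of the two-agent training step, using PyTorch and hypothetical class and variable names, might look like:

```python
# Illustrative sketch only: names (PolicyNet, train_step, returns, ...) are
# assumptions for exposition, not the authors' implementation.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small MLP policy over discrete actions, optionally goal-conditioned."""
    def __init__(self, obs_dim: int, n_actions: int, goal_dim: int = 0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs, goal=None):
        x = obs if goal is None else torch.cat([obs, goal], dim=-1)
        return torch.distributions.Categorical(logits=self.net(x))

def train_step(gc_agent, s_agent, optimizer, states, goals, actions, returns):
    """One joint update: the GC-agent maximizes a reconstruction reward
    (how well rollouts reproduce the goal sample), while the S-agent
    imitates the GC-agent without seeing the goal."""
    gc_dist = gc_agent(states, goals)   # pi_GC(. | s, g), goal-conditioned
    s_dist = s_agent(states)            # pi_S(. | s), goal-agnostic

    # REINFORCE-style surrogate for the reconstruction term.
    reconstruction_loss = -(gc_dist.log_prob(actions) * returns).mean()

    # Divergence term: pull the S-agent toward the (detached) GC-agent policy.
    divergence_loss = torch.distributions.kl_divergence(
        torch.distributions.Categorical(logits=gc_dist.logits.detach()),
        s_dist,
    ).mean()

    loss = reconstruction_loss + divergence_loss
    optimizer.zero_grad()   # optimizer is assumed to cover both agents' parameters
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, new samples would then be produced by rolling out the goal-agnostic S-agent alone from a user-supplied initial state, as the abstract describes.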
Related papers
- Agent-as-a-Judge: Evaluate Agents with Agents [61.33974108405561]
We introduce the Agent-as-a-Judge framework, wherein agentic systems are used to evaluate agentic systems.
This is an organic extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback for the entire task-solving process.
We present DevAI, a new benchmark of 55 realistic automated AI development tasks.
arXiv Detail & Related papers (2024-10-14T17:57:02Z) - Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
G"odel Agent is a self-evolving framework inspired by the G"odel machine.
G"odel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - Aligning Agents like Large Language Models [8.873319874424167]
Training agents to behave as desired in complex 3D environments from high-dimensional sensory information is challenging.
We draw an analogy between the undesirable behaviors of imitation learning agents and the unhelpful responses of unaligned large language models (LLMs).
We demonstrate that we can align our agent to consistently perform the desired mode, while providing insights and advice for successfully applying this approach to training agents.
arXiv Detail & Related papers (2024-06-06T16:05:45Z) - AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z) - Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models [56.00992369295851]
Open-sourced Large Language Models (LLMs) have achieved great success in various NLP tasks; however, they are still far inferior to API-based models when acting as agents.
This paper delivers three key observations: (1) the current agent training corpus is entangled with both formats following and agent reasoning, which significantly shifts from the distribution of its pre-training data; (2) LLMs exhibit different learning speeds on the capabilities required by agent tasks; and (3) current approaches have side-effects when improving agent abilities by introducing hallucinations.
We propose Agent-FLAN to effectively Fine-tune LANguage models for Agents.
arXiv Detail & Related papers (2024-03-19T16:26:10Z) - CCA: Collaborative Competitive Agents for Image Editing [59.54347952062684]
This paper presents a novel generative model, Collaborative Competitive Agents (CCA).
It leverages the capabilities of multiple Large Language Models (LLMs) based agents to execute complex tasks.
The paper's main contributions include the introduction of a multi-agent-based generative model with controllable intermediate steps and iterative optimization.
arXiv Detail & Related papers (2024-01-23T11:46:28Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - A Framework for Understanding and Visualizing Strategies of RL Agents [0.0]
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
We evaluate our framework on combat scenarios in StarCraft II (SC2) using traces from a handcrafted expert policy and a trained reinforcement learning agent.
arXiv Detail & Related papers (2022-08-17T21:58:19Z) - BGC: Multi-Agent Group Belief with Graph Clustering [1.9949730506194252]
We propose a semi-communication method that enables agents to exchange information without communication.
Inspired by the neighborhood cognitive consistency, we propose a group-based module to divide adjacent agents into a small group and minimize in-group agents' beliefs.
Results reveal that the proposed method achieves a significant improvement in the SMAC benchmark.
arXiv Detail & Related papers (2020-08-20T07:07:20Z) - Learning to Model Opponent Learning [11.61673411387596]
Multi-Agent Reinforcement Learning (MARL) considers settings in which a set of coexisting agents interact with one another and their environment.
This poses a great challenge for value function-based algorithms whose convergence usually relies on the assumption of a stationary environment.
We develop a novel approach to modelling an opponent's learning dynamics, which we term Learning to Model Opponent Learning (LeMOL).
arXiv Detail & Related papers (2020-06-06T17:19:04Z)