Deep RL with Hierarchical Action Exploration for Dialogue Generation
- URL: http://arxiv.org/abs/2303.13465v3
- Date: Mon, 15 May 2023 08:04:18 GMT
- Title: Deep RL with Hierarchical Action Exploration for Dialogue Generation
- Authors: Itsugun Cho, Ryota Takahashi, Yusaku Yanase, Hiroaki Saito
- Abstract summary: This paper presents theoretical analysis and experiments revealing that the performance of the dialogue policy is positively correlated with the sampling size.
We introduce a novel dual-granularity Q-function that explores the most promising response category to intervene in the sampling process.
Our algorithm exhibits both explainability and controllability and generates responses with higher expected rewards.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, approximate dynamic programming is employed in dialogue
generation with greedy policy improvement through action sampling, as the
natural language action space is vast. However, this practice is inefficient
for reinforcement learning (RL) due to the sparsity of eligible responses with
high action values, which leads to weak improvement sustained by random
sampling. This paper presents theoretical analysis and experiments that reveal
that the performance of the dialogue policy is positively correlated with the
sampling size. To overcome this limitation, we introduce a novel
dual-granularity Q-function that explores the most promising response category
to intervene in the sampling process. Our approach extracts actions based on a
granular hierarchy, thereby achieving the optimum with fewer policy iterations.
Additionally, we use offline RL and learn from multiple reward functions
designed to capture emotional nuances in human interactions. Empirical studies
demonstrate that our algorithm outperforms baselines across automatic metrics
and human evaluations. Further testing reveals that our algorithm exhibits both
explainability and controllability and generates responses with higher expected
rewards.
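As a rough picture of the dual-granularity idea, the coarse level scores response categories, sampling is then restricted to the most promising category, and the fine level ranks the sampled candidates. The sketch below is an illustrative reconstruction from the abstract only; the category set, the two Q-heads, and `sample_from_category` are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

# Hypothetical components: in the paper these would be learned networks.
def coarse_q(state_vec, num_categories):
    """Coarse Q-head: one value per response *category*."""
    rng = np.random.default_rng(0)          # stand-in for a trained network
    return rng.normal(size=num_categories)

def fine_q(state_vec, candidate_vecs):
    """Fine Q-head: one value per sampled candidate response."""
    return candidate_vecs @ state_vec        # stand-in scoring rule

def select_response(state_vec, sample_from_category, num_categories=8, k=16):
    # 1) Coarse level: pick the most promising response category.
    best_cat = int(np.argmax(coarse_q(state_vec, num_categories)))
    # 2) Intervene in sampling: draw candidates only from that category,
    #    so far fewer samples are needed to hit a high-value response.
    candidates, candidate_vecs = sample_from_category(best_cat, k)
    # 3) Fine level: rank the k candidates and return the argmax.
    scores = fine_q(state_vec, candidate_vecs)
    return candidates[int(np.argmax(scores))], best_cat
```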
Related papers
- Action abstractions for amortized sampling [49.384037138511246]
We propose an approach to incorporate the discovery of action abstractions, or high-level actions, into the policy optimization process.
Our approach involves iteratively extracting action subsequences commonly used across many high-reward trajectories and 'chunking' them into a single action that is added to the action space.
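A minimal sketch of the chunking step described above, assuming trajectories are token lists paired with scalar returns; the function name and hyperparameters are illustrative, not taken from the paper.

```python
from collections import Counter

def most_common_chunk(trajectories, rewards, top_frac=0.1, n=3):
    """Find the n-gram that appears most often in the highest-reward trajectories."""
    top_k = max(1, int(len(trajectories) * top_frac))
    best = sorted(zip(rewards, trajectories), key=lambda p: p[0], reverse=True)[:top_k]
    counts = Counter()
    for _, traj in best:
        for i in range(len(traj) - n + 1):
            counts[tuple(traj[i:i + n])] += 1
    return counts.most_common(1)[0][0] if counts else None

# The returned subsequence is added to the action space as one macro-action,
# and policy optimization then continues over the enlarged action set.
```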
arXiv Detail & Related papers (2024-10-19T19:22:50Z)
- Enabling Real-Time Conversations with Minimal Training Costs [61.80370154101649]
This paper presents a new duplex decoding approach that enhances large language models with duplex ability, requiring minimal training.
Experimental results indicate that our proposed method significantly enhances the naturalness and human-likeness of user-AI interactions with minimal training costs.
arXiv Detail & Related papers (2024-09-18T06:27:26Z)
- Multi-turn Reinforcement Learning from Preference Human Feedback [41.327438095745315]
Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models with human preferences.
Existing methods work by modeling preferences at the single decision (turn) level.
We develop novel methods for Reinforcement Learning from preference feedback between two full multi-turn conversations.
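A common way to express a preference between two whole conversations, rather than single turns, is a Bradley-Terry likelihood over trajectory-level scores. The snippet below is a generic sketch under that assumption, not the paper's specific algorithm; `score_fn` is a hypothetical conversation-level scorer.

```python
import torch

def multi_turn_preference_loss(score_fn, conv_a, conv_b, a_preferred: bool):
    """Bradley-Terry style loss over two *full* conversations.

    score_fn maps a whole conversation (list of turns) to a scalar tensor,
    e.g. a sum of per-turn log-probabilities or a learned value.
    """
    s_a, s_b = score_fn(conv_a), score_fn(conv_b)
    margin = s_a - s_b if a_preferred else s_b - s_a
    return -torch.nn.functional.logsigmoid(margin)
```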
arXiv Detail & Related papers (2024-05-23T14:53:54Z)
- LIRE: listwise reward enhancement for preference alignment [27.50204023448716]
We propose a gradient-based reward optimization approach that incorporates the offline rewards of multiple responses into a streamlined listwise framework.
LIRE is straightforward to implement, requiring minimal parameter tuning, and seamlessly aligns with the pairwise paradigm.
Our experiments demonstrate that LIRE consistently outperforms existing methods across several benchmarks on dialogue and summarization tasks.
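One way to read "a streamlined listwise framework" is an expected-reward objective over the candidate list, with the policy's softmax acting as the list distribution. The sketch below illustrates that reading and is an assumption; the exact LIRE loss may differ.

```python
import torch

def listwise_reward_objective(logps, rewards, temperature=1.0):
    """logps: policy log-probabilities of K candidate responses, shape [K];
    rewards: their offline rewards, shape [K].
    Maximizing this pushes probability mass toward high-reward candidates."""
    probs = torch.softmax(logps / temperature, dim=-1)   # distribution over the list
    return (probs * rewards).sum()                        # expected reward over the list
```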
arXiv Detail & Related papers (2024-05-22T10:21:50Z)
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
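Entropy-regularized, token-level objectives generally add a per-token entropy bonus on top of an advantage-weighted likelihood term. The snippet below is a generic sketch of such an objective, not the exact ETPO update.

```python
import torch

def token_level_entropy_objective(logits, actions, advantages, beta=0.01):
    """logits: [T, V] per-token logits; actions: [T] chosen token ids;
    advantages: [T] per-token advantage estimates; beta: entropy weight."""
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)   # [T]
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)            # [T]
    return (chosen * advantages + beta * entropy).mean()            # maximize
```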
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework that acquires exploratory trajectories enabling accurate learning of hidden reward functions.
arXiv Detail & Related papers (2023-05-29T15:00:09Z)
- Taming Continuous Posteriors for Latent Variational Dialogue Policies [1.0312968200748118]
We revisit Gaussian variational posteriors for latent-action RL and show that they can yield even better performance than categoricals.
We achieve this by simplifying the training procedure and proposing ways to regularize the latent dialogue policy.
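Latent-action dialogue policies with Gaussian posteriors typically draw a continuous latent via the reparameterization trick and regularize it toward a prior before decoding a response. The sketch below shows only that generic machinery, with hypothetical tensor names, and is not the paper's training procedure.

```python
import torch

def sample_latent_action(mu, log_var):
    """Reparameterized draw from the Gaussian posterior q(z | dialogue context)."""
    std = torch.exp(0.5 * log_var)
    return mu + std * torch.randn_like(std)   # differentiable sample, fed to a response decoder

def kl_to_standard_normal(mu, log_var):
    """KL(q || N(0, I)) term commonly used to regularize the latent policy."""
    return 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=-1)
```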
arXiv Detail & Related papers (2022-05-16T12:50:32Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
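In the soft Q-learning view of text generation, the generator's per-token logits play the role of Q-values, so the sampling policy and the soft state value follow directly; a minimal illustration of that standard relation:

```python
import torch

def soft_q_policy(q_values, temperature=1.0):
    """Soft Q-learning view: logits act as Q(s, token) and the policy is
    pi(token | s) proportional to exp(Q / temperature)."""
    return torch.softmax(q_values / temperature, dim=-1)

def soft_value(q_values, temperature=1.0):
    """Soft state value V(s) = temperature * logsumexp(Q / temperature)."""
    return temperature * torch.logsumexp(q_values / temperature, dim=-1)
```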
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
- Cross-sentence Neural Language Models for Conversational Speech Recognition [17.317583079824423]
We propose an effective cross-sentence neural LM approach that reranks the ASR N-best hypotheses of an upcoming sentence.
We also explore extracting task-specific global topical information from the cross-sentence history.
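Cross-sentence reranking usually combines the first-pass ASR score with an LM score that conditions on the preceding sentence(s). The sketch below assumes a hypothetical `lm_logprob(history, hypothesis)` scorer and is not the paper's exact model.

```python
def rerank_nbest(hypotheses, asr_scores, history, lm_logprob, lm_weight=0.5):
    """hypotheses: N-best strings; asr_scores: first-pass scores;
    history: previous sentence(s) given to the cross-sentence LM."""
    rescored = [
        asr + lm_weight * lm_logprob(history, hyp)     # LM sees cross-sentence context
        for asr, hyp in zip(asr_scores, hypotheses)
    ]
    best = max(range(len(hypotheses)), key=lambda i: rescored[i])
    return hypotheses[best]
```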
arXiv Detail & Related papers (2021-06-13T05:30:16Z)
- Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it to correlated actions, and combine these critic-estimated action values to control the variance of gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
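Combining critic-estimated action values over the full discrete action set commonly means using the policy-weighted mean of Q as a baseline so that advantages drive the update. The sketch below shows that generic variance-reduction device, not the paper's exact estimator.

```python
import torch

def low_variance_policy_gradient_loss(logits, q_values):
    """logits: [A] policy logits; q_values: [A] critic estimates Q(s, a)
    for every discrete action. The policy-weighted mean of Q serves as a
    baseline, so the advantage of each action drives the update."""
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    baseline = (probs * q_values).sum()                       # E_pi[Q(s, .)]
    advantages = (q_values - baseline).detach()
    return -(probs.detach() * advantages * log_probs).sum()   # minimize
```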
arXiv Detail & Related papers (2020-02-10T04:23:09Z)