Flow Actor-Critic for Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2602.18015v1
- Date: Fri, 20 Feb 2026 06:11:12 GMT
- Title: Flow Actor-Critic for Offline Reinforcement Learning
- Authors: Jongseong Chae, Jongeui Park, Yongjae Shin, Gyeongmin Kim, Seungyul Han, Youngchul Sung
- Abstract summary: We propose Flow Actor-Critic, a new actor-critic method for offline RL, based on recent flow policies. We achieve new state-of-the-art performance on offline RL benchmarks, including D4RL and the recent OGBench.
- Score: 20.074534038481666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Datasets in offline reinforcement learning (RL) often exhibit complex, multi-modal distributions, necessitating expressive policies beyond the widely used Gaussian policies to capture them. To handle such complex and multi-modal datasets, we propose Flow Actor-Critic, a new actor-critic method for offline RL built on recent flow policies. The proposed method not only uses a flow model for the actor, as in previous flow policies, but also exploits the expressive flow model to obtain a conservative critic and prevent Q-value explosion in out-of-data regions. To this end, we propose a new form of critic regularizer based on the flow behavior proxy model obtained as a byproduct of the flow-based actor design. By leveraging the flow model in this joint way, we achieve new state-of-the-art performance on offline RL benchmarks, including D4RL and the recent OGBench.
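The abstract describes the critic regularizer only at a high level. The following is a minimal sketch of how a flow-based behavior proxy could be used to keep the critic conservative, assuming a CQL-style penalty; the interfaces of `q_net`, `policy`, and `flow_behavior_proxy`, the coefficient `lambda_reg`, and the exact form of the regularizer are illustrative assumptions, not the paper's actual loss.

```python
# Minimal sketch (illustrative only): a conservative critic update that anchors
# Q-values to a flow-based behavior proxy. The CQL-style form of the penalty and
# the interfaces of q_net, policy, and flow_behavior_proxy are assumptions; the
# paper's actual regularizer is not specified in the abstract.
import torch
import torch.nn.functional as F

def critic_loss(q_net, target_q_net, policy, flow_behavior_proxy, batch,
                gamma=0.99, lambda_reg=1.0):
    s, a, r, s_next, done = batch  # tensors sampled from the offline dataset

    # Standard TD target, using the flow actor to propose the next action.
    with torch.no_grad():
        a_next = policy.sample(s_next)
        td_target = r + gamma * (1.0 - done) * target_q_net(s_next, a_next)
    td_loss = F.mse_loss(q_net(s, a), td_target)

    # Assumed conservative regularizer: push Q down on actions proposed by the
    # current policy and up on actions sampled from the flow behavior proxy,
    # discouraging Q-value explosion outside the data support.
    a_pi = policy.sample(s)
    a_beta = flow_behavior_proxy.sample(s)
    regularizer = (q_net(s, a_pi) - q_net(s, a_beta)).mean()

    return td_loss + lambda_reg * regularizer
```

Per the abstract, the behavior proxy is obtained as a byproduct of the flow-based actor design, so no separate behavior model would be needed; only the precise form of the regularizer would differ from the assumed one above.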
Related papers
- Causal Flow Q-Learning for Robust Offline Reinforcement Learning [53.63254824501714]
We introduce a practical implementation that learns expressive flow-matching policies from confounded demonstrations. Our proposed confounding-robust augmentation procedure achieves 120% of the success rate of confounding-unaware, state-of-the-art offline RL methods.
arXiv Detail & Related papers (2026-02-02T21:50:52Z)
- Scalable Offline Model-Based RL with Action Chunks [60.80151356018376]
We study whether model-based reinforcement learning can provide a scalable recipe for tackling complex, long-horizon tasks in offline RL. We call this recipe Model-Based RL with Action Chunks (MAC). We show that MAC achieves the best performance among offline model-based RL algorithms, especially on challenging long-horizon tasks.
arXiv Detail & Related papers (2025-12-08T23:26:29Z)
- Unleashing Flow Policies with Distributional Critics [15.149475517073258]
We introduce the Distributional Flow Critic (DFC), a novel critic architecture that learns the complete state-action return distribution. DFC provides the expressive flow-based policy with a rich, distributional Bellman target, which offers a more stable and informative learning signal.
arXiv Detail & Related papers (2025-09-27T03:51:06Z)
- One-Step Flow Policy Mirror Descent [52.31612487608593]
Flow Policy Mirror Descent (FPMD) is an online RL algorithm that enables 1-step sampling during flow policy inference (see the one-step sampling sketch after this list). Our approach exploits a theoretical connection between the distribution variance and the discretization error of single-step sampling in straight flow matching models.
arXiv Detail & Related papers (2025-07-31T15:51:10Z)
- Decision Flow Policy Optimization [53.825268058199825]
We show that generative models can effectively model complex multi-modal action distributions and achieve superior robotic control in continuous action spaces. Previous methods usually adopt generative models as behavior models to fit state-conditioned action distributions from datasets. We propose Decision Flow, a unified framework that integrates multi-modal action distribution modeling and policy optimization.
arXiv Detail & Related papers (2025-05-26T03:42:20Z)
- Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization [14.320131946691268]
We propose an easy-to-use and theoretically sound fine-tuning method for flow-based generative models. By introducing an online reward-weighting mechanism, our approach guides the model to prioritize high-reward regions in the data manifold. Our method achieves optimal policy convergence while allowing controllable trade-offs between reward and diversity.
arXiv Detail & Related papers (2025-02-09T22:45:15Z)
- Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning [70.20191211010847]
Offline reinforcement learning (RL) aims to learn an optimal policy using a previously collected static dataset.
We introduce Diffusion Q-learning (Diffusion-QL) that utilizes a conditional diffusion model to represent the policy.
We show that our method can achieve state-of-the-art performance on the majority of the D4RL benchmark tasks.
arXiv Detail & Related papers (2022-08-12T09:54:11Z)
- MOPO: Model-based Offline Policy Optimization [183.6449600580806]
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data.
We show that an existing model-based RL algorithm already produces significant gains in the offline setting.
We propose to modify existing model-based RL methods by training them on rewards artificially penalized by the uncertainty of the dynamics (see the sketch after this list).
arXiv Detail & Related papers (2020-05-27T08:46:41Z)
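The MOPO entry above describes a concrete mechanism: rewards penalized by the uncertainty of the learned dynamics. Below is a minimal sketch of that idea, assuming an ensemble of dynamics models whose disagreement serves as the uncertainty estimate; the ensemble interface, the std-based uncertainty proxy, and the coefficient `lam` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of uncertainty-penalized rewards in the spirit of MOPO
# (illustrative, not the paper's code). The ensemble interface, the std-based
# uncertainty proxy, and the coefficient lam are assumptions.
import torch

def penalized_reward(reward, dynamics_ensemble, s, a, lam=1.0):
    # Each ensemble member predicts the next state; stack to [K, batch, state_dim].
    preds = torch.stack([model(s, a) for model in dynamics_ensemble], dim=0)
    # Disagreement across the ensemble serves as the uncertainty estimate u(s, a).
    uncertainty = preds.std(dim=0).mean(dim=-1)
    # Policy optimization then uses the pessimistic reward r - lam * u.
    return reward - lam * uncertainty
```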
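The One-Step Flow Policy Mirror Descent entry above mentions 1-step sampling during flow policy inference. As a point of reference, the sketch below contrasts standard multi-step Euler integration of a flow policy with a single-step shortcut; the `velocity_net` interface and its `action_dim` attribute are illustrative assumptions, not FPMD's actual implementation.

```python
# Illustrative contrast between multi-step and one-step flow-policy sampling
# (a sketch, not FPMD's code). velocity_net(state, x, t) is assumed to return
# the learned velocity field; velocity_net.action_dim is an assumed attribute.
import torch

def sample_action_euler(velocity_net, state, steps=10):
    # Standard flow-policy inference: integrate the velocity field from noise
    # to an action with several Euler steps.
    x = torch.randn(state.shape[0], velocity_net.action_dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((state.shape[0], 1), i * dt)
        x = x + dt * velocity_net(state, x, t)
    return x

def sample_action_one_step(velocity_net, state):
    # One-step shortcut: with a (near-)straight flow, a single Euler step from
    # t=0 already lands close to the target action distribution.
    x = torch.randn(state.shape[0], velocity_net.action_dim)
    t = torch.zeros(state.shape[0], 1)
    return x + velocity_net(state, x, t)
```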
This list is automatically generated from the titles and abstracts of the papers on this site.