MaskFocus: Focusing Policy Optimization on Critical Steps for Masked Image Generation
- URL: http://arxiv.org/abs/2512.18766v1
- Date: Sun, 21 Dec 2025 15:08:31 GMT
- Title: MaskFocus: Focusing Policy Optimization on Critical Steps for Masked Image Generation
- Authors: Guohui Zhang, Hu Yu, Xiaoxiao Ma, Yaning Pan, Hang Xu, Feng Zhao
- Abstract summary: We present MaskFocus, a novel RL framework that achieves effective policy optimization for masked generative models. Specifically, we determine the step-level information gain by measuring the similarity between the intermediate images at each sampling step and the final generated image. We leverage this to identify the most critical and valuable steps and execute focused policy optimization on them.
- Score: 21.160947261963088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) has demonstrated significant potential for post-training language models and autoregressive visual generative models, but adapting RL to masked generative models remains challenging. The core difficulty is that policy optimization must account for the likelihood of each step in the multi-step, iterative refinement process. This reliance on entire sampling trajectories introduces high computational cost, whereas naively optimizing random steps often yields suboptimal results. In this paper, we present MaskFocus, a novel RL framework that achieves effective policy optimization for masked generative models by focusing on critical steps. Specifically, we determine the step-level information gain by measuring the similarity between the intermediate images at each sampling step and the final generated image. Crucially, we leverage this to identify the most critical and valuable steps and execute focused policy optimization on them. Furthermore, we design an entropy-based dynamic routing sampling mechanism that encourages the model to explore more valuable masking strategies for low-entropy samples. Extensive experiments on multiple text-to-image benchmarks validate the effectiveness of our method.
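The step-selection idea reads cleanly as code. Below is a minimal, hypothetical PyTorch sketch of it (the cosine-similarity metric, the top-k rule, and all function names are our assumptions for illustration, not the authors' released implementation): each sampling step is scored by how much its decoded intermediate image moves toward the final image, and only the highest-gain steps receive policy-gradient updates.

```python
import torch
import torch.nn.functional as F

def step_information_gain(intermediates, final_image):
    """Per-step information gain: the increase, from one sampling step to
    the next, in similarity between the decoded intermediate image and the
    final generated image. `intermediates` is a list of (B, C, H, W)
    tensors, one per step; `final_image` is (B, C, H, W)."""
    sims = torch.stack([
        F.cosine_similarity(x.flatten(1), final_image.flatten(1), dim=1).mean()
        for x in intermediates
    ])                                           # (T,) similarity per step
    return sims.diff(prepend=sims.new_zeros(1))  # gain over the prior step

def select_critical_steps(gains, k):
    """Keep the k steps with the largest information gain; the policy loss
    is computed only on these steps rather than the full trajectory."""
    return torch.topk(gains, k).indices
```

In a GRPO-style training loop, one would roll out the sampler once, cache the per-step intermediates, and backpropagate the policy loss only through the selected steps, which is what makes this cheaper than optimizing the entire sampling trajectory.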
Related papers
- Know Your Step: Faster and Better Alignment for Flow Matching Models via Step-aware Advantages [6.470160796651034]
We propose a novel framework for training flow-matching text-to-image models into efficient few-step generators well aligned with human preferences. We show that TAFS-GRPO achieves strong performance in few-step text-to-image generation and significantly improves the alignment of generated images with human preferences.
arXiv Detail & Related papers (2026-02-02T03:32:00Z)
- Co-GRPO: Co-Optimized Group Relative Policy Optimization for Masked Diffusion Model [74.99242687133408]
Masked Diffusion Models (MDMs) have shown promising potential across vision, language, and cross-modal generation. We introduce Co-GRPO, which reformulates MDM generation as a unified Markov Decision Process (MDP) that jointly incorporates both the model and the inference schedule.
arXiv Detail & Related papers (2025-12-25T12:06:04Z)
- Consolidating Reinforcement Learning for Multimodal Discrete Diffusion Models [40.82263997290613]
We introduce MaskGRPO, the first viable approach to enable scalable multimodal reinforcement learning in discrete diffusion. MaskGRPO brings more stable and efficient updates, leading to stronger reasoning performance and better generation quality.
arXiv Detail & Related papers (2025-10-03T10:36:24Z)
- TCPO: Thought-Centric Preference Optimization for Effective Embodied Decision-making [75.29820290660065]
This paper proposes Thought-Centric Preference Optimization (TCPO) for effective embodied decision-making. It emphasizes the alignment of the model's intermediate reasoning process, mitigating the problem of model degradation. Experiments in the ALFWorld environment demonstrate an average success rate of 26.67%, achieving a 6% improvement over RL4VLM.
arXiv Detail & Related papers (2025-09-10T11:16:21Z)
- Domain Adaptation of LLMs for Process Data [7.611051482274626]
Large Language Models (LLMs) have emerged as a prominent area of interest across various research domains, including Process Mining (PM). This study investigates the direct adaptation of pretrained LLMs to process data without natural language reformulation. More specifically, we focus on parameter-efficient fine-tuning techniques to mitigate the computational overhead typically associated with such models.
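As a concrete instance of the parameter-efficient fine-tuning this entry refers to, here is a minimal LoRA setup using the Hugging Face `peft` library (the base model, rank, and target modules are placeholder choices for illustration, not the paper's configuration):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; the paper adapts pretrained LLMs to process data.
model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,              # adapter scaling factor
    target_modules=["c_attn"],  # attention projections in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapters are trainable
```

With adapters of this kind, only a small fraction of the weights are updated, which is the computational saving such methods rely on.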
arXiv Detail & Related papers (2025-09-03T09:21:35Z)
- From Data-Centric to Sample-Centric: Enhancing LLM Reasoning via Progressive Optimization [7.531052649961168]
Reinforcement learning with verifiable rewards (RLVR) has recently advanced the reasoning capabilities of large language models (LLMs). We investigate RLVR from a sample-centric perspective and introduce LPPO, a framework of progressive optimization techniques. Our work addresses a critical question: how to best leverage a small set of trusted, high-quality demonstrations, rather than simply scaling up data volume.
arXiv Detail & Related papers (2025-07-09T06:05:28Z)
- Fine-Tuning Next-Scale Visual Autoregressive Models with Group Relative Policy Optimization [1.1510009152620668]
Fine-tuning pre-trained generative models with Reinforcement Learning (RL) has emerged as an effective approach for aligning outputs with human preferences. We show that RL-based fine-tuning is both efficient and effective for VAR models, benefiting particularly from their fast inference speeds.
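For readers unfamiliar with Group Relative Policy Optimization, its core advantage computation is simple enough to sketch (a generic rendering of GRPO, not this paper's exact implementation): rewards for a group of rollouts of the same prompt are standardized within the group, removing the need for a learned critic.

```python
import torch

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages for a group of G rollouts of one prompt:
    standardize rewards within the group. `rewards` has shape (G,)."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO-style clipped surrogate, driven by group-relative advantages."""
    ratio = (logp_new - logp_old).exp()
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps)
    return -torch.minimum(ratio * advantages, clipped * advantages).mean()
```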
arXiv Detail & Related papers (2025-05-29T10:45:38Z)
- Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step [86.69947123512836]
Chain-of-Thought (CoT) reasoning has been extensively explored in large models to tackle complex understanding tasks. We provide the first comprehensive investigation of the potential of CoT reasoning to enhance autoregressive image generation. We propose the Potential Assessment Reward Model (PARM) and PARM++, specialized for autoregressive image generation.
arXiv Detail & Related papers (2025-01-23T18:59:43Z)
- Bellman Optimal Stepsize Straightening of Flow-Matching Models [14.920260435839992]
This paper introduces the Bellman Optimal Stepsize Straightening (BOSS) technique for distilling flow-matching generative models.
BOSS aims specifically at efficient few-step image sampling while adhering to a computational budget constraint.
Our results reveal that BOSS achieves substantial gains in efficiency while maintaining competitive sample quality.
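The "Bellman optimal stepsize" idea admits a short sketch: given a cost for jumping directly between any two points on the time grid, a dynamic program (a Bellman recursion) picks the K-step schedule with the lowest total cost. This is a generic rendering of such a recursion with the cost function left abstract; it is not the authors' code.

```python
import numpy as np

def bellman_stepsizes(cost, K):
    """Choose K jumps through grid points 0..N minimizing summed cost,
    where cost[i][j] is the error of jumping directly from i to j > i.
    Returns the selected grid indices, starting at 0 and ending at N."""
    N = cost.shape[0] - 1
    dp = np.full((K + 1, N + 1), np.inf)  # dp[k][j]: best cost to j in k jumps
    parent = np.zeros((K + 1, N + 1), dtype=int)
    dp[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(1, N + 1):
            for i in range(j):
                c = dp[k - 1][i] + cost[i][j]
                if c < dp[k][j]:
                    dp[k][j], parent[k][j] = c, i
    path, j = [N], N                      # backtrack from the endpoint
    for k in range(K, 0, -1):
        j = parent[k][j]
        path.append(j)
    return path[::-1]
```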
arXiv Detail & Related papers (2023-12-27T05:20:20Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similarly improved performance on code generation tasks.
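The greedy search described here can be sketched in a few lines; `propose_steps` and `prm_score` below are placeholder hooks standing in for the LLM sampler and the step-level reward model, and the widths are illustrative:

```python
def greedy_prm_search(prompt, propose_steps, prm_score, max_steps=8, width=4):
    """Greedy decoding over reasoning steps: at each position, sample
    `width` candidate next steps, score each extended chain with the
    step-level reward model, and keep only the best-scoring candidate."""
    chain = [prompt]
    for _ in range(max_steps):
        candidates = propose_steps(chain, width)
        if not candidates:
            break  # the proposer signals that the chain is complete
        chain.append(max(candidates, key=lambda s: prm_score(chain + [s])))
    return chain
```

Unlike full tree search, this keeps exactly one partial chain alive, so the reward model is queried only `width` times per step.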
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- Reparameterized Policy Learning for Multimodal Trajectory Optimization [61.13228961771765]
We investigate the challenge of parametrizing policies for reinforcement learning in high-dimensional continuous action spaces.
We propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories.
We present a practical model-based RL method, which leverages the multimodal policy parameterization and learned world model.
arXiv Detail & Related papers (2023-07-20T09:05:46Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
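To our reading, the single objective has roughly the following shape, with a hypothesis class $\mathcal{F}$, a data-fit (e.g., negative log-likelihood) loss $L_{\mathcal{D}}$, and a trade-off coefficient $\eta$; this is a paraphrase, so consult the paper for the exact weighting:

```latex
\[
  \max_{f \in \mathcal{F}}
  \Big\{\, V_f(\pi_f) \;-\; \eta \, L_{\mathcal{D}}(f) \,\Big\}
\]
% V_f(\pi_f): optimal value under hypothesis f (the planning term)
% L_D(f):     estimation loss of f on the collected data
```

Maximizing value rewards optimistic hypotheses (exploration) while the estimation loss anchors the hypothesis to observed data (exploitation), which is how one objective can balance the two automatically.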
arXiv Detail & Related papers (2023-05-29T17:25:26Z)