MAPLE: Modality-Aware Post-training and Learning Ecosystem
- URL: http://arxiv.org/abs/2602.11596v1
- Date: Thu, 12 Feb 2026 05:26:36 GMT
- Title: MAPLE: Modality-Aware Post-training and Learning Ecosystem
- Authors: Nikhil Verma, Minjung Kim, JooYoung Yoo, Kyung-Min Jin, Manasa Bharadwaj, Kevin Ferreira, Ko Keun Kim, Youngjoon Kim
- Abstract summary: Existing RL post-training pipelines treat all input signals as equally relevant, ignoring which modalities each task actually requires. We introduce MAPLE, a complete modality-aware post-training and learning ecosystem. MAPLE narrows uni/multi-modal accuracy gaps by 30.24%, converges 3.18x faster, and maintains stability across all modality combinations.
- Score: 6.41025301801655
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multimodal language models now integrate text, audio, and video for unified reasoning. Yet existing RL post-training pipelines treat all input signals as equally relevant, ignoring which modalities each task actually requires. This modality-blind training inflates policy-gradient variance, slows convergence, and degrades robustness to real-world distribution shifts where signals may be missing, added, or reweighted. We introduce MAPLE, a complete modality-aware post-training and learning ecosystem comprising: (1) MAPLE-bench, the first benchmark explicitly annotating the minimal signal combinations required per task; (2) MAPO, a modality-aware policy optimization framework that stratifies batches by modality requirement to reduce gradient variance from heterogeneous group advantages; (3) adaptive weighting and curriculum scheduling that balance and prioritize harder signal combinations. Systematic analysis across loss aggregation, clipping, sampling, and curriculum design establishes MAPO's optimal training strategy. Adaptive weighting and curriculum-focused learning further boost performance across signal combinations. MAPLE narrows uni/multi-modal accuracy gaps by 30.24%, converges 3.18x faster, and maintains stability across all modality combinations under realistic reduced signal access. MAPLE constitutes a complete recipe for deployment-ready multimodal RL post-training.
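The batch-stratification idea behind MAPO (component 2) can be sketched in a few lines. The sample fields and the mean-baseline advantage below are illustrative assumptions inferred from the abstract, not the paper's actual implementation:

```python
from collections import defaultdict

def stratify_by_modality(samples):
    """Group rollouts by the minimal modality combination their task needs,
    so group advantages are computed over homogeneous strata
    (hypothetical field names)."""
    groups = defaultdict(list)
    for s in samples:
        groups[frozenset(s["required_modalities"])].append(s)
    return dict(groups)

def group_advantages(rewards):
    """Mean-centered advantages within one stratum (a GRPO-style baseline;
    a simplification of whatever estimator MAPO actually uses)."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

# A mixed batch: without stratification, text-only and text+audio rollouts
# would share one baseline, inflating policy-gradient variance.
batch = [
    {"required_modalities": ("text",), "reward": 1.0},
    {"required_modalities": ("text", "audio"), "reward": 0.0},
    {"required_modalities": ("text",), "reward": 0.0},
    {"required_modalities": ("text", "audio"), "reward": 1.0},
]
strata = stratify_by_modality(batch)
advantages = {k: group_advantages([s["reward"] for s in v])
              for k, v in strata.items()}
```

Each stratum gets its own baseline, so a hard text+audio task no longer drags down the advantages of easy text-only tasks in the same batch.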
Related papers
- Meta Hierarchical Reinforcement Learning for Scalable Resource Management in O-RAN [9.290879387995401]
This paper proposes an adaptive Meta Hierarchical Reinforcement Learning framework, inspired by Model Agnostic Meta Learning (MAML). The framework integrates hierarchical control with meta learning to enable both global and local adaptation. It achieves up to 40% faster adaptation and consistent fairness, latency, and throughput performance as network scale increases.
arXiv Detail & Related papers (2025-12-08T08:16:27Z) - CoT-Saliency: Unified Chain-of-Thought Reasoning for Heterogeneous Saliency Tasks [96.64597365827046]
We present the first unified framework that jointly handles three operationally heterogeneous saliency tasks. We introduce a Chain-of-Thought (CoT) reasoning process in a Vision-Language Model (VLM) to bridge task heterogeneity. We show our model matches or outperforms specialized SOTA methods and strong closed-source VLMs across all tasks.
arXiv Detail & Related papers (2025-11-01T04:37:01Z) - Transformer-based Scalable Beamforming Optimization via Deep Residual Learning [12.79709425087431]
An unsupervised deep learning framework for downlink beamforming in large-scale MU-MISO channels. The model is trained offline, allowing real-time inference through lightweight feedforward computations in dynamic communication environments.
arXiv Detail & Related papers (2025-10-15T01:43:51Z) - MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources [113.33902847941941]
Variance-Aware Sampling (VAS) is a data selection strategy guided by a Variance Promotion Score (VPS). We release large-scale, carefully curated resources containing 1.6M long CoT cold-start data and 15k RL QA pairs. Experiments across mathematical reasoning benchmarks demonstrate the effectiveness of both the curated data and the proposed VAS.
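From the abstract alone, VAS plausibly ranks prompts by how much their rollout rewards disagree. The score below is a plain reward-variance proxy, not the paper's exact VPS formula:

```python
def variance_promotion_score(rollout_rewards):
    """Population variance of a prompt's rollout rewards, used here as a
    stand-in for the Variance Promotion Score (exact formula not given
    in the abstract)."""
    n = len(rollout_rewards)
    mean = sum(rollout_rewards) / n
    return sum((r - mean) ** 2 for r in rollout_rewards) / n

def select_prompts(prompt_rollouts, k):
    """Keep the k prompts whose rollouts disagree most; all-correct or
    all-wrong prompts carry zero group-relative advantage and are dropped."""
    return sorted(prompt_rollouts,
                  key=lambda p: variance_promotion_score(p["rewards"]),
                  reverse=True)[:k]

pool = [
    {"id": "easy", "rewards": [1, 1, 1, 1]},   # solved every time
    {"id": "hard", "rewards": [0, 0, 0, 0]},   # never solved
    {"id": "mixed", "rewards": [1, 0, 1, 0]},  # maximal disagreement
]
selected = select_prompts(pool, 1)
```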
arXiv Detail & Related papers (2025-09-25T14:58:29Z) - SimMLM: A Simple Framework for Multi-modal Learning with Missing Modality [52.948791050405525]
We propose SimMLM, a simple yet powerful framework for multimodal learning with missing modalities. SimMLM consists of a generic Dynamic Mixture of Modality Experts (DMoME) architecture, featuring a dynamic, learnable gating mechanism. A key innovation of SimMLM is the proposed More vs. Fewer (MoFe) ranking loss, which ensures that task accuracy improves or remains stable as more modalities are made available.
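One hinge-style reading of the MoFe ranking loss: penalize any case where the task loss with more modalities exceeds the loss with fewer. This is a guess at the loss's shape from the abstract, not SimMLM's actual definition:

```python
def mofe_ranking_penalty(loss_more, loss_fewer, margin=0.0):
    """Hinge penalty that is zero whenever adding modalities does not hurt
    (loss_more <= loss_fewer - margin) and grows linearly otherwise.
    A sketch of the 'More vs. Fewer' idea; SimMLM's loss may differ."""
    return max(0.0, loss_more - loss_fewer + margin)

# Adding audio on top of text should not increase the task loss:
ok = mofe_ranking_penalty(loss_more=0.5, loss_fewer=0.75)   # more helped: 0.0
bad = mofe_ranking_penalty(loss_more=0.75, loss_fewer=0.5)  # more hurt: 0.25
```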
arXiv Detail & Related papers (2025-07-25T13:39:34Z) - Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs [51.21041884010009]
Ring-lite is a Mixture-of-Experts (MoE)-based large language model optimized via reinforcement learning (RL). Our approach matches the performance of state-of-the-art (SOTA) small-scale reasoning models on challenging benchmarks.
arXiv Detail & Related papers (2025-06-17T17:12:34Z) - Improving Multimodal Learning Balance and Sufficiency through Data Remixing [14.282792733217653]
Methods for enforcing the weak modality fail to achieve unimodal sufficiency and multimodal balance. We propose multimodal Data Remixing, including decoupling multimodal data and filtering hard samples for each modality to mitigate modality imbalance. Our method can be seamlessly integrated with existing approaches, improving accuracy by approximately 6.50% on CREMA-D and 3.41% on Kinetics-Sounds.
arXiv Detail & Related papers (2025-06-13T08:01:29Z) - Flow-GRPO: Training Flow Matching Models via Online RL [80.62659379624867]
We propose Flow-GRPO, the first method to integrate online policy reinforcement learning into flow matching models. Our approach uses two key strategies: (1) an ODE-to-SDE conversion that transforms a deterministic Ordinary Differential Equation into an equivalent Stochastic Differential Equation (SDE) that matches the original model's marginal distribution at all timesteps; and (2) a Denoising Reduction strategy that reduces training denoising steps while retaining the original number of inference steps.
arXiv Detail & Related papers (2025-05-08T17:58:45Z) - Balancing Multimodal Training Through Game-Theoretic Regularization [26.900302082724295]
Multimodal learning holds promise for richer information extraction by capturing dependencies across data sources. Yet, current training methods often underperform due to modality competition. This paper proposes the Multimodal Competition Regularizer (MCR), inspired by a mutual information (MI) decomposition.
arXiv Detail & Related papers (2024-11-11T19:53:05Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.