Adaptive Reinforcement Learning for Dynamic Configuration Allocation in Pre-Production Testing
- URL: http://arxiv.org/abs/2510.05147v1
- Date: Thu, 02 Oct 2025 05:12:28 GMT
- Title: Adaptive Reinforcement Learning for Dynamic Configuration Allocation in Pre-Production Testing
- Authors: Yu Zhu
- Abstract summary: We introduce a novel reinforcement learning framework that recasts configuration allocation as a sequential decision-making problem. Our method is the first to integrate Q-learning with a hybrid reward design that fuses simulated outcomes and real-time feedback.
- Score: 4.370892281528124
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ensuring reliability in modern software systems requires rigorous pre-production testing across highly heterogeneous and evolving environments. Because exhaustive evaluation is infeasible, practitioners must decide how to allocate limited testing resources across configurations where failure probabilities may drift over time. Existing combinatorial optimization approaches are static, ad hoc, and poorly suited to such non-stationary settings. We introduce a novel reinforcement learning (RL) framework that recasts configuration allocation as a sequential decision-making problem. Our method is the first to integrate Q-learning with a hybrid reward design that fuses simulated outcomes and real-time feedback, enabling both sample efficiency and robustness. In addition, we develop an adaptive online-offline training scheme that allows the agent to quickly track abrupt probability shifts while maintaining long-run stability. Extensive simulation studies demonstrate that our approach consistently outperforms static and optimization-based baselines, approaching oracle performance. This work establishes RL as a powerful new paradigm for adaptive configuration allocation, advancing beyond traditional methods and offering broad applicability to dynamic testing and resource scheduling domains.
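To make the described approach concrete, here is a minimal sketch of the core loop: tabular Q-learning over candidate configurations with a hybrid reward that fuses a simulator's failure estimate with live test feedback. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; all names, constants, and the drifting failure model are invented for the example.

```python
import random

N_CONFIGS = 5            # candidate test configurations (hypothetical)
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor
EPSILON = 0.2            # epsilon-greedy exploration rate
LAMBDA_ = 0.5            # weight fusing simulated vs. real-time reward

q = [0.0] * N_CONFIGS                     # Q-value per configuration
sim_fail_est = [0.3, 0.1, 0.5, 0.2, 0.4]  # simulator failure estimates (assumed)

def real_feedback(config):
    """Stand-in for a live test run: 1.0 if the run surfaces a failure."""
    drift = random.uniform(-0.05, 0.05)   # crude stand-in for non-stationarity
    return 1.0 if random.random() < sim_fail_est[config] + drift else 0.0

for step in range(10_000):
    # epsilon-greedy choice of which configuration gets the next test slot
    if random.random() < EPSILON:
        a = random.randrange(N_CONFIGS)
    else:
        a = max(range(N_CONFIGS), key=q.__getitem__)

    # hybrid reward: simulated outcome fused with real-time feedback
    r = LAMBDA_ * sim_fail_est[a] + (1.0 - LAMBDA_) * real_feedback(a)

    # standard Q-learning update (single-state view, bootstrapping on max Q)
    q[a] += ALPHA * (r + GAMMA * max(q) - q[a])

ranking = sorted(range(N_CONFIGS), key=q.__getitem__, reverse=True)
print("allocation preference (best first):", ranking)
```

The paper's adaptive online-offline training scheme would additionally alternate live updates like these with offline replay to track abrupt probability shifts; that machinery is omitted from this sketch.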
Related papers
- Stabilizing Test-Time Adaptation of High-Dimensional Simulation Surrogates via D-Optimal Statistics [23.824598203175455]
Test-Time Adaptation (TTA) can mitigate distribution shifts between training and deployment of machine learning surrogates. We propose a TTA framework based on storing maximally informative (D-optimal) statistics. Our method yields up to 7% out-of-distribution improvements at negligible computational cost.
arXiv Detail & Related papers (2026-02-17T18:55:18Z)
- Not All Preferences Are Created Equal: Stability-Aware and Gradient-Efficient Alignment for Reasoning Models [52.48582333951919]
We propose a dynamic framework designed to enhance alignment reliability by maximizing the Signal-to-Noise Ratio of policy updates. SAGE (Stability-Aware Gradient Efficiency) integrates a coarse-grained curriculum mechanism that refreshes candidate pools based on model competence. Experiments on multiple mathematical reasoning benchmarks demonstrate that SAGE significantly accelerates convergence and outperforms static baselines.
arXiv Detail & Related papers (2026-02-01T12:56:10Z)
- Online Matching via Reinforcement Learning: An Expert Policy Orchestration Strategy [5.913458789333235]
We propose a reinforcement learning (RL) approach that learns to orchestrate a set of such expert policies. We establish both expectation and high-probability regret guarantees and derive a novel finite-time bias bound for temporal-difference learning. Our results highlight how structured, adaptive learning can improve the modeling and management of complex resource allocation and decision-making processes.
arXiv Detail & Related papers (2025-10-07T23:26:16Z)
- Flexible Locomotion Learning with Diffusion Model Predictive Control [46.432397190673505]
We present Diffusion-MPC, which leverages a learned generative diffusion model as an approximate dynamics prior for planning. Our design enables strong test-time adaptability, allowing the planner to adjust to new reward specifications without retraining. We validate Diffusion-MPC in the real world, demonstrating strong locomotion and flexible adaptation.
arXiv Detail & Related papers (2025-10-05T14:51:13Z)
- Stabilizing Policy Gradients for Sample-Efficient Reinforcement Learning in LLM Reasoning [77.92320830700797]
Reinforcement Learning has played a central role in enabling the reasoning capabilities of Large Language Models. We propose a tractable computational framework that tracks and leverages curvature information during policy updates. The algorithm, Curvature-Aware Policy Optimization (CAPO), identifies samples that contribute to unstable updates and masks them out.
arXiv Detail & Related papers (2025-10-01T12:29:32Z)
- Steerable Adversarial Scenario Generation through Test-Time Preference Alignment [58.37104890690234]
Adversarial scenario generation is a cost-effective approach for safety assessment of autonomous driving systems. We introduce a new framework named Steerable Adversarial scenario GEnerator (SAGE). SAGE enables fine-grained test-time control over the trade-off between adversariality and realism without any retraining.
arXiv Detail & Related papers (2025-09-24T13:27:35Z)
- Simulation-Driven Reinforcement Learning in Queuing Network Routing Optimization [0.0]
This study focuses on the development of a simulation-driven reinforcement learning (RL) framework for optimizing routing decisions in complex queueing network systems. We propose a robust RL approach leveraging Deep Deterministic Policy Gradient (DDPG) combined with Dyna-style planning (Dyna-DDPG). Comprehensive experiments and rigorous evaluations demonstrate the framework's capability to rapidly learn effective routing policies.
arXiv Detail & Related papers (2025-07-24T20:32:47Z)
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
arXiv Detail & Related papers (2024-07-25T20:02:57Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time-step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- Environment Transformer and Policy Optimization for Model-Based Offline Reinforcement Learning [25.684201757101267]
We propose an uncertainty-aware sequence modeling architecture called Environment Transformer.
Benefiting from the accurate modeling of the transition dynamics and reward function, Environment Transformer can be combined with arbitrary planning, dynamic programming, or policy optimization algorithms for offline RL.
arXiv Detail & Related papers (2023-03-07T11:26:09Z)
- Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z)
- Conformalized Online Learning: Online Calibration Without a Holdout Set [10.420394952839242]
We develop a framework for constructing uncertainty sets with a valid coverage guarantee in an online setting.
We show how to construct valid intervals for a multiple-output regression problem; a generic online-calibration sketch in this spirit appears after this list.
arXiv Detail & Related papers (2022-05-18T17:41:37Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
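As referenced in the Conformalized Online Learning entry above, online calibration adjusts prediction intervals on a stream without a holdout set. The sketch below shows a generic adaptive-conformal-style loop in that spirit; it is not that paper's algorithm, and the synthetic stream, point predictor, and step size are all illustrative assumptions.

```python
import random

TARGET = 0.9    # desired long-run coverage (1 - alpha)
LR = 0.05       # step size for the online half-width update
radius = 1.0    # current interval half-width
covered = 0
T = 5000

for t in range(T):
    x = random.uniform(-3.0, 3.0)
    y = 2.0 * x + random.gauss(0.0, 1.0)  # synthetic data stream (assumed)
    y_hat = 2.0 * x                       # stand-in point predictor

    # issue the interval [y_hat - radius, y_hat + radius], then observe y
    miss = abs(y - y_hat) > radius
    covered += not miss

    # grow the radius after a miss, shrink it slightly after a hit, so the
    # long-run miss rate tracks 1 - TARGET without any holdout set
    radius = max(0.0, radius + LR * ((1.0 if miss else 0.0) - (1.0 - TARGET)))

print(f"empirical coverage: {covered / T:.3f}, final half-width: {radius:.2f}")
```

The update has zero expected drift exactly when the miss rate equals 1 - TARGET, which is what pulls empirical coverage toward the target as the stream evolves.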