Capacity-Constrained Continual Learning
- URL: http://arxiv.org/abs/2507.21479v1
- Date: Tue, 29 Jul 2025 03:47:22 GMT
- Title: Capacity-Constrained Continual Learning
- Authors: Zheng Wen, Doina Precup, Benjamin Van Roy, Satinder Singh
- Abstract summary: This paper studies how agents with limited capacity should allocate their resources for optimal performance. We derive a solution to the capacity-constrained linear-quadratic-Gaussian sequential prediction problem. For problems that can be decomposed into a set of sub-problems, we also demonstrate how to optimally allocate capacity across these sub-problems in the steady state.
- Score: 64.70016365121081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Any agents we can possibly build are subject to capacity constraints, as memory and compute resources are inherently finite. However, comparatively little attention has been dedicated to understanding how agents with limited capacity should allocate their resources for optimal performance. The goal of this paper is to shed some light on this question by studying a simple yet relevant continual learning problem: the capacity-constrained linear-quadratic-Gaussian (LQG) sequential prediction problem. We derive a solution to this problem under appropriate technical conditions. Moreover, for problems that can be decomposed into a set of sub-problems, we also demonstrate how to optimally allocate capacity across these sub-problems in the steady state. We view the results of this paper as a first step in the systematic theoretical study of learning under capacity constraints.
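For concreteness, here is a minimal statement of the standard (unconstrained) LQG sequential prediction setup; the symbols A, C, W, V and this exact formulation are illustrative assumptions, and the paper's capacity-constrained variant additionally limits the agent's internal state:

```latex
% A standard LQG sequential prediction setup (illustrative sketch; the
% paper's exact formulation and its capacity constraint may differ).
\begin{align*}
  \theta_{t+1} &= A\,\theta_t + w_t, & w_t &\sim \mathcal{N}(0, W) && \text{(latent linear dynamics)} \\
  y_t &= C\,\theta_t + v_t, & v_t &\sim \mathcal{N}(0, V) && \text{(noisy observations)}
\end{align*}
% The agent emits a prediction \hat{y}_t of y_t and seeks to minimize the
% steady-state mean squared prediction error
\[
  \limsup_{T \to \infty} \; \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\!\left[ \lVert y_t - \hat{y}_t \rVert^2 \right],
\]
% where, under a capacity constraint, \hat{y}_t may depend on the history
% y_{1:t-1} only through a bounded internal agent state.
```

Without the constraint, the optimal predictor in this setting is the Kalman filter; the capacity constraint forces a trade-off between prediction accuracy and the resources devoted to the agent's internal state.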
Related papers
- Situational-Constrained Sequential Resources Allocation via Reinforcement Learning [17.8234166913582]
Sequential Resource Allocation with situational constraints presents a significant challenge in real-world applications. This paper introduces a novel framework, SCRL, to address this problem. We develop a new algorithm that dynamically penalizes constraint violations.
arXiv Detail & Related papers (2025-06-17T02:40:49Z)
- Convergences for Minimax Optimization Problems over Infinite-Dimensional Spaces Towards Stability in Adversarial Training [0.6008132390640294]
Training neural networks that require adversarial optimization, such as generative adversarial networks (GANs), suffers from instability.
In this study, we tackle this problem theoretically through a functional analysis.
arXiv Detail & Related papers (2023-12-02T01:15:57Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
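As standard background for this entry (not taken from the paper itself), first-order stochastic dominance is the usual way of comparing uncertain quantities as constraints; the notation below is assumed:

```latex
% First-order stochastic dominance (standard definition; notation assumed).
\[
  X \succeq_{1} Y
  \;\iff\;
  F_X(u) \le F_Y(u) \;\; \forall u \in \mathbb{R}
  \;\iff\;
  \mathbb{E}[\phi(X)] \ge \mathbb{E}[\phi(Y)] \;\; \text{for all nondecreasing } \phi,
\]
% where F_X, F_Y are cumulative distribution functions. Enforcing this as a
% constraint quantifies over all u (equivalently all nondecreasing phi),
% which is plausibly the intractability the Lagrangian recasting above
% is designed to bypass.
```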
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- Exact Asymptotics for Linear Quadratic Adaptive Control [6.287145010885044]
We study the simplest non-bandit reinforcement learning problem: linear quadratic adaptive control (LQAC).
We derive expressions for the regret, estimation error, and prediction error of a stepwise-updating LQAC algorithm.
In simulations on both stable and unstable systems, we find that our theory also describes the algorithm's finite-sample behavior remarkably well.
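For reference, regret in adaptive LQ control is conventionally defined as below; the quadratic cost structure and symbols are assumptions for illustration, not necessarily the paper's exact setup:

```latex
% Regret of an adaptive LQ controller over horizon T (standard notion;
% Q, R and the cost structure are assumed here).
\[
  \mathrm{Regret}(T) \;=\; \sum_{t=0}^{T-1} \left( x_t^{\top} Q\, x_t + u_t^{\top} R\, u_t \right) \;-\; T\, J_{*},
\]
% where x_t, u_t are the states and controls produced by the adaptive
% algorithm and J_* is the optimal steady-state average cost attainable
% with known system dynamics.
```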
arXiv Detail & Related papers (2020-11-02T22:43:30Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
- The empirical duality gap of constrained statistical learning [115.23598260228587]
We study constrained statistical learning problems, the unconstrained version of which is at the core of virtually all modern information processing.
We propose to tackle the constrained statistical learning problem, overcoming its infinite dimensionality, unknown distributions, and constraints by leveraging finite-dimensional parameterizations, sample averages, and duality theory.
We demonstrate the effectiveness and usefulness of this constrained formulation in a fair learning application.
arXiv Detail & Related papers (2020-02-12T19:12:29Z)
- Lagrangian Duality for Constrained Deep Learning [51.2216183850835]
This paper explores the potential of Lagrangian duality for learning applications that feature complex constraints.
In energy domains, the combination of Lagrangian duality and deep learning can be used to obtain state-of-the-art results.
In transprecision computing, Lagrangian duality can complement deep learning to impose monotonicity constraints on the predictor.
arXiv Detail & Related papers (2020-01-26T03:38:43Z)
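Several entries above (this one, SCRL, and the PAC constrained-learning line of work) revolve around the same primal-dual idea. Below is a minimal generic sketch of Lagrangian-relaxation training in Python; the toy least-squares problem, the constraint, and every name are assumptions for illustration, not any paper's implementation:

```python
# Generic primal-dual (Lagrangian) training loop on a toy problem:
#   minimize ||Xw - y||^2 / n   subject to   mean(Xw) <= c.
# Illustrative sketch only; not the method of any paper listed above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=100)
c = 0.0

w = np.zeros(5)            # primal variable (model weights)
lam = 0.0                  # dual variable (Lagrange multiplier), kept >= 0
eta_w, eta_lam = 1e-2, 1e-2

for _ in range(2000):
    residual = X @ w - y
    g = (X @ w).mean() - c                     # constraint value g(w) <= 0
    # Gradient of L(w, lam) = ||Xw - y||^2 / n + lam * g(w) with respect to w
    grad_w = 2.0 * X.T @ residual / len(y) + lam * X.mean(axis=0)
    w -= eta_w * grad_w                        # primal descent step
    lam = max(0.0, lam + eta_lam * g)          # projected dual ascent step

print("constraint value:", (X @ w).mean() - c, "multiplier:", lam)
```

The alternation between a descent step on the weights and a projected ascent step on the multiplier is the basic pattern that Lagrangian-duality approaches to constrained learning build on; the dual variable grows while the constraint is violated and shrinks toward zero once it is satisfied.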