Progressively Sampled Equality-Constrained Optimization
- URL: http://arxiv.org/abs/2510.00417v1
- Date: Wed, 01 Oct 2025 01:58:17 GMT
- Title: Progressively Sampled Equality-Constrained Optimization
- Authors: Frank E. Curtis, Lingjun Guo, Daniel P. Robinson
- Abstract summary: The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms. The results of numerical experiments with a set of test problems demonstrate that the proposed approach can be effective in practice.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which the sample set grows progressively. Under assumptions about the constraint functions and their first- and second-order derivatives that are reasonable in some real-world settings of interest, it is shown that -- with a sufficiently large initial sample -- solving a sequence of problems defined through progressive sampling yields a better worst-case sample complexity bound compared to solving a single problem with a full set of samples. The results of numerical experiments with a set of test problems demonstrate that the proposed approach can be effective in practice.
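The progressive-sampling idea described in the abstract can be illustrated with a minimal sketch: solve a sequence of subproblems whose constraint is the average over a growing sample of constraint-function terms, warm-starting each solve from the previous solution. Everything below is an assumption for demonstration, not the paper's algorithm: the quadratic objective, the linear constraint terms, the geometric sample-growth schedule, and the use of SciPy's SLSQP solver for each subproblem are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical problem data: minimize ||x||^2 subject to the average of
# N sampled linear constraint terms a_i^T x - b_i being zero.
N, d = 1000, 5
A = rng.normal(size=(N, d))
b = rng.normal(size=N)

def objective(x):
    # Simple smooth objective for illustration.
    return x @ x

def sampled_constraint(x, idx):
    # Average of the sampled constraint-function terms c_i(x) = a_i^T x - b_i.
    return np.mean(A[idx] @ x - b[idx])

# Progressive sampling: solve a sequence of equality-constrained
# subproblems over a growing sample set, warm-starting each solve.
x = np.zeros(d)
sample_size = 50  # illustrative initial sample size
while True:
    m = min(sample_size, N)
    idx = np.arange(m)  # first m constraint terms as the current sample
    res = minimize(
        objective, x, method="SLSQP",
        constraints=[{"type": "eq",
                      "fun": sampled_constraint, "args": (idx,)}],
    )
    x = res.x  # warm start for the next, larger subproblem
    if m == N:
        break
    sample_size *= 2

print(np.round(x, 4))
```

Each subproblem is cheap because it touches only a fraction of the constraint terms, and the warm start means later (larger) subproblems begin close to their solutions; the paper's analysis makes this intuition precise via a worst-case sample-complexity bound.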
Related papers
- Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex. Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs. We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z)
- Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces [47.907236421762626]
This work studies discrete-time discounted Markov decision processes with continuous state and action spaces.
We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem.
arXiv Detail & Related papers (2024-05-24T12:53:07Z)
- Primal Methods for Variational Inequality Problems with Functional Constraints [25.261426717550293]
We propose a primal method, termed Constrained Gradient Method (CGM), for addressing functional constrained variational inequality problems. Our algorithms match the complexity of projection-based methods in terms of operator queries for both monotone and strongly monotone settings.
arXiv Detail & Related papers (2024-03-19T16:03:03Z)
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z)
- A Sequential Deep Learning Algorithm for Sampled Mixed-integer Optimisation Problems [0.3867363075280544]
We introduce and analyse two efficient algorithms for mixed-integer optimisation problems.
We show that both algorithms exhibit finite-time convergence towards the optimal solution.
We establish quantitatively the efficacy of these algorithms by means of three numerical tests.
arXiv Detail & Related papers (2023-01-25T17:10:52Z)
- A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization [2.3814052021083354]
It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed.
A high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived.
arXiv Detail & Related papers (2023-01-01T21:46:50Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.