$γ$-weakly $θ$-up-concavity: Linearizable Non-Convex Optimization with Applications to DR-Submodular and OSS Functions
- URL: http://arxiv.org/abs/2602.13506v1
- Date: Fri, 13 Feb 2026 22:34:44 GMT
- Title: $γ$-weakly $θ$-up-concavity: Linearizable Non-Convex Optimization with Applications to DR-Submodular and OSS Functions
- Authors: Mohammad Pedramfar, Vaneet Aggarwal
- Abstract summary: We introduce and study $γ$-weakly $θ$-up-concavity, a novel first-order condition that characterizes a broad class of such functions. Our framework recovers the optimal approximation coefficient for DR-submodular maximization and significantly improves existing approximation coefficients.
- Score: 52.031993908548294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimizing monotone non-convex functions is a fundamental challenge across machine learning and combinatorial optimization. We introduce and study $γ$-weakly $θ$-up-concavity, a novel first-order condition that characterizes a broad class of such functions. This condition provides a powerful unifying framework, strictly generalizing both DR-submodular functions and One-Sided Smooth (OSS) functions. Our central theoretical contribution demonstrates that $γ$-weakly $θ$-up-concave functions are upper-linearizable: for any feasible point, we can construct a linear surrogate whose gains provably approximate the original non-linear objective. This approximation holds up to a constant factor, namely the approximation coefficient, dependent solely on $γ$, $θ$, and the geometry of the feasible set. This linearizability yields immediate and unified approximation guarantees for a wide range of problems. Specifically, we obtain unified approximation guarantees for offline optimization as well as static and dynamic regret bounds in online settings via standard reductions to linear optimization. Moreover, our framework recovers the optimal approximation coefficient for DR-submodular maximization and significantly improves existing approximation coefficients for OSS optimization, particularly over matroid constraints.
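To make the "reduction to linear optimization" mentioned in the abstract concrete, the sketch below shows a generic Frank-Wolfe-style ascent that only ever maximizes a linear surrogate (here, the gradient) over the feasible set. This is an illustrative assumption of how such a reduction can be used, not the paper's algorithm or its approximation guarantee; the names `objective_grad` and `linear_maximization_oracle` are hypothetical placeholders.

```python
# Minimal sketch (illustrative, not the paper's method): at each iterate x_t, a
# linear surrogate built from a gradient-like oracle is maximized over the
# feasible set K, and the iterate moves toward the maximizer.
import numpy as np

def linear_surrogate_ascent(objective_grad, linear_maximization_oracle,
                            x0, num_steps=100):
    """Frank-Wolfe-style ascent using only linear maximization over K.

    objective_grad(x)             -> surrogate direction g(x) (e.g. the gradient)
    linear_maximization_oracle(g) -> argmax_{v in K} <g, v>
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        g = objective_grad(x)              # build the linear surrogate at x
        v = linear_maximization_oracle(g)  # maximize the surrogate over K
        x = x + (1.0 / num_steps) * (v - x)  # move toward the surrogate maximizer
    return x

# Toy usage: maximize the concave function sum(x) - 0.5*||x||^2 over the box
# [0, 1]^d, where linear maximization over the box has a closed form.
if __name__ == "__main__":
    d = 5
    grad = lambda x: 1.0 - x              # gradient of sum(x) - 0.5*||x||^2
    lmo = lambda g: (g > 0).astype(float)  # argmax of <g, v> over the unit box
    print(linear_surrogate_ascent(grad, lmo, np.zeros(d)))
```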
Related papers
- Gradient-free stochastic optimization for additive models [50.57026826740147]
We address the problem of zero-order optimization from noisy observations for an objective function satisfying the Polyak-Lojasiewicz or the strong convexity condition. We assume that the objective function has an additive structure and satisfies a higher-order smoothness property, characterized by the Hölder family of functions.
arXiv Detail & Related papers (2025-03-03T23:39:08Z) - A Novel Unified Parametric Assumption for Nonconvex Optimization [53.943470475510196]
Nonconvex optimization is central to machine learning, but the general nonconvex framework yields convergence guarantees that are too pessimistic compared to practice; convexity, on the other hand, enables strong guarantees but is of limited applicability. We introduce a novel unified parametric assumption for nonconvex algorithms.
arXiv Detail & Related papers (2025-02-17T21:25:31Z) - Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present a $\textbf{MA-OSMA}$ algorithm to transfer the discrete submodular problem into a continuous optimization problem. We also introduce a projection-free $\textbf{MA-OSEA}$ algorithm, which effectively utilizes the KL divergence by mixing a uniform distribution. Our algorithms significantly improve the $(\frac{1}{1+c})$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z) - Decentralized Projection-free Online Upper-Linearizable Optimization with Applications to DR-Submodular Optimization [29.705337940879705]
We introduce a novel framework for decentralized projection-free optimization. We leverage decentralized optimization techniques with the flexibility of upper-linearizable function frameworks.
arXiv Detail & Related papers (2025-01-30T07:28:34Z) - From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization [33.38582292895673]
This paper introduces the notion of upper-linearizable/quadratizable functions, which generalizes concavity and DR-submodularity in various settings, including monotone and non-monotone cases.
A general meta-algorithm converts algorithms for linear/quadratic maximization into ones that optimize upper-linearizable/quadratizable functions.
arXiv Detail & Related papers (2024-04-27T06:19:30Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing smooth functions of the form $F(x) = G(x) + C(x)$ over a convex body, where $G$ is DR-submodular and $C$ is concave.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - Non-local Optimization: Imposing Structure on Optimization Problems by Relaxation [0.0]
In evolutionary computation and reinforcement learning, the optimization of a function $f: \Omega \to \mathbb{R}$ is often addressed through optimizing a so-called relaxation $\theta \in \Theta \mapsto \mathbb{E}_\theta(f)$ of $f$.
We investigate the structure of such relaxations by means of measure theory and Fourier analysis, enabling us to shed light on the success of many associated optimization methods.
arXiv Detail & Related papers (2020-11-11T20:45:47Z) - Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions [73.33961743410876]
A soft-max function has two main efficiency measures: approximation and smoothness.
We identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness.
This leads to novel soft-max functions, each of which is optimal for a different application.
arXiv Detail & Related papers (2020-10-22T05:19:58Z) - Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization [16.0251555430107]
A framework based on iterative coordinate minimization (CM) is developed for convex optimization.
We establish the optimal precision control and the resulting order-optimal regret performance.
The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization.
arXiv Detail & Related papers (2020-03-11T18:42:40Z)