Ideal formulations for constrained convex optimization problems with
indicator variables
- URL: http://arxiv.org/abs/2007.00107v2
- Date: Wed, 16 Jun 2021 02:34:08 GMT
- Title: Ideal formulations for constrained convex optimization problems with
indicator variables
- Authors: Linchuan Wei, Andres Gomez, Simge Kucukyavuz
- Abstract summary: We study the convexification of a class of convex optimization problems with indicator variables and constraints on the indicators.
Unlike most of the previous work on convexification of sparse regression problems, we simultaneously consider the nonlinear non-separable objective, indicator variables, and constraints.
We derive ideal convexifications for problems with hierarchy, multi-collinearity, and sparsity constraints.
- Score: 2.578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by modern regression applications, in this paper, we study the
convexification of a class of convex optimization problems with indicator
variables and combinatorial constraints on the indicators. Unlike most of the
previous work on convexification of sparse regression problems, we
simultaneously consider the nonlinear non-separable objective, indicator
variables, and combinatorial constraints. Specifically, we give the convex hull
description of the epigraph of the composition of a one-dimensional convex
function and an affine function under arbitrary combinatorial constraints. As
special cases of this result, we derive ideal convexifications for problems
with hierarchy, multi-collinearity, and sparsity constraints. Moreover, we also
give a short proof that for a separable objective function, the perspective
reformulation is ideal independent of the constraints of the problem. Our
computational experiments with regression problems under hierarchy constraints
on real datasets demonstrate the potential of the proposed approach in
improving the relaxation quality without significant computational overhead.
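To make this concrete, below is a minimal sketch (not the authors' code) of the continuous perspective relaxation of a ridge-penalized regression under a chain hierarchy constraint, written with cvxpy; the data, the hierarchy pattern, and the weight lam are placeholder assumptions.

```python
# Hedged sketch: perspective relaxation of sparse ridge regression with a
# chain hierarchy constraint (placeholder data; not the paper's code).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 0.1  # per-coordinate ridge weight (an assumption)

beta = cp.Variable(p)  # regression coefficients
z = cp.Variable(p)     # indicators, relaxed from {0,1} to [0,1]
t = cp.Variable(p)     # epigraph variables for the perspective terms

constraints = [z >= 0, z <= 1, cp.sum(z) <= 3]  # sparsity budget on indicators
# Perspective of beta_i^2: t_i >= beta_i^2 / z_i, a rotated second-order cone.
constraints += [cp.quad_over_lin(beta[i], z[i]) <= t[i] for i in range(p)]
# Chain hierarchy: feature i+1 may be selected only if feature i is.
constraints += [z[i + 1] <= z[i] for i in range(p - 1)]

prob = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ beta) + lam * cp.sum(t)),
                  constraints)
prob.solve()
print(round(prob.value, 4), np.round(z.value, 3))
```

This sketch only shows the basic perspective relaxation; the paper's contribution is an ideal (convex hull) description of such constrained sets, which strengthens this bound without significant computational overhead.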
Related papers
- Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization [34.628967272528044]
This paper investigates projection-free algorithms for constrained multi-level compositional optimization problems.
Using a stage-wise adaptation, we obtain complexity bounds for convex and strongly convex objectives.
arXiv Detail & Related papers (2024-06-06T06:56:56Z)
- Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
arXiv Detail & Related papers (2024-02-14T10:48:00Z)
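As background for this entry, here is a minimal sketch of the generic projected extragradient iteration; the paper's low-rank SDP variant (which replaces exact projections with low-rank ones) is not reproduced here, and the operator, set, and step size below are placeholders.

```python
# Hedged sketch: one projected extragradient step for an operator F over a
# convex set C given by a projection oracle.
import numpy as np

def extragradient_step(x, F, project, eta):
    x_half = project(x - eta * F(x))     # leading (extrapolation) step
    return project(x - eta * F(x_half))  # correction step reuses the base point

# Toy example: minimize 0.5*||x||^2 over the box [1, 2]^3, so F is the gradient.
F = lambda x: x
project = lambda v: np.clip(v, 1.0, 2.0)
x = np.full(3, 5.0)
for _ in range(200):
    x = extragradient_step(x, F, project, eta=0.5)
print(x)  # approaches the box point closest to the unconstrained minimizer: all ones
```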
- Constrained Optimization of Rank-One Functions with Indicator Variables [0.0]
Optimization problems involving a rank-one convex function over constraints modeling restrictions on the support of the decision variables arise in various machine learning applications.
We propose a constructive approach that exploits a hidden conic structure induced by perspective functions.
This enables us to systematically derive perspective formulations for the convex hull descriptions of such sets with nonlinear separable or non-separable objective functions, sign constraints on continuous variables, and constraints on indicator variables.
arXiv Detail & Related papers (2023-03-31T15:51:56Z)
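For reference, the perspective function behind this hidden conic structure has the following standard closed form (stated here for a convex $f$ whose recession function vanishes only at the origin, e.g. a convex quadratic):

```latex
\[
  f^{\pi}(x, z) =
  \begin{cases}
    z\, f(x/z) & \text{if } z > 0, \\
    0          & \text{if } z = 0 \text{ and } x = 0, \\
    +\infty    & \text{otherwise.}
  \end{cases}
\]
% f^{\pi} is jointly convex in (x, z), and its epigraph is the closure of the
% cone generated by the epigraph of f; this is the conic structure alluded to.
```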
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
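As a hedged illustration (one classical instance of this idea, a generalized conditional gradient step, not necessarily the paper's exact scheme): for $F(x) = f(x) + g(x)$ with $f$ smooth and $g$ convex but possibly non-differentiable, minimized over a convex compact set $X$, only the smooth part is linearized:

```latex
\begin{align*}
  y_k &\in \operatorname*{arg\,min}_{y \in X}\;
        \langle \nabla f(x_k),\, y \rangle + g(y), \\
  x_{k+1} &= (1 - \gamma_k)\, x_k + \gamma_k\, y_k,
  \qquad \gamma_k \in (0, 1].
\end{align*}
% The non-smooth part g enters the subproblem exactly, so no subgradient of g
% is ever required.
```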
- On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds [6.3291148076593355]
We study the expressibility of solution functions of linear programming (LP) and quadratic programming (QP) and their universal approximation power.
Our results provide a framework for rigorous analysis of the properties of such solution functions, with implications for algorithmic design and performance guarantees.
arXiv Detail & Related papers (2022-12-02T17:16:04Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient method which provably converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization [25.73397307080647]
We present a new model for general convex or nonconvex machine learning problems with sparsity constraints.
We propose an algorithm that solves a sequence of subproblems with gradually relaxed constraint levels.
We demonstrate the effectiveness of our approach on large-scale numerical problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
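For background, a minimal sketch of the classical deterministic conditional gradient (Frank-Wolfe) step that such methods build on; the paper's stochastic constraint sampling is not reproduced, and the simplex example is a placeholder.

```python
# Hedged sketch: classical Frank-Wolfe with a linear minimization oracle (LMO),
# so no projection onto the feasible set is ever needed.
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    x = x0
    for k in range(iters):
        s = lmo(grad(x))                 # argmin_{s in C} <grad, s>
        gamma = 2.0 / (k + 2.0)          # classical open-loop step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy example: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.2, 0.5, 0.9])
grad = lambda x: 2.0 * (x - c)
def lmo(g):  # simplex LMO: all mass on the coordinate with the smallest gradient
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s
print(np.round(frank_wolfe(grad, lmo, np.ones(3) / 3), 3))  # near the projection of c
```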
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
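To fix ideas, a generic, hedged sketch of synchronous gradient descent on a bilinear least-squares objective; CoGD's projection-based coordination of the two gradients is not reproduced here, and all data are placeholders.

```python
# Hedged sketch: both blocks of f(A, x) = 0.5*||A @ x - y||^2 are updated in
# the same iteration from a shared residual, i.e. synchronously.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(4)
eta = 0.05
for _ in range(500):
    r = A @ x - y            # shared residual couples the two variables
    grad_A = np.outer(r, x)  # partial gradient with respect to A
    grad_x = A.T @ r         # partial gradient with respect to x
    A -= eta * grad_A        # synchronous update of both blocks
    x -= eta * grad_x
print(0.5 * np.sum((A @ x - y) ** 2))  # objective after training, near zero
```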
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)