Decomposition Polyhedra of Piecewise Linear Functions
- URL: http://arxiv.org/abs/2410.04907v1
- Date: Mon, 7 Oct 2024 10:48:36 GMT
- Title: Decomposition Polyhedra of Piecewise Linear Functions
- Authors: Marie-Charlotte Brandenburg, Moritz Grillo, Christoph Hertrich
- Abstract summary: We contribute to the frequently studied question of how to decompose a continuous piecewise linear (CPWL) function into a difference of two convex CPWL functions.
Every CPWL function has infinitely many such decompositions.
We apply our framework to find decompositions with as few linear pieces as possible.
- Score: 4.594829845106234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we contribute to the frequently studied question of how to decompose a continuous piecewise linear (CPWL) function into a difference of two convex CPWL functions. Every CPWL function has infinitely many such decompositions, but for applications in optimization and neural network theory, it is crucial to find decompositions with as few linear pieces as possible. This is a highly challenging problem, as we further demonstrate by disproving a recently proposed approach by Tran and Wang [Minimal representations of tropical rational functions. Algebraic Statistics, 15(1):27-59, 2024]. To make the problem more tractable, we propose to fix an underlying polyhedral complex determining the possible locus of nonlinearity. Under this assumption, we prove that the set of decompositions forms a polyhedron that arises as intersection of two translated cones. We prove that irreducible decompositions correspond to the bounded faces of this polyhedron and minimal solutions must be vertices. We then identify cases with a unique minimal decomposition, and illustrate how our insights have consequences in the theory of submodular functions. Finally, we improve upon previous constructions of neural networks for a given convex CPWL function and apply our framework to obtain results in the nonconvex case.
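The decomposition question is easy to make concrete in one dimension. Below is a minimal numerical sketch (our own illustration based on the abstract, not code from the paper; it assumes numpy): the nonconvex hat function f(x) = max(0, 1 - |x|) written as a difference f = g - h of two convex CPWL functions, followed by a second decomposition showing why infinitely many decompositions exist.

```python
import numpy as np

# Minimal sketch (illustrative only, not from the paper): decompose the
# nonconvex "hat" function f(x) = max(0, 1 - |x|) as f = g - h with g, h convex CPWL.

def f(x):
    # nonconvex CPWL target with breakpoints at -1, 0, 1 (four linear pieces)
    return np.maximum(0.0, 1.0 - np.abs(x))

def g(x):
    # convex CPWL part: 1 + max(0, |x| - 1), three linear pieces
    return 1.0 + np.maximum(0.0, np.abs(x) - 1.0)

def h(x):
    # convex CPWL part: |x|, two linear pieces
    return np.abs(x)

x = np.linspace(-3.0, 3.0, 1001)
assert np.allclose(f(x), g(x) - h(x))  # f = g - h on the whole grid

# Non-uniqueness: adding the same convex CPWL function (here |x| itself) to
# both parts yields another valid decomposition, generally with more pieces.
assert np.allclose(f(x), (g(x) + h(x)) - (h(x) + h(x)))
```

Here f has four linear pieces while this particular pair (g, h) uses five in total; choosing the decomposition with as few linear pieces as possible, over a fixed polyhedral complex, is the minimality question the abstract describes.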
Related papers
- The polarization hierarchy for polynomial optimization over convex bodies, with applications to nonnegative matrix rank [0.6963971634605796]
We construct a convergent family of outer approximations for the problem of optimizing functions over convex bodies subject to constraints.
A numerical implementation of the third level of the hierarchy is shown to give rise to a very tight approximation for this problem.
arXiv Detail & Related papers (2024-06-13T18:00:09Z) - Bridging the Gap Between General and Down-Closed Convex Sets in Submodular Maximization [8.225819874406238]
Mualem [mualem23re] shows that this approach cannot smoothly interpolate between down-closed and non-down-closed constraints.
In this work, we suggest novel offline and online algorithms based on a natural decomposition of the body into two distinct convex bodies.
We also empirically demonstrate the superiority of our proposed algorithms across three offline and two online applications.
arXiv Detail & Related papers (2024-01-17T14:56:42Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks [53.95175206863992]
We study the type of solutions to which gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss.
We prove that although shallow ReLU networks are universal approximators, stable shallow networks are not.
arXiv Detail & Related papers (2023-06-30T09:17:39Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Improved Bounds on Neural Complexity for Representing Piecewise Linear Functions [23.96475985648844]
A deep neural network using rectified linear units represents a continuous piecewise linear (CPWL) function (a minimal ReLU sketch is given after this list).
Recent results estimated that the number of neurons needed to exactly represent any CPWL function grows exponentially with the number of pieces or exponentially in terms of the factorial of the number of distinct linear components.
In this paper, we propose much tighter bounds and establish a polynomial-time algorithm to find a network satisfying these bounds for any given CPWL function.
arXiv Detail & Related papers (2022-10-13T17:58:41Z) - Sparsest Univariate Learning Models Under Lipschitz Constraint [31.28451181040038]
We propose continuous-domain formulations for one-dimensional regression problems.
We control the Lipschitz constant explicitly using a user-defined upper-bound.
We show that both problems admit global minimizers that are continuous and piecewise-linear.
arXiv Detail & Related papers (2021-12-27T07:03:43Z) - Bilinear Classes: A Structural Framework for Provable Generalization in RL [119.42509700822484]
Bilinear Classes is a new structural framework which permits generalization in reinforcement learning.
The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable.
Our main result provides an RL algorithm which has polynomial sample complexity for Bilinear Classes.
arXiv Detail & Related papers (2021-03-19T16:34:20Z) - Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Ideal formulations for constrained convex optimization problems with indicator variables [2.578242050187029]
We study the convexification of a class of convex optimization problems with indicator variables and constraints on the indicators.
Unlike most of the previous work on convexification of sparse regression problems, we simultaneously consider the nonlinear non-separable objective, indicator variables, and constraints.
We derive ideal convexifications for problems with hierarchy, multi-collinearity, and sparsity constraints.
arXiv Detail & Related papers (2020-06-30T21:07:10Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
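As a companion to the "Improved Bounds on Neural Complexity" entry above, the following minimal sketch (again our own illustration, not taken from any of the listed papers, and assuming numpy) shows the hat function from the earlier sketch realized exactly by a one-hidden-layer ReLU network with three hidden neurons.

```python
import numpy as np

# Minimal sketch (illustrative only): the hat function f(x) = max(0, 1 - |x|)
# realized exactly by a one-hidden-layer ReLU network with three neurons,
#   f(x) = relu(x + 1) - 2 * relu(x) + relu(x - 1).

def relu(z):
    return np.maximum(0.0, z)

def relu_net(x):
    # hidden layer: weights 1, biases (+1, 0, -1); output weights (1, -2, 1)
    return relu(x + 1.0) - 2.0 * relu(x) + relu(x - 1.0)

x = np.linspace(-3.0, 3.0, 1001)
assert np.allclose(relu_net(x), np.maximum(0.0, 1.0 - np.abs(x)))
```

Since each ReLU term is convex, grouping the positively and negatively weighted terms (g(x) = relu(x+1) + relu(x-1), h(x) = 2*relu(x)) gives yet another difference-of-convex decomposition of the same f, different from the one sketched under the main abstract.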