Approximation Depth of Convex Polytopes
- URL: http://arxiv.org/abs/2507.07779v1
- Date: Thu, 10 Jul 2025 13:58:55 GMT
- Title: Approximation Depth of Convex Polytopes
- Authors: Egor Bakaev, Florestan Brunck, Amir Yehudayoff
- Abstract summary: We study the ability to approximate a target polytope by polytopes of a given depth. Our main results imply that simplices can only be ``trivially approximated''.
- Score: 1.6180992915701702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study approximations of polytopes in the standard model for computing polytopes using Minkowski sums and (convex hulls of) unions. Specifically, we study the ability to approximate a target polytope by polytopes of a given depth. Our main results imply that simplices can only be ``trivially approximated''. On the way, we obtain a characterization of simplices as the only ``outer additive'' convex bodies.
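For concreteness, here is a minimal Python sketch (an illustration only, not the paper's code) of the two operations in this computation model: the Minkowski sum of two polytopes and the convex hull of their union, with 2D polytopes represented by their vertex sets.

```python
# A minimal sketch of the Minkowski-sum / hull-of-union model,
# with polytopes in the plane given by their vertex sets.
import numpy as np
from scipy.spatial import ConvexHull

def hull_vertices(points):
    """Vertices of the convex hull of a full-dimensional 2D point set."""
    pts = np.asarray(points, dtype=float)
    return pts[ConvexHull(pts).vertices]

def minkowski_sum(P, Q):
    """Vertex set of P + Q = {p + q : p in P, q in Q}."""
    sums = np.array([p + q for p in P for q in Q])
    return hull_vertices(sums)

def hull_of_union(P, Q):
    """Vertex set of conv(P U Q)."""
    return hull_vertices(np.vstack([P, Q]))

# Combining two segments with a Minkowski sum yields the unit square;
# nesting such operations builds deeper polytopes in this model.
horizontal = np.array([[0.0, 0.0], [1.0, 0.0]])
vertical = np.array([[0.0, 0.0], [0.0, 1.0]])
square = minkowski_sum(horizontal, vertical)
```

The exact depth accounting (what counts as depth 0, and how much each sum or union adds) follows the paper's conventions and is not reproduced here.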
Related papers
- Linear Convergence of the Frank-Wolfe Algorithm over Product Polytopes [20.7580525897407]
We study the linear convergence of Frank-Wolfe algorithms over product polytopes. For convex objectives that are $\mu$-Polyak-Łojasiewicz, we show linear convergence rates quantified in terms of the resulting condition numbers.
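As an illustration of the setting (a hedged sketch under standard assumptions, not the paper's code), Frank-Wolfe over a product of two probability simplices: the linear minimization oracle over a product polytope factorizes into independent per-block oracles.

```python
# Frank-Wolfe over a product of two simplices; the LMO factorizes.
import numpy as np

def lmo_simplex(grad):
    """argmin over the simplex of <grad, s> is a vertex: one-hot at argmin."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def frank_wolfe_product(grad_f, x_blocks, steps=200):
    for t in range(steps):
        grads = grad_f(x_blocks)
        # LMO over the product polytope = concatenation of per-block LMOs.
        s_blocks = [lmo_simplex(g) for g in grads]
        gamma = 2.0 / (t + 2.0)  # standard open-loop step size
        x_blocks = [(1 - gamma) * x + gamma * s
                    for x, s in zip(x_blocks, s_blocks)]
    return x_blocks

# Example: minimize ||x - c||^2 + ||y - d||^2 over simplex x simplex.
c, d = np.array([0.2, 0.5, 0.3]), np.array([0.7, 0.1, 0.2])
grad = lambda xy: [2 * (xy[0] - c), 2 * (xy[1] - d)]
x, y = frank_wolfe_product(grad, [np.ones(3) / 3, np.ones(3) / 3])
```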
arXiv Detail & Related papers (2025-05-16T13:50:55Z)
- Flows on convex polytopes [0.0]
We show that any full-dimensional polytope is homeomorphic to a unit ball, and our approach harnesses flows defined on the ball, mapping them back to the original polytope. Our experiments take inspiration from applications in metabolic flux analysis and demonstrate that our methods achieve competitive density estimation and sampling accuracy, as well as fast training and inference times.
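The ball-to-polytope transport the summary describes rests on the standard radial homeomorphism. Below is a hedged sketch of that map for a bounded polytope {x : Ax <= b} with b > 0, i.e. with the origin in its interior (an assumption of this sketch; the paper's flow construction itself is not reproduced).

```python
# Radial homeomorphism from the unit ball onto a bounded polytope
# {x : Ax <= b} that contains the origin in its interior.
import numpy as np

def radial_to_polytope(z, A, b, eps=1e-12):
    """Map a point z of the unit ball into the polytope, radially."""
    nz = np.linalg.norm(z)
    if nz < eps:
        return np.zeros_like(z)
    u = z / nz
    rates = A @ u
    # Distance from the origin to the polytope boundary along direction u.
    r = np.min(b[rates > eps] / rates[rates > eps])
    return nz * r * u  # rescale the ball radius to the polytope radius

# Square centered at the origin: |x| <= 0.5, |y| <= 0.5.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.full(4, 0.5)
print(radial_to_polytope(np.array([1.0, 0.0]), A, b))  # -> [0.5, 0.]
```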
arXiv Detail & Related papers (2025-03-13T10:15:40Z)
- Neural Polytopes [0.0]
We find that simple neural networks with ReLU activation generate polytopes as an approximation of a unit sphere in various dimensions.
For a variety of activation functions, a generalization of polytopes is obtained, which we call neural polytopes.
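A hedged illustration of the phenomenon (not the paper's experiments): the Euclidean norm can be approximated by a maximum of finitely many linear functionals, which is exactly the kind of piecewise-linear function a ReLU network computes, and the unit ball of that surrogate is a polytope (here, a regular k-gon).

```python
# Piecewise-linear surrogate for the Euclidean norm in 2D:
# max over k unit directions of <u_k, x>; its unit ball is a k-gon.
import numpy as np

k = 8
thetas = 2 * np.pi * np.arange(k) / k
U = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # k unit directions

def pl_norm(x):
    """Max of linear functionals; approximates ||x||_2 as k grows."""
    return np.max(U @ x)

x = np.array([0.6, 0.8])              # a point on the unit circle
print(pl_norm(x), np.linalg.norm(x))  # ~0.99 vs 1.0 for k = 8
```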
arXiv Detail & Related papers (2023-07-03T03:00:22Z)
- Deep ReLU Networks Have Surprisingly Simple Polytopes [30.417072051872726]
A ReLU network is a piecewise linear function over polytopes.
We study the shapes of polytopes via the number of faces of the polytope.
By characterizing the shape of polytopes, the number of faces can serve as a novel lever for other problems.
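To make the first claim concrete, a small sketch (illustrative, not the paper's analysis): on each set where the hidden-layer activation pattern is constant, the network is affine, and the closure of each such region is a polytope; scanning a grid gives a crude count of the regions.

```python
# Count activation patterns (hence linear regions) of a tiny ReLU net.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=6)  # 2 -> 6 -> 1 network
W2, b2 = rng.normal(size=(1, 6)), rng.normal(size=1)

def activation_pattern(x):
    """Hidden-layer sign pattern; constant on each linear region."""
    return tuple((W1 @ x + b1 > 0).astype(int))

grid = np.linspace(-3, 3, 200)
patterns = {activation_pattern(np.array([u, v])) for u in grid for v in grid}
print(len(patterns), "linear regions met by the grid (each a polytope)")
```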
arXiv Detail & Related papers (2023-05-16T03:51:34Z)
- Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes [3.0079490585515343]
We show that $\lceil\log_2(n)\rceil$ hidden layers are indeed necessary to compute the maximum of $n$ numbers.
Our results are based on the known duality between neural networks and Newton polytopes via tropical geometry.
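For context, the matching upper bound is the classical pairwise tournament, sketched below (a standard construction, not taken from the paper): max(a, b) = a + ReLU(b - a), applied in ceil(log2(n)) rounds, one hidden layer per round.

```python
# Computing the max of n numbers with ceil(log2(n)) rounds of ReLUs.
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def relu_max(xs):
    xs = list(xs)
    depth = 0
    while len(xs) > 1:  # one hidden layer per tournament round
        nxt = [xs[i] + relu(xs[i + 1] - xs[i])  # = max(xs[i], xs[i+1])
               for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:  # odd element passes through unchanged
            nxt.append(xs[-1])
        xs, depth = nxt, depth + 1
    return xs[0], depth

val, depth = relu_max([3.0, 1.0, 4.0, 1.0, 5.0])
print(val, depth)  # 5.0, and depth = ceil(log2(5)) = 3
```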
arXiv Detail & Related papers (2023-02-24T10:14:53Z)
- Min-Max Optimization Made Simple: Approximating the Proximal Point Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated to any desired accuracy by a contraction map.
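A hedged sketch of that idea (assumed details, not the paper's algorithm): the implicit update z+ = z - eta * F(z+) is the fixed point of the map w -> z - eta * F(w), which is a contraction when eta is small relative to the Lipschitz constant of F, so a few inner fixed-point iterations approximate each Proximal Point step.

```python
# Approximate Proximal Point on the bilinear saddle min_x max_y x*y,
# with the implicit step solved by inner fixed-point (Picard) iterations.
import numpy as np

def F(z):
    """Monotone gradient operator (grad_x f, -grad_y f) for f(x,y) = x*y."""
    x, y = z
    return np.array([y, -x])

def approx_prox_point(z, eta=0.5, inner=5, outer=50):
    for _ in range(outer):
        w = z.copy()
        for _ in range(inner):  # iterate the contraction w -> z - eta*F(w)
            w = z - eta * F(w)
        z = w
    return z

print(approx_prox_point(np.array([1.0, 1.0])))  # -> near the saddle (0, 0)
```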
arXiv Detail & Related papers (2023-01-10T12:18:47Z)
- Maximum Entropy Reinforcement Learning with Mixture Policies [54.291331971813364]
For MaxEnt algorithms with mixture policies, we construct a tractable approximation of the mixture entropy.
We show that it is closely related to the sum of marginal entropies.
We derive an algorithmic variant of Soft Actor-Critic (SAC) for the mixture policy case and evaluate it on a series of continuous control tasks.
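The relation to the sum of marginal entropies can be checked numerically. The sketch below (an illustration, not the paper's derivation) verifies the standard sandwich sum_i w_i H(p_i) <= H(sum_i w_i p_i) <= sum_i w_i H(p_i) + H(w) for discrete mixture components.

```python
# Numerical check of the standard mixture-entropy bounds.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

w = np.array([0.3, 0.7])                 # mixture weights
components = np.array([[0.9, 0.1, 0.0],  # two discrete component laws
                       [0.1, 0.2, 0.7]])
mix = w @ components                     # the mixture distribution

lower = w @ np.array([entropy(p) for p in components])
upper = lower + entropy(w)
print(lower, entropy(mix), upper)        # lower <= H(mix) <= upper
```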
arXiv Detail & Related papers (2021-03-18T11:23:39Z)
- Subgroup-based Rank-1 Lattice Quasi-Monte Carlo [51.877197074344586]
We propose a simple closed-form rank-1 lattice construction method based on group theory.
Our method achieves superior approximation performance on benchmark integration test problems and kernel approximation problems.
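For reference, rank-1 lattice points themselves are a one-liner; the sketch below shows the object being built (the generating vector here is the classical 2D Fibonacci choice, not the paper's group-theoretic construction).

```python
# Rank-1 lattice QMC points and a toy integration example.
import numpy as np

def rank1_lattice(n, z):
    """n points x_i = frac(i * z / n) in [0,1)^d for generating vector z."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

pts = rank1_lattice(89, [1, 55])  # Fibonacci lattice: n=89, z=(1,55)
# Integrate f(x,y) = (x + 0.5)(y + 0.5) over [0,1]^2; exact value is 1.
est = np.mean(np.prod(pts + 0.5, axis=1))
print(est)  # close to 1.0
```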
arXiv Detail & Related papers (2020-10-29T03:42:30Z)
- Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any low-degree polynomial estimator.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z)
- Average-case Complexity of Teaching Convex Polytopes via Halfspace Queries [55.28642461328172]
We show that the average-case teaching complexity is $\Theta(d)$, which is in sharp contrast to the worst-case teaching complexity of $\Theta(n)$.
Our insights allow us to establish a tight bound on the average-case complexity for $\phi$-separable dichotomies.
arXiv Detail & Related papers (2020-06-25T19:59:24Z)
- SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
SURF is an algorithm for approximating distributions by piecewise polynomials.
It outperforms state-of-the-art algorithms in experiments.
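A toy sketch in the spirit of the summary (emphatically not the SURF algorithm, whose piece selection is adaptive and comes with guarantees): fit an independent low-degree polynomial to a histogram on each piece of a fixed partition.

```python
# Toy piecewise-polynomial density approximation on a fixed partition.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=5000)

def piecewise_poly_density(samples, pieces=8, degree=2, lo=-3.0, hi=3.0):
    edges = np.linspace(lo, hi, pieces + 1)
    hist, bin_edges = np.histogram(samples, bins=200, range=(lo, hi),
                                   density=True)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    fits = []
    for a, b in zip(edges[:-1], edges[1:]):
        m = (centers >= a) & (centers < b)
        # Independent least-squares polynomial fit on each piece.
        fits.append((a, b, np.polyfit(centers[m], hist[m], degree)))
    return fits

for a, b, coeffs in piecewise_poly_density(samples):
    print(f"[{a:+.2f},{b:+.2f}) coeffs: {np.round(coeffs, 3)}")
```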
arXiv Detail & Related papers (2020-02-22T01:03:33Z)
- Complexity Guarantees for Polyak Steps with Momentum [76.97851351276165]
We study a class of methods, based on Polyak steps, where this knowledge is substituted by that of the optimal value, $f_*$.
We first show slightly improved convergence bounds compared to those previously known for the classical case of simple gradient descent with Polyak steps, and then derive an accelerated gradient method with Polyak steps and momentum, along with convergence guarantees.
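The classical Polyak step the summary refers to is easy to state in code. Below is a minimal sketch (the standard method, not the paper's accelerated variant): step size (f(x) - f_*) / ||grad f(x)||^2, which requires knowing f_* but no Lipschitz or strong-convexity constants.

```python
# Gradient descent with the classical Polyak step size.
import numpy as np

def polyak_gd(f, grad, x, f_star, steps=100):
    for _ in range(steps):
        g = grad(x)
        gn2 = g @ g
        if gn2 == 0:  # already at a stationary point
            break
        x = x - ((f(x) - f_star) / gn2) * g  # Polyak step
    return x

# Quadratic example: f(x) = 0.5 x'Ax, minimized at the origin, f_* = 0.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(polyak_gd(f, grad, np.array([1.0, 1.0]), f_star=0.0))  # -> near 0
```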
arXiv Detail & Related papers (2020-02-03T17:50:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.