Sparse Approximate Solutions to Max-Plus Equations with Application to
Multivariate Convex Regression
- URL: http://arxiv.org/abs/2011.04468v2
- Date: Mon, 21 Dec 2020 17:54:22 GMT
- Title: Sparse Approximate Solutions to Max-Plus Equations with Application to
Multivariate Convex Regression
- Authors: Nikos Tsilivis, Anastasios Tsiamis, Petros Maragos
- Abstract summary: We show how one can obtain such solutions efficiently and in polynomial time for any $\ell_p$ approximation error.
We propose a novel method for piecewise-linear fitting of convex multivariate functions, with optimality guarantees and an approximately minimum number of affine regions.
- Score: 34.99564569478268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we study the problem of finding approximate solutions
with minimum support set to matrix max-plus equations, which we call sparse
approximate solutions. We show how one can obtain such solutions efficiently
and in polynomial time for any $\ell_p$ approximation error. Based on these
results, we propose a novel method for piecewise-linear fitting of convex
multivariate functions, with optimality guarantees for the model parameters and
an approximately minimum number of affine regions.
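To make the setting concrete: in max-plus algebra the matrix-vector product is $(A \otimes x)_i = \max_j (A_{ij} + x_j)$, and the greatest subsolution of $A \otimes x \leq b$ has the closed form $\hat{x}_j = \min_i (b_i - A_{ij})$. The sketch below computes this principal solution and then greedily trims its support in a set-cover spirit; the greedy routine is an illustrative stand-in, not a reproduction of the paper's algorithm.

```python
import numpy as np

def maxplus_prod(A, x):
    """Max-plus product: (A (*) x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

def principal_solution(A, b):
    """Greatest subsolution of A (*) x <= b: x_j = min_i (b_i - A_ij)."""
    return np.min(b[:, None] - A, axis=0)

def greedy_sparse_support(A, b, eps):
    """Shrink the support of the principal solution by greedily picking
    columns that reach b_i within eps (set-cover-style heuristic)."""
    x = principal_solution(A, b)
    covered = np.zeros(len(b), dtype=bool)
    support = []
    while not covered.all():
        # number of still-uncovered rows each column would cover
        gains = [np.sum(~covered & (b - (A[:, j] + x[j]) <= eps))
                 for j in range(A.shape[1])]
        j = int(np.argmax(gains))
        if gains[j] == 0:
            break  # remaining rows cannot be matched within eps
        support.append(j)
        covered |= b - (A[:, j] + x[j]) <= eps
    x_sparse = np.full_like(x, -np.inf)  # -inf is the max-plus "zero"
    x_sparse[support] = x[support]
    return x_sparse, support
```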
Related papers
- Polynomial-Time Solutions for ReLU Network Training: A Complexity
Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z) - Moreau Envelope ADMM for Decentralized Weakly Convex Optimization [55.2289666758254]
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
The results of our numerical experiments indicate that our method is faster and more robust than widely-used approaches.
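For background on the title object: the Moreau envelope $M_{\lambda f}(x) = \min_y \{ f(y) + \tfrac{1}{2\lambda}\|x - y\|^2 \}$ smooths $f$ while keeping its minimizers, and both the envelope and its gradient $\nabla M_{\lambda f}(x) = (x - \mathrm{prox}_{\lambda f}(x))/\lambda$ are evaluated through the proximal map. A minimal sketch for $f(y) = |y|$, whose prox is soft-thresholding and whose envelope is the Huber function, follows; it illustrates the envelope itself, not the paper's decentralized ADMM.

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal map of f(y) = |y|: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_envelope_abs(x, lam):
    """M_lam(x) = min_y |y| + (x - y)^2 / (2 lam), i.e. the Huber
    function, evaluated by plugging in the prox point."""
    y = prox_abs(x, lam)
    return np.abs(y) + (x - y) ** 2 / (2 * lam)

def moreau_grad_abs(x, lam):
    """Gradient of the envelope: (x - prox_{lam f}(x)) / lam."""
    return (x - prox_abs(x, lam)) / lam
```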
arXiv Detail & Related papers (2023-08-31T14:16:30Z) - Convex mixed-integer optimization with Frank-Wolfe methods [20.37026309402396]
Mixed-integer nonlinear optimization presents both theoretical and computational challenges.
We propose a new type of method to solve these problems based on a branch-and-bound algorithm with convex node relaxations.
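As a reference point for the continuous subproblems, vanilla Frank-Wolfe needs only a linear minimization oracle over the feasible set. A minimal sketch on the probability simplex, where that oracle returns a vertex, is shown below; it is a generic illustration, not the paper's branch-and-bound framework.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Vanilla Frank-Wolfe on the probability simplex; the linear
    minimization oracle over the simplex is just the best vertex."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[int(np.argmin(g))] = 1.0      # LMO: vertex minimizing <g, s>
        gamma = 2.0 / (k + 2.0)         # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: project p onto the simplex by minimizing 0.5 * ||x - p||^2.
p = np.array([0.2, 1.5, -0.3])
print(frank_wolfe_simplex(lambda x: x - p, np.ones(3) / 3))
```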
arXiv Detail & Related papers (2022-08-23T14:46:54Z) - Automated differential equation solver based on the parametric
approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution via a parameterized approximation.
It allows solving a wide class of equations in an automated manner without changing the algorithm's parameters.
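The underlying idea can be illustrated with collocation: parameterize a trial solution, then optimize its coefficients to drive the equation residual to zero. The sketch below does this for $u' + u = 0$, $u(0) = 1$ with a cubic trial function (the residual is linear in the coefficients, so least squares suffices); this shows the principle only, not the article's solver.

```python
import numpy as np

# Trial solution u(t) = 1 + c0*t + c1*t^2 + c2*t^3 satisfies u(0) = 1 exactly.
# For u' + u = 0 the residual is linear in (c0, c1, c2), so the collocation
# problem reduces to an ordinary least-squares solve.
t = np.linspace(0.0, 1.0, 50)                       # collocation points
Phi = np.stack([1 + t, 2*t + t**2, 3*t**2 + t**3], axis=1)
c, *_ = np.linalg.lstsq(Phi, -np.ones_like(t), rcond=None)
u = 1 + c[0]*t + c[1]*t**2 + c[2]*t**3
print(np.max(np.abs(u - np.exp(-t))))               # small: u tracks exp(-t)
```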
arXiv Detail & Related papers (2022-05-11T10:06:47Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of weakly convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
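For context, the proximal operator is $\mathrm{prox}_{\lambda f}(v) = \arg\min_y \{ f(y) + \tfrac{1}{2\lambda}\|y - v\|^2 \}$, and iterating it from different starting points can land in different optima, which is the behavior the learned operator exploits. The brute-force sketch below illustrates this on a one-dimensional weakly convex function with two global minima; the paper learns this map with a network instead.

```python
import numpy as np

def prox(f, v, lam, grid):
    """prox_{lam f}(v) = argmin_y f(y) + (y - v)^2 / (2 lam), found by
    brute force over a grid (illustration only)."""
    vals = f(grid) + (grid - v) ** 2 / (2 * lam)
    return grid[np.argmin(vals)]

# Proximal point iterations on a weakly convex f with two global minima.
f = lambda y: (y ** 2 - 1.0) ** 2
grid = np.linspace(-2.0, 2.0, 4001)
for x0 in (-1.7, 0.3, 1.7):        # different starts reach different optima
    x = x0
    for _ in range(30):
        x = prox(f, x, 0.1, grid)
    print(x0, "->", round(float(x), 3))
```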
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Decomposable Submodular Function Minimization via Maximum Flow [6.993741009069205]
This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization.
We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle.
arXiv Detail & Related papers (2021-03-05T18:46:38Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
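Concretely, TD(0) with linear features $V(s) \approx \phi(s)^\top \theta$ is a stochastic approximation to such a projected fixed-point (projected Bellman) equation. A minimal sketch follows; the transition data are hypothetical, and this is the textbook TD(0) update rather than the paper's analysis.

```python
import numpy as np

def td0_linear(features, rewards, next_features,
               gamma=0.9, alpha=0.05, sweeps=500):
    """TD(0) with linear value approximation V(s) ~ phi(s)^T theta, a
    stochastic approximation to the projected Bellman fixed point."""
    theta = np.zeros(features.shape[1])
    for _ in range(sweeps):
        for phi, r, phi_next in zip(features, rewards, next_features):
            td_error = r + gamma * (phi_next @ theta) - phi @ theta
            theta += alpha * td_error * phi
    return theta

# Hypothetical 2-state chain s0 -> s1 -> s0 with rewards 1 and 0;
# tabular (one-hot) features make the fixed point exact.
phi = np.eye(2)
print(td0_linear(np.array([phi[0], phi[1]]),
                 np.array([1.0, 0.0]),
                 np.array([phi[1], phi[0]])))  # approx [5.26, 4.74]
```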
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Variational Optimization for the Submodular Maximum Coverage Problem [11.734438054316147]
We provide the first variational approximation for this problem based on the Nemhauser divergence.
We empirically evaluate it on a number of public data sets and several application tasks.
arXiv Detail & Related papers (2020-06-10T00:50:25Z) - Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion
and Strong Solutions to Variational Inequalities [14.848525762485872]
We leverage the connections between nonexpansive maps, monotone Lipschitz operators, and proximal mappings to obtain near-optimal solutions to monotone inclusion problems.
These results translate into near-optimal guarantees for approximating strong solutions to variational inequality problems, approximating convex-concave min-max optimization problems, and minimizing the norm of the gradient in min-max optimization problems.
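For reference, the Halpern iteration anchors each step to the starting point: $x_{k+1} = \beta_k x_0 + (1 - \beta_k) T(x_k)$ with, e.g., $\beta_k = 1/(k+2)$, for a nonexpansive map $T$. The sketch below applies it to a composition of two projections, a standard nonexpansive example; it illustrates the iteration itself, not the paper's near-optimal rates.

```python
import numpy as np

def halpern(T, x0, iters=500):
    """Halpern iteration x_{k+1} = b_k x0 + (1 - b_k) T(x_k) with the
    standard anchoring weights b_k = 1 / (k + 2)."""
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2.0)
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

# Example nonexpansive map: compose projections onto the hyperplane
# {x : sum(x) = 1} and the nonnegative orthant; its fixed points lie
# in the intersection of the two sets (the probability simplex).
proj_hyperplane = lambda x: x - (x.sum() - 1.0) / x.size
T = lambda x: np.maximum(proj_hyperplane(x), 0.0)
print(halpern(T, np.array([2.0, -1.0, 0.5])))
```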
arXiv Detail & Related papers (2020-02-20T17:12:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.