Between steps: Intermediate relaxations between big-M and convex hull
formulations
- URL: http://arxiv.org/abs/2101.12708v1
- Date: Fri, 29 Jan 2021 18:03:15 GMT
- Authors: Jan Kronqvist and Ruth Misener and Calvin Tsay
- Abstract summary: We develop a class of relaxations in between the big-M and convex hull formulations of disjunctions.
The proposed "P-split" formulations split convex additively separable constraints into P partitions and form the convex hull of the partitioned disjuncts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work develops a class of relaxations in between the big-M and convex
hull formulations of disjunctions, drawing advantages from both. The proposed
"P-split" formulations split convex additively separable constraints into P
partitions and form the convex hull of the partitioned disjuncts. Parameter P
represents the trade-off of model size vs. relaxation strength. We examine the
novel formulations and prove that, under certain assumptions, the relaxations
form a hierarchy starting from a big-M equivalent and converging to the convex
hull. We computationally compare the proposed formulations to big-M and convex
hull formulations on a test set including: K-means clustering, P_ball problems,
and ReLU neural networks. The computational results show that the intermediate
P-split formulations can form strong outer approximations of the convex hull
with fewer variables and constraints than the extended convex hull
formulations, giving significant computational advantages over both the big-M
and convex hull.
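The splitting idea described in the abstract can be illustrated with a small sketch. Assuming a convex additively separable constraint sum_i f_i(x_i) <= b inside a disjunct, a P-split reformulation groups the terms into P partitions and introduces one auxiliary bound per group. All names below (`partition`, `split_constraint`, `f_terms`) are illustrative, not from the paper's code:

```python
# Hedged sketch of the P-split idea for one convex, additively
# separable constraint: sum_i f_i(x_i) <= b inside a disjunct.

def partition(indices, P):
    """Split a list of variable indices into P roughly equal groups."""
    n = len(indices)
    size = -(-n // P)  # ceiling division
    return [indices[k:k + size] for k in range(0, n, size)]

def split_constraint(f_terms, b, P):
    """Return the index groups and linking bound of a P-split reformulation.

    Original:  sum_i f_terms[i](x) <= b
    P-split:   introduce auxiliaries a_1..a_P with
               sum_{i in S_p} f_terms[i](x) <= a_p  for each group S_p,
               and the linking constraint  sum_p a_p <= b.
    Larger P gives a tighter (closer to convex hull) but bigger model.
    """
    groups = partition(list(range(len(f_terms))), P)
    return groups, b

# Example: 6 separable quadratic terms split into P = 3 groups of 2.
f_terms = [lambda x, i=i: x[i] ** 2 for i in range(6)]
groups, bound = split_constraint(f_terms, b=10.0, P=3)
print(groups)  # [[0, 1], [2, 3], [4, 5]]
```

P = 1 recovers a single aggregated constraint (big-M-like), while P = n (one term per group) corresponds to the tightest end of the hierarchy described above.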
Related papers
- Tightening convex relaxations of trained neural networks: a unified approach for convex and S-shaped activations
We develop a formula that yields a tight convexification for the composition of an activation with an affine function.
Our approach can be used to efficiently compute separating hyperplanes or determine that none exists in various settings.
arXiv Detail & Related papers (2024-10-30T18:09:53Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency
We study reinforcement learning for partially observed decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- P-split formulations: A class of intermediate formulations between big-M and convex hull for disjunctive constraints
"P-split" formulations are derived for disjunctive constraints with convex constraints within each disjunct.
We show that the P-split formulations form a hierarchy starting on a big-M equivalent and converging to the convex hull.
We computationally compare the P-split formulations against big-M and convex hull formulations on 344 test instances.
arXiv Detail & Related papers (2022-02-10T18:02:16Z)
- Partition-based formulations for mixed-integer optimization of trained ReLU neural networks
This paper introduces a class of mixed-integer formulations for trained ReLU neural networks.
At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node.
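The big-M baseline that these partition-based formulations tighten can be sketched for a single ReLU node y = max(0, w·x + b). Given pre-activation bounds L <= w·x + b <= U, the standard mixed-integer constraints can be checked pointwise; the function name and the sample bounds below are illustrative assumptions, not from the paper:

```python
# Hedged sketch: standard big-M MILP encoding of one ReLU node
# y = max(0, w.x + b), with binary z indicating the active phase.

def bigm_feasible(x, y, z, w, b, L, U, tol=1e-9):
    """Check the big-M constraints for pre-activation a = w.x + b:
       y >= a,  y <= a - L*(1-z),  y <= U*z,  y >= 0,  z in {0, 1}.
    L and U are valid lower/upper bounds on a (they act as the big-M values)."""
    a = sum(wi * xi for wi, xi in zip(w, x)) + b
    return (y >= a - tol and
            y <= a - L * (1 - z) + tol and
            y <= U * z + tol and
            y >= -tol and
            z in (0, 1))

# With w = [1], b = 0 and bounds L = -2, U = 2:
# active input x = 1 gives y = 1 with z = 1; inactive x = -1 gives y = 0, z = 0.
assert bigm_feasible([1.0], 1.0, 1, [1.0], 0.0, -2.0, 2.0)
assert bigm_feasible([-1.0], 0.0, 0, [1.0], 0.0, -2.0, 2.0)
```

Partitioning the inputs refines this encoding: grouping the terms of w·x tightens the relaxation, and one partition per input yields the node's convex hull, as the summary above notes.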
arXiv Detail & Related papers (2021-02-08T17:27:34Z)
- On the minmax regret for statistical manifolds: the role of curvature
Two-part codes and the minimum description length have been successful in delivering procedures to single out the best models.
We derive a sharper expression than the standard one given by the complexity, where the scalar curvature of the Fisher information metric plays a dominant role.
arXiv Detail & Related papers (2020-07-06T17:28:19Z)
- Model Reduction and Neural Networks for Parametric PDEs
We develop a framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
The proposed approach is motivated by the recent successes of neural networks and deep learning.
For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology.
arXiv Detail & Related papers (2020-05-07T00:09:27Z)
- Stochastic spectral embedding
We propose a novel sequential adaptive surrogate modeling method based on "stochastic spectral embedding" (SSE).
We show how the method compares favorably against state-of-the-art sparse chaos expansions on a set of models with different complexity and input dimension.
arXiv Detail & Related papers (2020-04-09T11:00:07Z)
- MINA: Convex Mixed-Integer Programming for Non-Rigid Shape Alignment
We present a convex mixed-integer programming formulation for non-rigid shape matching.
We propose a novel shape deformation model based on an efficient low-dimensional discrete model.
arXiv Detail & Related papers (2020-02-28T09:54:06Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.