Handling Hard Affine SDP Shape Constraints in RKHSs
- URL: http://arxiv.org/abs/2101.01519v1
- Date: Tue, 5 Jan 2021 14:08:58 GMT
- Title: Handling Hard Affine SDP Shape Constraints in RKHSs
- Authors: Pierre-Cyril Aubin-Frankowski, Zoltan Szabo
- Abstract summary: We propose a unified and modular convex optimization framework to encode hard affine SDP constraints on function derivatives.
We prove the consistency of the proposed scheme and that of its adaptive variant, leveraging geometric properties of vRKHSs.
- Score: 3.8073142980733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shape constraints, such as non-negativity, monotonicity, convexity or
supermodularity, play a key role in various applications of machine learning
and statistics. However, incorporating this side information into predictive
models in a hard way (for example at all points of an interval) for rich
function classes is a notoriously challenging problem. We propose a unified and
modular convex optimization framework, relying on second-order cone (SOC)
tightening, to encode hard affine SDP constraints on function derivatives, for
models belonging to vector-valued reproducing kernel Hilbert spaces (vRKHSs).
The modular nature of the proposed approach allows one to simultaneously handle
multiple shape constraints, and to tighten an infinite number of constraints
into finitely many. We prove the consistency of the proposed scheme and that of
its adaptive variant, leveraging geometric properties of vRKHSs. The efficiency
of the approach is illustrated in the context of shape optimization,
safety-critical control and econometrics.
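To make the SOC tightening concrete, here is a minimal, hypothetical sketch (not the authors' code) of a scalar special case: kernel ridge regression on [0, 1] with the hard monotonicity constraint f'(x) >= 0, where the continuum of derivative constraints is replaced by finitely many second-order cone constraints at virtual points. The Gaussian kernel, the virtual grid, and the tightening constant eta below are illustrative assumptions; the paper derives the tightening from covering radii and kernel smoothness, and also treats vector-valued and SDP-valued constraints.

```python
# Minimal sketch (illustrative assumptions, not the paper's construction):
# monotone kernel ridge regression via SOC-tightened derivative constraints.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 30
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(n)      # increasing signal plus noise

sigma, lam, eta = 0.2, 1e-3, 0.05                     # bandwidth, ridge weight, tightening

def k(a, b):                                          # Gaussian kernel matrix
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def dk(a, b):                                          # derivative w.r.t. the first argument
    return -(a[:, None] - b[None, :]) / sigma ** 2 * k(a, b)

K = k(x, x) + 1e-10 * np.eye(n)
L = np.linalg.cholesky(K)                             # ||f||_H = ||L.T @ alpha||_2
x_virtual = np.linspace(0, 1, 20)                     # finite covering of [0, 1]
Kd = dk(x_virtual, x)                                 # row m gives f'(x_m) = Kd[m] @ alpha

alpha = cp.Variable(n)
objective = cp.sum_squares(K @ alpha - y) + lam * cp.sum_squares(L.T @ alpha)
# SOC tightening: asking f'(x_m) >= eta * ||f||_H at every virtual point implies
# f' >= 0 on the whole interval once eta dominates the kernel's modulus of
# continuity over each covering ball; the value of eta here is only a placeholder.
constraints = [Kd @ alpha >= eta * cp.norm(L.T @ alpha)]
cp.Problem(cp.Minimize(objective), constraints).solve()

fine = np.linspace(0, 1, 400)
print("min f' on a fine grid:", float((dk(fine, x) @ alpha.value).min()))
```

Setting eta = 0 recovers a plain discretized constraint, which only enforces monotonicity at the virtual points; the positive margin is what turns the finite set of constraints into a guarantee over the whole interval.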
Related papers
- Structured Regularization for Constrained Optimization on the SPD Manifold [1.1126342180866644]
We introduce a class of structured regularizers, based on symmetric gauge functions, which allow for solving constrained optimization on the SPD manifold with faster unconstrained methods.
We show that our structured regularizers can be chosen to preserve or induce desirable structure, in particular convexity and "difference of convex" structure.
arXiv Detail & Related papers (2024-10-12T22:11:22Z) - One-Shot Safety Alignment for Large Language Models via Optimal Dualization [64.52223677468861]
This paper presents a dualization perspective that reduces constrained alignment to an equivalent unconstrained alignment problem.
We do so by pre-optimizing a smooth and convex dual function that has a closed form.
Our strategy leads to two practical algorithms in model-based and preference-based scenarios.
arXiv Detail & Related papers (2024-05-29T22:12:52Z) - Cons-training tensor networks [2.8834278113855896]
We introduce a novel family of tensor networks, termed constrained matrix product states (MPS).
These networks exactly incorporate arbitrary discrete linear constraints, including inequalities, into sparse block structures.
These networks are particularly tailored for modeling distributions with support strictly over the feasible space.
arXiv Detail & Related papers (2024-05-15T00:13:18Z) - Shape Arithmetic Expressions: Advancing Scientific Discovery Beyond Closed-Form Equations [56.78271181959529]
Generalized Additive Models (GAMs) can capture non-linear relationships between variables and targets, but they cannot capture intricate feature interactions.
We propose Shape Arithmetic Expressions (SHAREs) that fuse GAMs' flexible shape functions with the complex feature interactions found in mathematical expressions.
We also design a set of rules for constructing SHAREs that guarantee transparency of the found expressions beyond the standard constraints.
arXiv Detail & Related papers (2024-04-15T13:44:01Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Flexible Differentiable Optimization via Model Transformations [1.081463830315253]
We introduce DiffOpt, a Julia library to differentiate through the solution of optimization problems with respect to arbitrary parameters present in the objective and/or constraints.
arXiv Detail & Related papers (2022-06-10T09:59:13Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Efficient semidefinite bounds for multi-label discrete graphical models [6.226454551201676]
One of the main queries on such models is to identify the maximum a posteriori (MAP) assignment, equivalently the optimum of the associated weighted constraint satisfaction problem (WCSP).
We consider a traditional dualized constraint approach and a dedicated SDP/Monteiro-style method based on row-by-row updates.
arXiv Detail & Related papers (2021-11-24T13:38:34Z) - Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned
Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Hard Shape-Constrained Kernel Machines [4.94950858749529]
We prove that hard affine shape constraints on function derivatives can be encoded in kernel machines.
We present a tightened second-order cone constrained reformulation that can be readily implemented in convex solvers (a schematic form of the tightened constraint is sketched after this list).
arXiv Detail & Related papers (2020-05-26T11:35:49Z)
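For the tightened SOC reformulation mentioned in the last entry above (the same construction as in the main abstract), a simplified scalar form can be written schematically as follows; coefficients, multiple simultaneous constraints, and the vector-valued RKHS notation are deferred to the papers:

```latex
% Schematic, simplified form of the tightening.
% Cover the compact set K by balls B(x_m, delta_m), m = 1, ..., M. Then
\[
  \eta_m \,\lVert f\rVert_{\mathcal{H}_K} \;\le\; D f(x_m) - b
  \quad (m = 1,\dots,M)
  \qquad\Longrightarrow\qquad
  D f(x) \ge b \ \text{ for all } x \in K,
\]
% provided  eta_m >= sup_{x in B(x_m, delta_m)} || D k(x, .) - D k(x_m, .) ||_{H_K},
% since the reproducing property gives
% |D f(x) - D f(x_m)| <= ||f||_{H_K} * || D k(x, .) - D k(x_m, .) ||_{H_K}.
```

Each left-hand inequality is a second-order cone constraint in the representer coefficients, which is what makes the finite reformulation directly usable in off-the-shelf convex solvers.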
This list is automatically generated from the titles and abstracts of the papers on this site.