Optimal Sets and Solution Paths of ReLU Networks
- URL: http://arxiv.org/abs/2306.00119v2
- Date: Fri, 19 Jan 2024 18:30:27 GMT
- Title: Optimal Sets and Solution Paths of ReLU Networks
- Authors: Aaron Mishkin, Mert Pilanci
- Abstract summary: We develop an analytical framework to characterize the set of optimal ReLU networks.
We establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.
- Score: 56.40911684005949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an analytical framework to characterize the set of optimal ReLU
neural networks by reformulating the non-convex training problem as a convex
program. We show that the global optima of the convex parameterization are
given by a polyhedral set and then extend this characterization to the optimal
set of the non-convex training objective. Since all stationary points of the
ReLU training problem can be represented as optima of sub-sampled convex
programs, our work provides a general expression for all critical points of the
non-convex objective. We then leverage our results to provide an optimal
pruning algorithm for computing minimal networks, establish conditions for the
regularization path of ReLU networks to be continuous, and develop sensitivity
results for minimal ReLU networks.
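For context, the convex reformulation on which this framework builds can be written as a finite-dimensional convex program. The following is a minimal sketch in the style of the convex reparameterization literature; the symbols (data matrix X, labels y, activation patterns D_i, regularization strength lambda, pattern count P) are assumed notation, not taken from this page, and the squared loss is illustrative:

% Sketch: convex reformulation of two-layer ReLU training
% (assumed notation; not the paper's own statement).
\begin{align*}
\min_{\{(v_i, w_i)\}_{i=1}^{P}} \quad
  & \frac{1}{2} \Big\| \sum_{i=1}^{P} D_i X (v_i - w_i) - y \Big\|_2^2
    + \lambda \sum_{i=1}^{P} \big( \|v_i\|_2 + \|w_i\|_2 \big) \\
\text{subject to} \quad
  & (2 D_i - I) X v_i \ge 0, \quad
    (2 D_i - I) X w_i \ge 0, \quad i = 1, \dots, P,
\end{align*}

where each D_i = diag(1[X u >= 0]) for some u is one of the finitely many ReLU activation patterns realizable on the training data. In these terms, the abstract's claims read as: the optimal set of such a program is polyhedral, this characterization transfers to the non-convex ReLU objective, and sub-sampling the patterns D_i yields the convex programs whose optima represent the stationary points.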
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training objectives are studied through equivalent convex models.
In this paper, we examine convex Lasso reformulations of neural network training.
We show that all stationary points of the non-convex objective can be characterized as global optima of sub-sampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Reverse Engineering Deep ReLU Networks: An Optimization-based Algorithm [0.0]
We present a novel method for reconstructing deep ReLU networks by leveraging convex optimization techniques and a sampling-based approach.
Our research contributes to the growing body of work on reverse engineering deep ReLU networks and paves the way for new advancements in neural network interpretability and security.
arXiv Detail & Related papers (2023-12-07T20:15:06Z) - Fixing the NTK: From Neural Network Linearizations to Exact Convex
Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, the induced kernel is equivalent to the NTK of the gated ReLU network (sketched after this list) on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal multiple kernel learning (MKL) kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Composite Optimization Algorithms for Sigmoid Networks [3.160070867400839]
We propose composite optimization algorithms based on linearized proximal algorithms and the alternating direction method of multipliers (ADMM).
Numerical experiments on fitting Franke's function show that the proposed algorithms perform satisfactorily and robustly.
arXiv Detail & Related papers (2023-03-01T15:30:29Z) - Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model
Classes and Cone Decompositions [41.337814204665364]
We develop algorithms for convex optimization of two-layer neural networks with ReLU activation functions.
We show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem.
arXiv Detail & Related papers (2022-02-02T23:50:53Z) - Path Regularization: A Convexity and Sparsity Inducing Regularization
for Parallel ReLU Networks [75.33431791218302]
We study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape.
We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases.
arXiv Detail & Related papers (2021-10-18T18:00:36Z) - Sparse Signal Reconstruction for Nonlinear Models via Piecewise Rational
Optimization [27.080837460030583]
We propose a method to reconstruct sparse signals degraded by a nonlinear distortion and acquired at a limited sampling rate.
Our method formulates the reconstruction as a non-convex optimization problem combining an exact fitting term and a penalization term.
Simulations demonstrate the benefits of the proposed approach.
arXiv Detail & Related papers (2020-10-29T09:05:19Z) - The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural
Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that optimal solutions to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
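Several entries above refer to gated ReLU networks. As a point of reference, here is a minimal sketch of the gated ReLU model (notation assumed, not taken from these abstracts):

% Sketch: gated ReLU model with m fixed gates g_j and trainable weights w_j.
\[
f(x) = \sum_{j=1}^{m} \mathbb{1}\{ g_j^\top x \ge 0 \} \, x^\top w_j,
\]

where the gates g_j are fixed rather than trained and only the weights w_j are learned. Because the activation indicator no longer depends on the trainable parameters, the training problem is convex in the w_j, which underlies the kernel-equivalence and approximation-bound statements in the entries above.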
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.