Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time
Algorithms and Adversarial Training
- URL: http://arxiv.org/abs/2201.01965v1
- Date: Thu, 6 Jan 2022 08:24:11 GMT
- Title: Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time
Algorithms and Adversarial Training
- Authors: Yatong Bai, Tanmay Gautam, Somayeh Sojoudi
- Abstract summary: We develop two efficient algorithms that train ANNs with global convergence guarantees.
The first algorithm is based on the alternating direction method of multipliers (ADMM).
The second algorithm, based on the "sampled convex programs" theory, is simpler to implement.
- Score: 12.354076490479516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The non-convexity of the artificial neural network (ANN) training landscape
brings inherent optimization difficulties. While the traditional
back-propagation stochastic gradient descent (SGD) algorithm and its variants
are effective in certain cases, they can become stuck at spurious local minima
and are sensitive to initializations and hyperparameters. Recent work has shown
that the training of an ANN with ReLU activations can be reformulated as a
convex program, bringing hope to globally optimizing interpretable ANNs.
However, naively solving the convex training formulation has an exponential
complexity, and even an approximation heuristic requires cubic time. In this
work, we characterize the quality of this approximation and develop two
efficient algorithms that train ANNs with global convergence guarantees. The
first algorithm is based on the alternating direction method of multipliers
(ADMM). It solves both the exact convex formulation and the approximate
counterpart. Linear global convergence is achieved, and the initial several
iterations often yield a solution with high prediction accuracy. When solving
the approximate formulation, the per-iteration time complexity is quadratic.
The second algorithm, based on the "sampled convex programs" theory, is simpler
to implement. It solves unconstrained convex formulations and converges to an
approximately globally optimal classifier. The non-convexity of the ANN
training landscape is exacerbated when adversarial training is considered. We
apply robust convex optimization theory to convex training and develop
convex formulations that train ANNs robust to adversarial inputs. Our analysis
explicitly focuses on one-hidden-layer fully connected ANNs, but can extend to
more sophisticated architectures.
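To make the object being optimized concrete, the sketch below assembles the (approximate) convex reformulation of two-layer ReLU training over a sampled set of hyperplane-arrangement patterns and hands it to a generic convex solver. The toy data, the number of sampled patterns P, the regularization strength beta, and the use of cvxpy are illustrative assumptions; this is not the paper's ADMM algorithm or its robust (adversarial) formulation, only a small instance of the kind of convex program they operate on.

```python
import numpy as np
import cvxpy as cp

# Illustrative (approximate) convex reformulation of two-layer ReLU training
# over a sampled set of hyperplane-arrangement patterns.  The toy data and
# the values of P and beta are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, d, P = 50, 5, 20              # samples, input dimension, sampled patterns
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
beta = 1e-3                      # weight-decay / group-lasso strength

# Sample arrangement patterns D_i = diag(1[X u_i >= 0]) from random gates u_i.
U = rng.standard_normal((d, P))
D = (X @ U >= 0).astype(float)   # n x P; column i is the diagonal of D_i

# Convex variables: one pair (v_i, w_i) of d-dimensional weights per pattern.
V = cp.Variable((d, P))
W = cp.Variable((d, P))

# Model output under the convex parameterization: sum_i D_i X (v_i - w_i).
pred = cp.sum(cp.multiply(D, X @ (V - W)), axis=1)

# Group-lasso regularizer (column-wise Euclidean norms).
reg = cp.sum(cp.norm(V, 2, axis=0)) + cp.sum(cp.norm(W, 2, axis=0))

# Cone constraints (2 D_i - I) X v_i >= 0 keep each weight pair consistent
# with its sampled ReLU activation pattern.
constraints = []
for i in range(P):
    M = (2 * np.diag(D[:, i]) - np.eye(n)) @ X   # n x d numpy matrix
    constraints += [M @ V[:, i] >= 0, M @ W[:, i] >= 0]

problem = cp.Problem(
    cp.Minimize(0.5 * cp.sum_squares(pred - y) + beta * reg), constraints)
problem.solve()
print("optimal objective:", problem.value)
```

A generic off-the-shelf solver applied to this program scales poorly as the number of samples and patterns grows, which is precisely the motivation for the paper's ADMM solver (quadratic per-iteration complexity on the approximate formulation) and for the simpler unconstrained formulation derived from sampled convex programs.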
Related papers
- Faster Adaptive Decentralized Learning Algorithms [24.379734053137597]
We propose a class of faster adaptive decentralized algorithms (i.e., AdaMDOS and AdaMDOF) for distributed nonconvex stochastic and finite-sum optimization.
Experimental results demonstrate the efficiency of our algorithms.
arXiv Detail & Related papers (2024-08-19T08:05:33Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that extends algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this extension, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural
Network with Group Sparsity [13.27709100571336]
A leaky ReLU network with a group regularization term has been widely used in recent years.
We show that there is a lack of approaches to compute a stationary point deterministically.
We propose an inexact augmented Lagrangian algorithm for solving the new model.
arXiv Detail & Related papers (2022-05-11T11:53:15Z) - Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model
Classes and Cone Decompositions [41.337814204665364]
We develop algorithms for convex optimization of two-layer neural networks with ReLU activation functions.
We show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem.
arXiv Detail & Related papers (2022-02-02T23:50:53Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Neural Spectrahedra and Semidefinite Lifts: Global Convex Optimization
of Polynomial Activation Neural Networks in Fully Polynomial-Time [31.94590517036704]
We develop exact convex optimization formulations for two-layer neural networks with second-degree polynomial activations.
We show that the semidefinite lift is exact, and therefore global optimization is polynomial in the problem dimension and sample size for all input data.
The proposed approach is significantly faster and obtains better test accuracy compared to the standard backpropagation procedure.
arXiv Detail & Related papers (2021-01-07T08:43:01Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural
Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our algorithm requires a much smaller number of communication rounds while retaining its theoretical guarantees.
Our experiments on several datasets show the effectiveness of our algorithm and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with direct methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)