Analysis and Design of Quadratic Neural Networks for Regression,
Classification, and Lyapunov Control of Dynamical Systems
- URL: http://arxiv.org/abs/2207.13120v1
- Date: Tue, 26 Jul 2022 18:10:05 GMT
- Title: Analysis and Design of Quadratic Neural Networks for Regression,
Classification, and Lyapunov Control of Dynamical Systems
- Authors: Luis Rodrigues and Sidney Givigi
- Abstract summary: This paper addresses the analysis and design of quadratic neural networks.
These networks offer several advantages, the most important being that the architecture is a by-product of the design and is not determined a priori.
Several examples will show the effectiveness of quadratic neural networks in applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper addresses the analysis and design of quadratic neural networks,
which have been recently introduced in the literature, and their applications
to regression, classification, system identification and control of dynamical
systems. These networks offer several advantages, the most important of which
are that the architecture is a by-product of the design rather than determined
a priori, that training can be done by solving a convex optimization problem so
that the global optimum of the weights is achieved, and that the input-output
mapping can be expressed analytically by a quadratic form. It
also appears from several examples that these networks work extremely well
using only a small fraction of the training data. The results in the paper cast
regression, classification, system identification, stability and control design
as convex optimization problems, which can be solved efficiently with
polynomial-time algorithms to a global optimum. Several examples will show the
effectiveness of quadratic neural networks in applications.
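The quadratic input-output mapping makes the convexity claim concrete: with z = [x; 1], an output of the form z^T P z is linear in the entries of P, so fitting P is an ordinary least-squares problem with a global optimum. Below is a minimal NumPy sketch of that idea; it illustrates the quadratic-form regression setting generically, and all names and the toy data are illustrative rather than taken from the paper.

```python
import numpy as np

def quad_features(X):
    """Monomials z_i * z_j (i <= j) of z = [x; 1] for each row x of X.

    An output of the form z^T P z is linear in the entries of P, so
    fitting P by least squares is a convex problem with a global optimum.
    """
    n, d = X.shape
    Z = np.hstack([X, np.ones((n, 1))])              # z = [x; 1]
    iu = np.triu_indices(d + 1)                      # upper triangle of z z^T
    return np.stack([np.outer(z, z)[iu] for z in Z])

# Toy data whose target really is quadratic in x, plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X ** 2).sum(axis=1) - X[:, 0] * X[:, 1] + 0.5 + 0.01 * rng.normal(size=200)

Phi = quad_features(X)
p, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # global optimum, no local minima
rmse = np.sqrt(np.mean((Phi @ p - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```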
Related papers
- Least Squares Training of Quadratic Convolutional Neural Networks with Applications to System Theory [0.0]
This paper provides a least squares formulation for the training of a 2-layer convolutional neural network.
An analytic expression for the globally optimal weights is obtained alongside a quadratic input-output equation for the network.
arXiv Detail & Related papers (2024-11-13T00:42:40Z)
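To make the "analytic expression for the globally optimal weights" plausible, here is a hedged sketch: if the network output is a quadratic form of each sliding patch (an im2col-style lifting), summed over patch positions, then it is linear in the weight-matrix entries and the normal equations give the optimum in closed form. This is a generic construction under that assumption, not the paper's exact architecture; all names are illustrative.

```python
import numpy as np

def conv_quad_features(x, k):
    """Quadratic monomials of every length-k patch of x, summed over positions.

    If the output is sum_t z_t^T P z_t with z_t = [patch_t; 1], it is linear
    in the entries of P, so the optimal P has a closed-form expression.
    """
    patches = np.lib.stride_tricks.sliding_window_view(x, k)   # im2col, 1-D
    Z = np.hstack([patches, np.ones((patches.shape[0], 1))])
    iu = np.triu_indices(k + 1)
    return sum(np.outer(z, z)[iu] for z in Z)

k, n = 3, 300
rng = np.random.default_rng(1)
X = rng.normal(size=(n, 16))
# Toy target that is exactly a patchwise quadratic, so the fit is exact.
y = np.array([np.sum(np.lib.stride_tricks.sliding_window_view(x, k).sum(axis=1) ** 2)
              for x in X])

Phi = np.stack([conv_quad_features(x, k) for x in X])
w = np.linalg.pinv(Phi.T @ Phi) @ Phi.T @ y   # analytic least-squares solution
print("residual:", np.linalg.norm(Phi @ w - y))
```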
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can easily be changed by training each network with better-suited hyperparameters.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
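As a hedged illustration of what "architecture-dependent initialization and learning rate" can mean in practice (this is a generic fan-in/depth-based rule, not the paper's derived prescription):

```python
import numpy as np

def arch_aware_hparams(widths, base_lr=1e-2):
    """Generic architecture-aware choices: He-style init std per layer
    (depends on fan-in) and a learning rate shrunk with depth.
    Illustrative only; the paper derives its own precise scaling rules."""
    depth = len(widths) - 1
    init_stds = [np.sqrt(2.0 / fan_in) for fan_in in widths[:-1]]
    lr = base_lr / depth
    return init_stds, lr

stds, lr = arch_aware_hparams([784, 512, 256, 10])
print("per-layer init stds:", [round(s, 4) for s in stds], "lr:", lr)
```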
- Graph Reinforcement Learning for Network Control via Bi-Level Optimization [37.00510744883984]
We argue that data-driven strategies can automate this process and learn efficient algorithms without compromising optimality.
We present network control problems through the lens of reinforcement learning and propose a graph network-based framework to handle a broad class of problems.
arXiv Detail & Related papers (2023-05-16T03:20:22Z)
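A minimal sketch of the kind of graph-network backbone such a framework builds on: one message-passing step that aggregates neighbor features over the network topology. This is a generic illustration, not the paper's specific policy architecture; all names are assumptions.

```python
import numpy as np

def message_passing_step(H, A, W_msg, W_upd):
    """One graph message-passing step: each node aggregates transformed
    neighbor features (A is the adjacency matrix) and updates its state."""
    messages = A @ (H @ W_msg)            # sum of neighbors' transformed features
    return np.tanh(H @ W_upd + messages)  # updated node embeddings

rng = np.random.default_rng(0)
n_nodes, d = 5, 8
A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
np.fill_diagonal(A, 0)                    # no self-loops
H = rng.normal(size=(n_nodes, d))
H = message_passing_step(H, A, rng.normal(size=(d, d)) * 0.1,
                         rng.normal(size=(d, d)) * 0.1)
print(H.shape)
```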
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
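Algorithm unrolling itself is easy to demonstrate: run an iterative solver inside the model and let automatic differentiation backpropagate through each iteration. A minimal JAX sketch, with all names illustrative; the paper's contribution (folded optimization) analyzes and accelerates this backward pass rather than implementing it naively as below.

```python
import jax
import jax.numpy as jnp

def unrolled_solver(theta, x0, steps=50, lr=0.1):
    """Solve min_x 0.5 * ||A x - b||^2 by unrolled gradient descent, where
    (A, b) come from outer parameters theta; autodiff traces every step."""
    A, b = theta
    x = x0
    for _ in range(steps):
        x = x - lr * A.T @ (A @ x - b)   # one inner gradient step
    return x

def outer_loss(theta, x0, x_target):
    return jnp.sum((unrolled_solver(theta, x0) - x_target) ** 2)

A = jnp.eye(3) + 0.1                      # outer-model-produced problem data
b = jnp.ones(3)
grads = jax.grad(outer_loss)((A, b), jnp.zeros(3), jnp.array([1.0, 2.0, 3.0]))
print(grads[1])   # d(outer loss)/db, obtained through all 50 inner iterations
```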
- Subquadratic Overparameterization for Shallow Neural Networks [60.721751363271146]
We provide an analytical framework that allows us to adopt standard neural training strategies.
We achieve the desiderata via the Polyak-Łojasiewicz condition, smoothness, and standard assumptions.
arXiv Detail & Related papers (2021-11-02T20:24:01Z)
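For reference, the Polyak-Łojasiewicz (PL) condition invoked above, in its standard form (this is the textbook inequality, not a result specific to the paper):

```latex
% Polyak-Lojasiewicz inequality for a loss f with infimum f^*:
\| \nabla f(w) \|^2 \;\ge\; 2\mu \,\bigl( f(w) - f^* \bigr)
\quad \text{for some } \mu > 0 \text{ and all } w.
% Under PL (no convexity needed), gradient descent with step 1/L on an
% L-smooth f converges linearly:
% f(w_k) - f^* \le (1 - \mu/L)^k \, \bigl( f(w_0) - f^* \bigr).
```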
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
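A hedged sketch of Fisher-style batch selection: treat per-example gradient embeddings as Fisher "directions" and greedily pick points whose rank-one updates most reduce the trace of the inverse information matrix (an A-optimal design rule via Sherman-Morrison). This is a generic illustration of the idea, not BAIT's exact objective or optimization; all names are assumptions.

```python
import numpy as np

def fisher_select(G, k, lam=1e-2):
    """Greedily pick k rows of the embedding matrix G (n x d): each step adds
    the point whose rank-one update g g^T most reduces tr(M^{-1})."""
    n, d = G.shape
    M = lam * np.eye(d)
    chosen = []
    for _ in range(k):
        Minv = np.linalg.inv(M)
        num = np.einsum('ij,jk,ik->i', G @ Minv, Minv, G)   # g^T M^-2 g
        den = 1.0 + np.einsum('ij,jk,ik->i', G, Minv, G)    # 1 + g^T M^-1 g
        scores = num / den                                   # Sherman-Morrison
        scores[chosen] = -np.inf                             # no repeats
        i = int(np.argmax(scores))
        chosen.append(i)
        M += np.outer(G[i], G[i])
    return chosen

rng = np.random.default_rng(0)
print(fisher_select(rng.normal(size=(100, 8)), k=5))
```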
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks by treating its prediction accuracy and the network complexity as two individual objective functions.
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
arXiv Detail & Related papers (2020-08-31T13:28:03Z)
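The simplest way to trade the two objectives off against each other is weighted-sum scalarization, with an L1 term as a proxy for network complexity, followed by magnitude pruning. A generic sketch of that idea, not the paper's specific multiobjective algorithm; names are illustrative.

```python
import numpy as np

def scalarized_loss(w, data_loss, alpha):
    """Weighted sum of the two objectives: prediction error and
    complexity (L1 norm as a sparsity-inducing proxy)."""
    return data_loss(w) + alpha * np.abs(w).sum()

def magnitude_prune(w, threshold=1e-3):
    """Zero out weights the sparsity term drove toward zero."""
    return np.where(np.abs(w) > threshold, w, 0.0)

# Toy usage: a quadratic data loss over mostly-sparse random weights.
rng = np.random.default_rng(0)
w = rng.normal(size=100) * (rng.random(100) < 0.2)
print("scalarized loss:", scalarized_loss(w, lambda w: float(np.mean(w ** 2)), alpha=0.1))
print("nonzeros after pruning:", int(np.count_nonzero(magnitude_prune(w))))
```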
- DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction [2.572404739180802]
In this study, a physics-based recurrent neural network model is designed that is able to learn the dynamics of linear and nonlinear multiple degrees of freedom systems.
The model is able to estimate a complete set of responses, including displacement, velocity, acceleration, and internal forces.
arXiv Detail & Related papers (2020-07-03T17:05:35Z)
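A sketch of what "physics-based" can mean here: a recurrent step whose update is the structural equation of motion M a + C v + K x = f, so displacement, velocity, acceleration, and internal force all fall out of one cell. This illustrates the general construction under that assumption, not DynNet's exact architecture.

```python
import numpy as np

def physics_step(x, v, f, M, C, K, dt):
    """One recurrent step built from the equation of motion
    M a + C v + K x = f (semi-implicit Euler integration)."""
    a = np.linalg.solve(M, f - C @ v - K @ x)   # acceleration
    v_next = v + dt * a                          # velocity update
    x_next = x + dt * v_next                     # displacement update
    return x_next, v_next, a, K @ x_next         # also return internal force

# Two-degree-of-freedom toy structure under a sinusoidal load.
M, C = np.eye(2), 0.05 * np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
x, v = np.zeros(2), np.zeros(2)
for t in range(1000):
    f = np.array([np.sin(0.01 * t), 0.0])
    x, v, a, f_int = physics_step(x, v, f, M, C, K, dt=0.01)
print("final displacement:", x)
```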
- On the Difficulty of Designing Processor Arrays for Deep Neural Networks [0.0]
Camuy is a lightweight model of a weight-stationary systolic array for linear algebra operations.
We present an analysis of popular models to illustrate how it can estimate required cycles, data movement costs, and systolic array utilization.
arXiv Detail & Related papers (2020-06-24T19:24:08Z)
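A first-order sketch of the kind of estimate such a model produces for an M x K by K x N matrix multiply on an R x C weight-stationary array: count weight tiles, add pipeline fill/drain per tile, and compare against the ideal MAC count. The formulas below are a rough generic model under those assumptions, not Camuy's actual cost functions.

```python
import math

def systolic_estimate(M, K, N, R, C):
    """Rough cycle and utilization estimate for M x K @ K x N on an
    R x C weight-stationary systolic array (illustrative model only)."""
    tiles = math.ceil(K / R) * math.ceil(N / C)   # weight tiles to load
    per_tile = R + M + (R + C - 2)                # load + stream M rows + drain
    cycles = tiles * per_tile
    utilization = (M * K * N) / (cycles * R * C)  # useful MACs / peak MACs
    return cycles, utilization

cycles, util = systolic_estimate(M=256, K=512, N=512, R=128, C=128)
print(f"{cycles} cycles, {util:.0%} utilization")
```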
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict the values of those parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that lead to better decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
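A minimal sketch of a low-dimensional surrogate: restrict the decision variable of a large quadratic program to a low-dimensional subspace x = P z and solve the small reduced problem. This is a generic illustration of the surrogate idea (here with a random basis, whereas the paper learns the surrogate end-to-end); all names are illustrative.

```python
import numpy as np

def solve_via_surrogate(Q, c, P):
    """Minimize 0.5 x^T Q x + c^T x over the subspace x = P z (k << n):
    the reduced k x k problem is solved in closed form, then mapped back."""
    Qz = P.T @ Q @ P
    cz = P.T @ c
    z = np.linalg.solve(Qz, -cz)     # optimizer of the reduced quadratic
    return P @ z

rng = np.random.default_rng(0)
n, k = 200, 5
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)          # large positive-definite quadratic
c = rng.normal(size=n)
P = np.linalg.qr(rng.normal(size=(n, k)))[0]   # orthonormal surrogate basis
x = solve_via_surrogate(Q, c, P)
print("solution dimension:", x.shape, "objective:", 0.5 * x @ Q @ x + c @ x)
```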