A comparison of rational and neural network based approximations
- URL: http://arxiv.org/abs/2303.04436v2
- Date: Thu, 7 Sep 2023 03:25:36 GMT
- Title: A comparison of rational and neural network based approximations
- Authors: Vinesha Peiris, Reinier Diaz Millan, Nadezda Sukhorukova, Julien Ugon
- Abstract summary: We compare the efficiency of function approximation using rational approximation, neural networks and their combinations.
It was found that rational approximation is superior to neural network based approaches with the same number of decision variables.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rational and neural network based approximations are efficient tools in
modern approximation. These approaches are able to produce accurate
approximations to nonsmooth and non-Lipschitz functions, including functions
with multivariate domains. In this paper we compare the efficiency of function
approximation using rational approximation, neural networks and their
combinations. It was found that rational approximation is superior to neural
network based approaches with the same number of decision variables. Our
numerical experiments demonstrate the efficiency of rational approximation,
even when the number of approximation parameters (that is, the dimension of the
corresponding optimisation problems) is small. Another important contribution
of this paper lies in the improvement of rational approximation algorithms.
Namely, the optimisation based algorithms for rational approximation can be
adjusted in such a way that the condition numbers of the constraint matrices
are controlled. This simple adjustment enables us to work with high-dimensional
optimisation problems and to improve the design of the neural network.
The main strength of neural networks is in their ability to handle models with
a large number of variables: complex models are decomposed into several simple
optimisation problems. Therefore, a large number of decision variables is in
the nature of neural networks.
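As a rough illustration of the optimisation-based rational approximation discussed above, the code below is a minimal sketch (my own, not the authors' implementation) that computes a uniform rational approximation on a grid by bisection over linear programming feasibility problems. Building the constraint matrices from a Chebyshev basis rather than raw monomials is one simple way to keep their condition numbers under control; whether this coincides with the adjustment used in the paper is an assumption.

```python
# Minimal sketch: uniform (minimax) rational approximation p/q on a grid via
# bisection on the error level z, where each level check is an LP feasibility
# problem. Not the authors' code; hedged illustration only.
import numpy as np
from numpy.polynomial.chebyshev import chebvander
from scipy.optimize import linprog

def rational_minimax(f, grid, deg_p, deg_q, tol=1e-8, q_floor=1e-6):
    """Approximate f on `grid` by p/q, minimising the maximum absolute error."""
    x = np.asarray(grid, dtype=float)
    fx = f(x)
    # Chebyshev-basis design matrices: better conditioned than monomials.
    P = chebvander(x, deg_p)          # shape (m, deg_p + 1)
    Q = chebvander(x, deg_q)          # shape (m, deg_q + 1)
    n_p, n_q = P.shape[1], Q.shape[1]

    def feasible(z):
        # For a fixed level z the constraints are linear in the coefficients:
        #   (f - z) q - p <= 0,   p - (f + z) q <= 0,   q >= q_floor.
        A1 = np.hstack([-P, (fx[:, None] - z) * Q])
        A2 = np.hstack([P, (-fx[:, None] - z) * Q])
        A3 = np.hstack([np.zeros_like(P), -Q])
        A_ub = np.vstack([A1, A2, A3])
        b_ub = np.concatenate([np.zeros(2 * len(x)), -q_floor * np.ones(len(x))])
        c = np.zeros(n_p + n_q)       # pure feasibility check
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (n_p + n_q), method="highs")
        return res.success, res.x

    lo, hi = 0.0, np.max(np.abs(fx)) + 1.0
    coeffs = None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        ok, sol = feasible(mid)
        if ok:
            hi, coeffs = mid, sol
        else:
            lo = mid
    a, b = coeffs[:n_p], coeffs[n_p:]
    return a, b, hi                   # Chebyshev coefficients of p, q and the error level

if __name__ == "__main__":
    # Example: degree (4, 4) rational approximation of the nonsmooth function |x|.
    xs = np.linspace(-1.0, 1.0, 400)
    a, b, err = rational_minimax(np.abs, xs, 4, 4)
    print("achieved uniform error level:", err)
```

Bisection works here because, for a fixed error level z, the constraints are linear in the coefficients of p and q; this quasiconvexity is the standard argument behind this family of optimisation-based algorithms.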
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - The limitation of neural nets for approximation and optimization [0.0]
We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems.
Our study begins by determining the best activation function for approximating the objective functions of popular nonlinear optimization test problems.
arXiv Detail & Related papers (2023-11-21T00:21:15Z) - A new approach to generalisation error of machine learning algorithms:
Estimates and convergence [0.0]
We introduce a new approach to the estimation of the (generalisation) error and to convergence.
Our results include estimates of the error without any structural assumption on the neural networks.
arXiv Detail & Related papers (2023-06-23T20:57:31Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order (a small numerical check of this symmetry appears after this list).
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost and generalisation to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Adaptive neural domain refinement for solving time-dependent
differential equations [0.0]
A classic approach for solving differential equations with neural networks builds upon neural forms, which employ the differential equation with a discretisation of the solution domain.
It would be desirable to transfer such important and successful strategies to the field of neural network based solutions.
We propose a novel adaptive neural approach to meet this aim for solving time-dependent problems.
arXiv Detail & Related papers (2021-12-23T13:19:07Z) - Acceleration techniques for optimization over trained neural network
ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network (a sketch of the standard big-$M$ encoding of one ReLU unit appears after this list).
arXiv Detail & Related papers (2021-12-13T20:50:54Z) - Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z) - Neural Network Approximations of Compositional Functions With
Applications to Dynamical Systems [3.660098145214465]
We develop an approximation theory for compositional functions and their neural network approximations.
We identify a set of key features of compositional functions and the relationship between the features and the complexity of neural networks.
In addition to function approximations, we prove several formulae of error upper bounds for neural networks.
arXiv Detail & Related papers (2020-12-03T04:40:25Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale where the predictive model is a deep neural network.
Our method requires a much smaller number of communication rounds while retaining its theoretical guarantees.
Experiments on several datasets confirm the theory and demonstrate the effectiveness of the method.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.