Orders-of-coupling representation with a single neural network with
optimal neuron activation functions and without nonlinear parameter
optimization
- URL: http://arxiv.org/abs/2302.12013v1
- Date: Sat, 11 Feb 2023 06:27:26 GMT
- Title: Orders-of-coupling representation with a single neural network with
optimal neuron activation functions and without nonlinear parameter
optimization
- Authors: Sergei Manzhos and Manabu Ihara
- Abstract summary: We show that neural network models of orders-of-coupling representations can be easily built by using a recently proposed neural network with optimal neuron activation functions.
Examples are given of representations of molecular potential energy surfaces.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representations of multivariate functions with low-dimensional functions that
depend on subsets of the original coordinates (corresponding to different orders of
coupling) are useful in quantum dynamics and other applications, especially
where integration is needed. Such representations can be conveniently built
with machine learning methods; methods that build the lower-dimensional terms
of such representations with neural networks [e.g. Comput. Phys. Comm. 180
(2009) 2002] and Gaussian process regression [e.g. Mach. Learn. Sci. Technol.
3 (2022) 01LT02] have previously been proposed. Here, we show that
neural network models of orders-of-coupling representations can be easily built
by using a recently proposed neural network with optimal neuron activation
functions computed with a first-order additive Gaussian process regression
[arXiv:2301.05567] and avoiding non-linear parameter optimization. Examples are
given of representations of molecular potential energy surfaces.
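No code accompanies the abstract; the short sketch below illustrates only the general orders-of-coupling idea, i.e. approximating a multivariate function by a constant plus one-dimensional terms plus two-dimensional coupling terms, with all coefficients found by a single linear least-squares solve in the spirit of "without nonlinear parameter optimization". The polynomial basis, the toy target, and every name in it are assumptions for illustration; the paper's actual construction instead uses a neural network whose optimal neuron activation functions come from a first-order additive GPR.

```python
# Minimal sketch (not the authors' code): a second-order "orders-of-coupling"
# representation  f(x) ~ f0 + sum_i f_i(x_i) + sum_{i<j} f_ij(x_i, x_j),
# with each low-dimensional term expanded in a small polynomial basis and all
# coefficients obtained by a single linear least-squares solve, i.e. without
# any nonlinear parameter optimization.
import itertools
import numpy as np

def design_matrix(X, degree=3):
    """Columns: constant term, per-coordinate 1D monomials, pairwise products."""
    n, d = X.shape
    cols = [np.ones(n)]                                # zeroth-order term f0
    for i in range(d):                                 # first-order terms f_i
        for p in range(1, degree + 1):
            cols.append(X[:, i] ** p)
    for i, j in itertools.combinations(range(d), 2):   # second-order terms f_ij
        for p in range(1, degree + 1):
            for q in range(1, degree + 1):
                cols.append((X[:, i] ** p) * (X[:, j] ** q))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))              # toy 3D "surface" samples
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2     # illustrative target values

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # linear solve only

X_test = rng.uniform(-1.0, 1.0, size=(100, 3))
y_test = np.sin(X_test[:, 0]) * X_test[:, 1] + 0.5 * X_test[:, 2] ** 2
rmse = np.sqrt(np.mean((design_matrix(X_test) @ coef - y_test) ** 2))
print("test RMSE:", rmse)
```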
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- The limitation of neural nets for approximation and optimization [0.0]
We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems.
Our study begins by determining the best activation function for approximating the objective functions of popular nonlinear optimization test problems.
arXiv Detail & Related papers (2023-11-21T00:21:15Z)
- Generalizable Neural Fields as Partially Observed Neural Processes [16.202109517569145]
We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
arXiv Detail & Related papers (2023-09-13T01:22:16Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Neural network with optimal neuron activation functions based on additive Gaussian process regression [0.0]
More flexible neuron activation functions would allow using fewer neurons and layers and improve expressive power.
We show that additive Gaussian process regression (GPR) can be used to construct optimal neuron activation functions that are individual to each neuron.
An approach is also introduced that avoids non-linear fitting of neural network parameters (a toy sketch of this additive-GPR construction appears after this list).
arXiv Detail & Related papers (2023-01-13T14:19:17Z)
- Smooth Mathematical Function from Compact Neural Networks [0.0]
We obtain NNs that generate highly accurate and highly smooth functions while comprising only a few weight parameters.
A new activation function, a meta-batch method, features of numerical data, and meta-augmentation with meta-parameters are presented.
arXiv Detail & Related papers (2022-12-31T11:33:24Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions for each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
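As a rough illustration of the additive-GPR construction summarized in the "Neural network with optimal neuron activation functions based on additive Gaussian process regression" entry above (not the paper's implementation), the sketch below uses a first-order additive kernel k(x, x') = sum_i k_i(x_i, x'_i), so that the GPR posterior mean decomposes into a sum of one-dimensional functions g_i(x_i) that could be reused as per-neuron activation functions; only linear algebra is involved. The kernel type, length scale, jitter, and toy target are assumptions made for this sketch.

```python
# Hedged sketch (illustrative, not the paper's code): first-order additive GPR.
# With kernel k(x, x') = sum_i k_i(x_i, x'_i), the posterior mean decomposes as
# f(x) = sum_i g_i(x_i); each 1D component g_i can then serve as an "optimal"
# neuron activation function, obtained with linear algebra only.
import numpy as np

def rbf_1d(a, b, length=0.5):
    """1D squared-exponential kernel matrix between vectors a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def additive_kernel(X1, X2):
    """Sum of per-coordinate 1D kernels (first-order additive kernel)."""
    return sum(rbf_1d(X1[:, i], X2[:, i]) for i in range(X1.shape[1]))

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.sin(2.0 * X[:, 0]) + X[:, 1] ** 2 + 0.3 * X[:, 2]    # additive toy target

K = additive_kernel(X, X) + 1e-6 * np.eye(len(X))           # jitter for stability
alpha = np.linalg.solve(K, y)                                # GPR weights, no nonlinear fit

def g(i, t):
    """Learned 1D component for coordinate i: g_i(t) = sum_n alpha_n k_i(X[n, i], t)."""
    return rbf_1d(np.atleast_1d(t), X[:, i]) @ alpha

x_new = np.array([0.2, -0.4, 0.7])
prediction = sum(g(i, x_new[i])[0] for i in range(x_new.size))   # f(x) = sum_i g_i(x_i)
print("prediction at x_new:", prediction)
```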
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.