Measure theoretic results for approximation by neural networks with
limited weights
- URL: http://arxiv.org/abs/2304.01880v1
- Date: Tue, 4 Apr 2023 15:34:53 GMT
- Title: Measure theoretic results for approximation by neural networks with
limited weights
- Authors: Vugar Ismailov and Ekrem Savas
- Abstract summary: We study approximation properties of single hidden layer neural networks with weights varying on finitely many directions and thresholds from an open interval.
We obtain a measure-theoretic condition that is both necessary and sufficient for the density of such networks in the space of continuous functions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study approximation properties of single hidden layer
neural networks with weights varying on finitely many directions and thresholds
from an open interval. We obtain a measure-theoretic condition that is both
necessary and sufficient for the density of such networks in the space of
continuous functions. Further, we prove a density result for neural networks
with a specifically constructed activation function and a fixed number of
neurons.
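For context, the networks in question have the standard single-hidden-layer form (a sketch in the usual notation; the symbols below are illustrative rather than taken verbatim from the paper):

  g(x) = \sum_{i=1}^{r} c_i \, \sigma(w^i \cdot x - \theta_i), \qquad x \in \mathbb{R}^d,

where each weight vector w^i is restricted to finitely many fixed directions a^1, ..., a^k, each threshold \theta_i is taken from an open interval (a, b), and \sigma is the activation function. Density in the space of continuous functions means that, on a given compact set, every continuous function can be approximated uniformly by sums of this form.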
Related papers
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Universal Approximation and the Topological Neural Network [0.0]
A topological neural network (TNN) takes data from a Tychonoff topological space instead of the usual finite-dimensional space.
A distributional neural network (DNN) that takes Borel measures as data is also introduced.
arXiv Detail & Related papers (2023-05-26T05:28:10Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Understanding Weight Similarity of Neural Networks via Chain
Normalization Rule and Hypothesis-Training-Testing [58.401504709365284]
We present a weight similarity measure that can quantify the weight similarity of non-convolution neural networks.
We first normalize the weights of neural networks by a chain normalization rule, which is used to introduce weight-training representation learning.
We extend the traditional hypothesis-testing method to validate the hypothesis on the weight similarity of neural networks.
arXiv Detail & Related papers (2022-08-08T19:11:03Z) - Imaging Conductivity from Current Density Magnitude using Neural
Networks [1.8692254863855962]
We develop a neural network based reconstruction technique for imaging the conductivity from the magnitude of the internal current density.
It is observed that the approach enjoys remarkable robustness with respect to the presence of data noise.
arXiv Detail & Related papers (2022-04-05T18:31:03Z) - Quasi-orthogonality and intrinsic dimensions as measures of learning and
generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of the network's performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z) - On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z) - Approximate Bisimulation Relations for Neural Networks and Application
to Assured Neural Network Compression [3.0839245814393728]
We propose a concept of approximate bisimulation relation for feedforward neural networks.
A novel neural network merging method is developed to compute the approximate bisimulation error between two neural networks.
arXiv Detail & Related papers (2022-02-02T16:21:19Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - The Representation Power of Neural Networks: Breaking the Curse of
Dimensionality [0.0]
We prove upper bounds on the number of parameters shallow and deep neural networks need to approximate Korobov functions.
We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions.
arXiv Detail & Related papers (2020-12-10T04:44:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.