Learnability and Robustness of Shallow Neural Networks Learned With a
Performance-Driven BP and a Variant PSO For Edge Decision-Making
- URL: http://arxiv.org/abs/2008.06135v1
- Date: Thu, 13 Aug 2020 23:33:00 GMT
- Title: Learnability and Robustness of Shallow Neural Networks Learned With a
Performance-Driven BP and a Variant PSO For Edge Decision-Making
- Authors: Hongmei He, Mengyuan Chen, Gang Xu, Zhilong Zhu, Zhenhuan Zhu
- Abstract summary: It may not be easy to implement complex AI models in edge devices.
The Universal Approximation Theorem states that a shallow neural network (SNN) can represent any nonlinear function.
Two groups of experiments are conducted to examine the learnability and robustness of SNNs.
- Score: 6.011027400738812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many cases, the computing resources are limited without the benefit from
GPU, especially in the edge devices of IoT enabled systems. It may not be easy
to implement complex AI models in edge devices. The Universal Approximation
Theorem states that a shallow neural network (SNN) can represent any nonlinear
function. However, how wide must an SNN be to solve a nonlinear
decision-making problem in edge devices? In this paper, we focus on the
learnability and robustness of SNNs, obtained by a greedy tight-force heuristic
algorithm (performance-driven BP) and a loose-force meta-heuristic algorithm (a
variant of PSO). Two groups of experiments are conducted to examine the
learnability and the robustness of SNNs with Sigmoid activation,
learned/optimised by KPI-PDBPs and KPI-VPSOs, where KPIs (key performance
indicators: error (ERR), accuracy (ACC) and $F_1$ score) are the objectives,
driving the searching process. An incremental approach is applied to examine
the impact of hidden neuron numbers on the performance of SNNs,
learned/optimised by KPI-PDBPs and KPI-VPSOs. From an engineering perspective,
all sensors are well justified for a specific task. Hence, all sensor readings
should be strongly correlated to the target. Therefore, the structure of an SNN
should depend on the dimensions of a problem space. The experimental results
show that a number of hidden neurons up to the dimension of the problem
space is enough; the learnability of SNNs, produced by KPI-PDBP, is better than
that of SNNs, optimized by KPI-VPSO, regarding the performance and learning
time on the training data sets; the robustness of SNNs learned by KPI-PDBPs and
KPI-VPSOs depends on the data sets; and, compared with other classic machine
learning models, ACC-PDBPs win for almost all tested data sets.
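The pairing described in the abstract, a shallow sigmoid network whose hidden width matches the input dimension, with a swarm-based optimiser driven by a KPI, can be sketched as follows. This is an illustrative assumption, not the paper's KPI-PDBP/KPI-VPSO implementation: a plain global-best PSO searches the flattened weight vector, with classification accuracy (ACC) as the objective, on a toy XOR problem.

```python
# Minimal sketch (an assumption, not the paper's exact algorithms): an SNN
# with one sigmoid hidden layer whose width equals the input dimension d,
# optimised by a basic global-best PSO with accuracy (ACC) as the KPI.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def snn_forward(w, X, d):
    """Shallow net: d inputs -> d sigmoid hidden units -> 1 sigmoid output."""
    W1 = w[:d * d].reshape(d, d)          # input-to-hidden weights
    b1 = w[d * d:d * d + d]               # hidden biases
    W2 = w[d * d + d:d * d + 2 * d]       # hidden-to-output weights
    b2 = w[-1]                            # output bias
    h = sigmoid(X @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def accuracy_kpi(w, X, y, d):
    """ACC as the KPI driving the search."""
    preds = (snn_forward(w, X, d) >= 0.5).astype(int)
    return np.mean(preds == y)

def pso_train(X, y, n_particles=30, iters=200, seed=0):
    """Plain global-best PSO over the SNN weight vector, maximising ACC."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    dim = d * d + 2 * d + 1               # total number of weights and biases
    pos = rng.normal(0.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_fit = np.array([accuracy_kpi(p, X, y, d) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([accuracy_kpi(p, X, y, d) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest, pbest_fit.max()

# XOR-like toy problem: 2-D inputs, so the hidden layer has 2 neurons,
# matching the abstract's suggestion that hidden width need not exceed
# the dimension of the problem space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
w, acc = pso_train(X, y)
```

A performance-driven BP variant would differ only in the inner loop, replacing the velocity/position updates with gradient steps on a KPI-linked loss; the KPI function and the network structure would stay the same.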
Related papers
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of recently proposed DPLL-based constraint DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs) and residual networks (ResNets)
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first and second performers of the VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z) - Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers with Reliability Guarantees [36.14499894307206]
Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics.
We introduce a novel delay-adaptive SNN-based inference methodology that provides guaranteed reliability for the decisions produced at input-dependent stopping times.
arXiv Detail & Related papers (2023-05-18T22:11:04Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - Enforcing Continuous Physical Symmetries in Deep Learning Network for
Solving Partial Differential Equations [3.6317085868198467]
We introduce a new method, symmetry-enhanced physics informed neural network (SPINN) where the invariant surface conditions induced by the Lie symmetries of PDEs are embedded into the loss function of PINN.
We show that SPINN performs better than PINN with fewer training points and simpler architecture of neural network.
arXiv Detail & Related papers (2022-06-19T00:44:22Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which employs Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural
Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs)
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired from the weighting strategy of the Adaboost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a deep neural network (DNN) to solve the AC optimal power flow (ACOPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - FSpiNN: An Optimization Framework for Memory- and Energy-Efficient
Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z) - Belief Propagation Neural Networks [103.97004780313105]
We introduce belief propagation neural networks (BPNNs)
BPNNs operate on factor graphs and generalize belief propagation (BP).
We show that BPNNs converge 1.7x faster on Ising models while providing tighter bounds.
On challenging model counting problems, BPNNs compute estimates 100's of times faster than state-of-the-art handcrafted methods.
arXiv Detail & Related papers (2020-07-01T07:39:51Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.