Neural networks for neurocomputing circuits: a computational study of tolerance to noise and activation function non-uniformity when machine learning materials properties
- URL: http://arxiv.org/abs/2510.17849v1
- Date: Mon, 13 Oct 2025 08:27:08 GMT
- Title: Neural networks for neurocomputing circuits: a computational study of tolerance to noise and activation function non-uniformity when machine learning materials properties
- Authors: Ye Min Thant, Methawee Nukunudompanich, Chu-Chen Chueh, Manabu Ihara, Sergei Manzhos
- Abstract summary: We present a study of the impact of circuit noise and NAF inhomogeneity as a function of NN architecture and training regime. We show that NNs generally possess low noise tolerance, with model accuracy degrading rapidly with noise level. We demonstrate that the effect of activation function inhomogeneity can be palliated by retraining the NN using the practically realized shapes of the NAFs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dedicated analog neurocomputing circuits are promising for high-throughput, low-power-consumption applications of machine learning (ML) and for applications where implementing a digital computer is unwieldy (remote locations; small, mobile, and autonomous devices; extreme conditions; etc.). Neural networks (NNs) implemented in such circuits, however, must contend with circuit noise and with non-uniform shapes of the neuron activation function (NAF) caused by the dispersion of the performance characteristics of circuit elements (such as the transistors or diodes implementing the neurons). We present a computational study of the impact of circuit noise and NAF inhomogeneity as a function of NN architecture and training regime. We focus on an application that requires high-throughput ML, materials informatics, using as representative problems the ML of formation energies vs. the lowest-energy isomer of peri-condensed hydrocarbons, of formation energies and band gaps of double perovskites, and of zero-point vibrational energies of molecules from the QM9 dataset. We show that NNs generally possess low noise tolerance, with model accuracy degrading rapidly with noise level. Single-hidden-layer NNs and NNs of larger-than-optimal size are somewhat more noise-tolerant. Models that show less overfitting (not necessarily the lowest test-set error) are more noise-tolerant. Importantly, we demonstrate that the effect of activation function inhomogeneity can be palliated by retraining the NN using the practically realized shapes of the NAFs.
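As a concrete illustration of the kind of experiment described in the abstract, the following is a minimal sketch (not the authors' code): it injects Gaussian noise into the hidden activations of a single-hidden-layer NN at evaluation time, then retrains with a distorted ("non-uniform") activation shape. The synthetic regression data, network size, noise levels, and the specific distortion are all illustrative assumptions standing in for the materials-informatics datasets used in the paper.

```python
# Sketch of a noise-tolerance scan and NAF-shape retraining; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task standing in for the materials datasets (formation energies, etc.).
X = rng.uniform(-1.0, 1.0, size=(500, 8))
y = np.sin(X.sum(axis=1, keepdims=True))

def act(z, distortion=0.0):
    """Sigmoid NAF; `distortion` crudely mimics a non-uniform activation shape."""
    return 1.0 / (1.0 + np.exp(-(1.0 + distortion) * z))

def forward(X, W1, b1, W2, b2, noise=0.0, distortion=0.0):
    """Single-hidden-layer NN with optional additive noise on the hidden activations."""
    h = act(X @ W1 + b1, distortion)
    if noise > 0.0:
        h = h + rng.normal(0.0, noise, size=h.shape)
    return h @ W2 + b2

def train(noise=0.0, distortion=0.0, hidden=32, epochs=3000, lr=0.1):
    """Plain full-batch gradient descent on mean-squared error."""
    W1 = rng.normal(0.0, 0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        s = act(X @ W1 + b1, distortion)  # clean activations, used for the gradient
        h = (s + rng.normal(0.0, noise, size=s.shape)) if noise > 0.0 else s
        err = (h @ W2 + b2) - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dz = (err @ W2.T) * s * (1.0 - s) * (1.0 + distortion)  # sigmoid derivative
        gW1 = X.T @ dz / len(X); gb1 = dz.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

params = train()  # train noise-free with the ideal NAF
for sigma in (0.0, 0.01, 0.05, 0.1):  # evaluate with activation noise injected
    rmse = np.sqrt(np.mean((forward(X, *params, noise=sigma) - y) ** 2))
    print(f"activation noise sigma={sigma}: RMSE={rmse:.4f}")

# "Retraining with realized NAF shapes": retrain using the distorted activation
# that the circuit actually implements, rather than the ideal one.
params_hw = train(distortion=0.3)
rmse_hw = np.sqrt(np.mean((forward(X, *params_hw, distortion=0.3) - y) ** 2))
print(f"retrained with distorted NAF: RMSE={rmse_hw:.4f}")
```

Evaluating the noise-free-trained model at increasing noise levels mirrors the noise-tolerance scans described in the abstract, while the final retraining step mirrors the reported palliation of NAF inhomogeneity by training with the practically realized activation shape.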
Related papers
- A Tensor Residual Circuit Neural Network Factorized with Matrix Product Operation [0.0]
We propose a novel tensor circuit neural network (TCNN) that takes advantage of the characteristics of tensor neural networks and residual circuit models. The proposed activation operation and the parallelism of the circuit in the complex-number field improve its non-linearity and efficiency for feature learning. Experimental results confirm that TCNN shows stronger generalization and robustness, with average accuracies on various datasets 2%-3% higher than those of the state-of-the-art models compared.
arXiv Detail & Related papers (2025-11-12T13:24:02Z) - Neuromorphic Quantum Neural Networks with Tunnel-Diode Activation Functions [0.0]
Tunnel diodes are well-known electronic components that utilise the physical effect of quantum tunnelling (QT). We propose using the current-voltage characteristic of a tunnel diode as a novel, physics-based activation function for neural networks (see the illustrative sketch after this list). We demonstrate that the tunnel-diode activation function (TDAF) outperforms traditional activation functions in terms of accuracy and loss during both training and evaluation.
arXiv Detail & Related papers (2025-03-06T21:14:23Z) - Noise-resistant adaptive Hamiltonian learning [30.632260870411177]
An adaptive Hamiltonian learning (AHL) model for data analysis and quantum state simulation is proposed to overcome problems such as low efficiency. A noise-resistant quantum neural network (RQNN) based on AHL is developed, which improves the noise robustness of the quantum neural network.
arXiv Detail & Related papers (2025-01-14T11:12:59Z) - Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
Neuromorphic computing uses spiking neural networks (SNNs) to perform inference tasks. Embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption. Split computing, where an SNN is partitioned across two devices, is a promising solution. This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - sVAD: A Robust, Low-Power, and Light-Weight Voice Activity Detection with Spiking Neural Networks [51.516451451719654]
Spiking Neural Networks (SNNs) are known to be biologically plausible and power-efficient.
This paper introduces a novel SNN-based Voice Activity Detection model, referred to as sVAD.
It provides effective auditory feature representation through SincNet and 1D convolution, and improves noise robustness with attention mechanisms.
arXiv Detail & Related papers (2024-03-09T02:55:44Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - Uncertainty quantification for noisy inputs-outputs in physics-informed neural networks and neural operators [2.07180164747172]
We introduce a Bayesian approach to quantify the uncertainty arising from noisy inputs and outputs in physics-informed neural networks (PINNs) and neural operators (NOs).
PINNs incorporate physics by including physics-informed terms via automatic differentiation, either in the loss function or the likelihood, and often take as input the spatial-temporal coordinate.
We show that this approach can be seamlessly integrated into PINNs and NOs, when they are employed to encode the physical information.
arXiv Detail & Related papers (2023-11-19T08:18:26Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Stochastic Domain Wall-Magnetic Tunnel Junction Artificial Neurons for Noise-Resilient Spiking Neural Networks [0.0]
We present a scaled DW-MTJ neuron with a voltage-dependent firing probability.
Validation accuracy during training was also shown to be comparable to that of an ideal integrate-and-fire device.
This work shows that DW-MTJ devices can be used to construct noise-resilient networks suitable for neuromorphic computing on the edge.
arXiv Detail & Related papers (2023-04-10T18:00:26Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z) - QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks [3.2242513084255036]
QUANOS is a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS).
Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness.
arXiv Detail & Related papers (2020-04-22T15:56:31Z)
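As an illustration of the tunnel-diode activation function idea mentioned in the related-papers entry above, here is a minimal sketch (not that paper's implementation): it builds an element-wise, non-monotonic activation from a simplified, textbook-style tunnel-diode I-V model. The functional form (tunnelling peak term plus thermal diode term) and all parameter values are illustrative assumptions.

```python
# Sketch only: a TDAF-like activation from a simplified tunnel-diode I-V model.
# The model form and parameters are illustrative, not taken from the cited paper.
import numpy as np

def tunnel_diode_iv(v, i_p=1.0, v_p=0.1, i_s=1e-3, v_th=0.13):
    """Simplified tunnel-diode current-voltage curve.

    i_p, v_p : peak current and peak voltage of the tunnelling component
    i_s, v_th: saturation current and effective thermal voltage of the diode component
    Their sum gives the characteristic N-shaped curve with a negative
    differential resistance region between the peak and the valley.
    """
    tunnelling = i_p * (v / v_p) * np.exp(1.0 - v / v_p)
    thermal = i_s * (np.exp(v / v_th) - 1.0)
    return tunnelling + thermal

def tdaf_like_activation(x):
    """Use the I-V curve as an activation: map inputs onto the 0-0.6 V region
    that contains the peak and the negative-differential-resistance section."""
    v = np.clip(0.1 * x, 0.0, 0.6)
    return tunnel_diode_iv(v)

if __name__ == "__main__":
    z = np.linspace(-2.0, 8.0, 11)
    print(np.round(tdaf_like_activation(z), 4))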
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.