On the Non-Associativity of Analog Computations
- URL: http://arxiv.org/abs/2309.14292v1
- Date: Mon, 25 Sep 2023 17:04:09 GMT
- Title: On the Non-Associativity of Analog Computations
- Authors: Lisa Kuhn and Bernhard Klein and Holger Fröning
- Abstract summary: In this work, we observe that the ordering of input operands of an analog operation also has an impact on the output result.
We conduct a simple test by creating a model of a real analog processor which captures such ordering effects.
The results prove the existence of ordering effects as well as their high impact, as neglecting ordering results in substantial accuracy drops.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The energy efficiency of analog forms of computing makes it one of the most
promising candidates to deploy resource-hungry machine learning tasks on
resource-constrained systems such as mobile or embedded devices. However, it is
well known that for analog computations the safety net of discretization is
missing; thus, all analog computations are exposed to a variety of imperfections
of the corresponding implementations. Examples include non-linearities, saturation
effects, and various forms of noise. In this work, we observe that the ordering
of input operands of an analog operation also has an impact on the output
result, which essentially makes analog computations non-associative, even
though the underlying operation might be mathematically associative. We conduct
a simple test by creating a model of a real analog processor which captures
such ordering effects. With this model we assess the importance of ordering by
comparing the test accuracy of a neural network for keyword spotting, which is
trained either on an ordered model or on a non-ordered variant, and evaluated on real
hardware. The results prove the existence of ordering effects as well as their
high impact, as neglecting ordering results in substantial accuracy drops.
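To make the non-associativity concrete, below is a minimal, hypothetical Python sketch, not the authors' processor model: a toy accumulator in which every partial sum passes through a saturation stage and picks up a small amount of noise, so a mathematically associative addition becomes order-dependent. The function name and parameters (`analog_accumulate`, `sat`, `noise_std`) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_accumulate(operands, sat=4.0, noise_std=0.01):
    # Toy order-dependent accumulator: each partial sum is squashed by a
    # soft saturation stage and perturbed by per-step device noise, so the
    # final value depends on the order in which operands arrive.
    acc = 0.0
    for x in operands:
        acc = sat * np.tanh((acc + x) / sat)  # saturation of the running sum
        acc += rng.normal(0.0, noise_std)     # per-step noise
    return acc

vals = [3.0, -2.5, 1.2, 0.7]
print(analog_accumulate(vals))                    # one operand ordering
print(analog_accumulate(list(reversed(vals))))    # reversed ordering yields a different result
```

Even with the noise term removed, the saturation of the running sum alone is enough to make the result depend on operand ordering, which mirrors the effect the paper attributes to real analog hardware.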
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose a more general dynamic network that can combine both quantization and early exit dynamic network: QuEE.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z) - Incrementally-Computable Neural Networks: Efficient Inference for
Dynamic Inputs [75.40636935415601]
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs.
We take an incremental computing approach, looking to reuse calculations as the inputs change.
We apply this approach to the transformers architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of modified inputs.
arXiv Detail & Related papers (2023-07-27T16:30:27Z) - VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet-Variational Counter Net, a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able to both generate predictions, and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z) - Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
arXiv Detail & Related papers (2022-12-12T15:06:32Z) - Impact of PolSAR pre-processing and balancing methods on complex-valued
neural networks segmentation tasks [9.6556424340252]
We investigate the semantic segmentation of Polarimetric Synthetic Aperture Radar (PolSAR) data using Complex-Valued Neural Networks (CVNNs).
We exhaustively compare both methods for six model architectures, three complex-valued, and their respective real-equivalent models.
We propose two methods for reducing this gap and present results for all input representations, models, and dataset pre-processing methods.
arXiv Detail & Related papers (2022-10-28T12:49:43Z) - Theory and Implementation of Process and Temperature Scalable
Shape-based CMOS Analog Circuits [6.548257506132353]
This work proposes a novel analog computing framework for designing an analog ML processor similar to that of a digital design.
At the core of our work lies shape-based analog computing (S-AC).
The S-AC paradigm also allows the user to trade off computational precision against silicon circuit area and power.
arXiv Detail & Related papers (2022-05-11T17:46:01Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - On the Accuracy of Analog Neural Network Inference Accelerators [0.9440010225411358]
Specialized accelerators have recently garnered attention as a method to reduce the power consumption of neural network inference.
This work shows how architectural design decisions, particularly in mapping neural network parameters to analog memory cells, influence inference accuracy.
arXiv Detail & Related papers (2021-09-03T01:38:11Z) - Analysis of Feature Representations for Anomalous Sound Detection [3.4782990087904597]
We evaluate the efficacy of pretrained neural networks as feature extractors for anomalous sound detection.
We leverage the knowledge that is contained in these neural networks to extract semantically rich features.
Our approach is evaluated on recordings from factory machinery such as valves, pumps, sliders and fans.
arXiv Detail & Related papers (2020-12-11T12:31:50Z) - Toward Scalable and Unified Example-based Explanation and Outlier
Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z) - Machine Learning to Tackle the Challenges of Transient and Soft Errors
in Complex Circuits [0.16311150636417257]
Machine learning models are used to predict accurate per-instance Functional De-Rating data for the full list of circuit instances.
The presented methodology is applied on a practical example and various machine learning models are evaluated and compared.
arXiv Detail & Related papers (2020-02-18T18:38:54Z)