Agreeing to Stop: Reliable Latency-Adaptive Decision Making via
Ensembles of Spiking Neural Networks
- URL: http://arxiv.org/abs/2310.16675v2
- Date: Sat, 16 Dec 2023 17:40:13 GMT
- Authors: Jiechen Chen, Sangwoo Park, and Osvaldo Simeone
- Abstract summary: Spiking neural networks (SNNs) are recurrent models that can leverage sparsity in input time series to efficiently carry out tasks such as classification.
We propose to enhance the uncertainty quantification capabilities of SNNs by implementing ensemble models, with the aim of improving the reliability of stopping decisions.
- Score: 36.14499894307206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) are recurrent models that can leverage
sparsity in input time series to efficiently carry out tasks such as
classification. Additional efficiency gains can be obtained if decisions are
taken as early as possible as a function of the complexity of the input time
series. The choice of when to stop inference and produce a decision must rely
on an estimate of the current accuracy of that decision. Prior work demonstrated
the use of conformal prediction (CP) as a principled way to quantify
uncertainty and support adaptive-latency decisions in SNNs. In this paper, we
propose to enhance the uncertainty quantification capabilities of SNNs by
implementing ensemble models for the purpose of improving the reliability of
stopping decisions. Intuitively, an ensemble of multiple models can decide when
to stop more reliably by selecting times at which most models agree that the
current accuracy level is sufficient. The proposed method relies on different
forms of information pooling from ensemble models, and offers theoretical
reliability guarantees. We specifically show that variational inference-based
ensembles with p-variable pooling significantly reduce the average latency of
state-of-the-art methods, while maintaining reliability guarantees.
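The pooling idea described in the abstract can be sketched in a few lines. The toy code below is an illustration only, not the paper's exact procedure: the function names, the use of the twice-the-average p-value merging rule (which remains a valid p-variable under arbitrary dependence), and the stopping threshold are all assumptions made here for clarity.

```python
import numpy as np

def conformal_p_value(score, cal_scores):
    """Conformal p-value of a test nonconformity score, given a held-out
    set of calibration nonconformity scores."""
    cal_scores = np.asarray(cal_scores)
    n = len(cal_scores)
    # Fraction of calibration scores at least as extreme as the test score.
    return (1 + np.sum(cal_scores >= score)) / (n + 1)

def pooled_stop(test_scores, cal_scores_per_model, alpha=0.1):
    """Pool per-model conformal p-values via the twice-the-average rule and
    stop once the pooled p-value clears the level alpha, i.e. once the
    ensemble agrees the current prediction is consistent with calibration."""
    ps = [conformal_p_value(s, c)
          for s, c in zip(test_scores, cal_scores_per_model)]
    pooled = min(1.0, 2.0 * float(np.mean(ps)))
    return pooled >= alpha, pooled
```

In this sketch, each ensemble member contributes one p-value per time step; pooling them with a valid merging rule is what preserves the overall reliability guarantee while letting the ensemble stop earlier than any single conservative model.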
Related papers
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- UPNet: Uncertainty-based Picking Deep Learning Network for Robust First Break Picking [6.380128763476294]
First break (FB) picking is a crucial aspect in the determination of subsurface velocity models.
Deep neural networks (DNNs) have been proposed to accelerate this processing.
We introduce uncertainty quantification into the FB picking task and propose a novel uncertainty-based deep learning network called UPNet.
arXiv Detail & Related papers (2023-05-23T08:13:09Z)
- Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers with Reliability Guarantees [36.14499894307206]
Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics.
We introduce a novel delay-adaptive SNN-based inference methodology that provides guaranteed reliability for the decisions produced at input-dependent stopping times.
arXiv Detail & Related papers (2023-05-18T22:11:04Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
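The Nadaraya-Watson estimate mentioned above has a compact form: weight each training label by a kernel similarity to the test point and normalize. The sketch below is a generic illustration of that estimator (the Gaussian kernel, bandwidth, and entropy-based uncertainty score are assumptions made here, not necessarily the paper's exact construction).

```python
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted vote of
    training labels, normalized to a probability vector."""
    d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances to x
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian kernel weights
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

def predictive_entropy(probs, eps=1e-12):
    """Entropy of the estimated label distribution as an uncertainty score:
    near 0 for confident predictions, large for near-uniform ones."""
    return -np.sum(probs * np.log(probs + eps))
```

A test point deep inside one class's region yields a near-one-hot distribution and low entropy; a point far from all training data drifts toward uniform and high entropy, which is the uncertainty signal.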
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs [29.84563789289183]
Uncertainty quantification is crucial for building reliable and trustable machine learning systems.
We propose to estimate uncertainty in recurrent neural networks (RNNs) via discrete state transitions over recurrent timesteps.
The uncertainty of the model can be quantified by running a prediction several times, each time sampling from the recurrent state transition distribution.
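The repeated-sampling idea above can be illustrated generically. The paper samples discrete recurrent state transitions; the sketch below instead treats the model as a black-box stochastic predictor and measures disagreement across sampled predictions (the function names and the frequency-based uncertainty score are illustrative assumptions, not the paper's method).

```python
import numpy as np

def sample_predictions(logits_fn, x, n_samples=20, rng=None):
    """Monte-Carlo uncertainty: run a stochastic model several times and
    report the majority prediction plus a disagreement-based uncertainty.

    logits_fn(x, rng) -> logit vector; randomness comes from rng."""
    rng = np.random.default_rng(rng)
    preds = [int(np.argmax(logits_fn(x, rng))) for _ in range(n_samples)]
    values, counts = np.unique(preds, return_counts=True)
    freqs = counts / n_samples
    # Majority class, and 1 - (majority frequency) as the uncertainty.
    return values[np.argmax(freqs)], 1.0 - freqs.max()
```

When the sampled runs all agree, the uncertainty is 0; when the stochastic transitions push the model toward different classes on different runs, the majority frequency drops and the uncertainty grows.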
arXiv Detail & Related papers (2020-11-24T10:35:28Z)
- Multi-Loss Sub-Ensembles for Accurate Classification with Uncertainty Estimation [1.2891210250935146]
We propose an efficient method for uncertainty estimation in deep neural networks (DNNs) achieving high accuracy.
We keep inference time relatively low by building on the Deep-Sub-Ensembles method.
Our results show improved accuracy on the classification task and competitive results on several uncertainty measures.
arXiv Detail & Related papers (2020-10-05T10:59:11Z) - Frequentist Uncertainty in Recurrent Neural Networks via Blockwise
Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.