Scalable and Efficient Methods for Uncertainty Estimation and Reduction
in Deep Learning
- URL: http://arxiv.org/abs/2401.07145v1
- Date: Sat, 13 Jan 2024 19:30:34 GMT
- Title: Scalable and Efficient Methods for Uncertainty Estimation and Reduction
in Deep Learning
- Authors: Soyed Tuhin Ahmed
- Abstract summary: This paper explores scalable and efficient methods for uncertainty estimation and reduction in deep learning.
We tackle the inherent uncertainties arising from out-of-distribution inputs and hardware non-idealities.
Our approach encompasses problem-aware training algorithms, novel NN topologies, and hardware co-design solutions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural networks (NNs) have achieved high performance in various
fields such as computer vision and natural language processing. However,
deploying NNs in resource-constrained safety-critical systems is challenging
due to uncertainty in their predictions caused by out-of-distribution data and
hardware non-idealities. To address the challenges of deploying NNs in
resource-constrained safety-critical systems, this paper summarizes the (4th
year) PhD thesis work that explores scalable and efficient methods for
uncertainty estimation and reduction in deep learning, with a focus on
Computation-in-Memory (CIM) using emerging resistive non-volatile memories. We
tackle the inherent uncertainties arising from out-of-distribution inputs and
hardware non-idealities, crucial in maintaining functional safety in automated
decision-making systems. Our approach encompasses problem-aware training
algorithms, novel NN topologies, and hardware co-design solutions, including
dropout-based binary Bayesian Neural Networks leveraging spintronic
devices and variational inference techniques. These innovations significantly
enhance OOD data detection, inference accuracy, and energy efficiency, thereby
contributing to the reliability and robustness of NN implementations.
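As a concrete illustration of the dropout-based Bayesian uncertainty estimation the abstract refers to, the sketch below shows generic Monte-Carlo dropout on a toy NumPy network. The network, weights, and dropout rate are illustrative assumptions, not the thesis's binary/spintronic implementation: keeping dropout active at inference and averaging stochastic forward passes yields a predictive mean, with the spread across passes serving as an uncertainty proxy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP with fixed random weights (stand-in for a trained network).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5, stochastic=True):
    """One forward pass with dropout kept active at inference time."""
    h = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
    if stochastic:
        mask = rng.random(h.shape) >= p_drop  # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    """Monte-Carlo dropout: mean as prediction, std as an uncertainty proxy."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
```

In an OOD-detection setting, inputs whose predictive spread exceeds a calibrated threshold would be flagged rather than acted upon.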
Related papers
- Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective [1.474723404975345]
This paper delves into robustness assessment of embedded Deep Neural Networks (DNNs).
By scrutinizing the layer-by-layer and bit-by-bit sensitivity of various encoder-decoder models to soft errors, this study thoroughly investigates the vulnerability of segmentation DNNs to SEUs.
We propose a set of practical lightweight error mitigation techniques with no memory or computational cost suitable for resource-constrained deployments.
arXiv Detail & Related papers (2024-12-04T18:28:38Z) - Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomaly and missing data constitute a thorny problem in industrial applications.
Deep learning enabled anomaly detection has emerged as a critical direction.
The data collected in edge devices contain user privacy.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Edge AI Collaborative Learning: Bayesian Approaches to Uncertainty Estimation [0.0]
We focus on determining confidence levels in learning outcomes considering the spatial variability of data encountered by independent agents.
We implement a 3D environment simulation using the Webots platform to simulate collaborative mapping tasks.
Experiments demonstrate that BNNs can effectively support uncertainty estimation in a distributed learning context.
arXiv Detail & Related papers (2024-10-11T09:20:16Z) - Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
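The bound propagation underlying such FV tools can be sketched with simple interval arithmetic: push a box of inputs through each affine layer and activation, yielding sound (if loose) output bounds. This is a minimal generic sketch, not the tool described in the paper.

```python
import numpy as np

def affine_bounds(l, u, W, b):
    """Propagate an input interval [l, u] through x @ W + b.

    Positive weights map lower bounds to lower bounds; negative
    weights swap them, so the bounds are split accordingly.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_l = l @ W_pos + u @ W_neg + b
    new_u = u @ W_pos + l @ W_neg + b
    return new_l, new_u

def relu_bounds(l, u):
    """ReLU is monotone, so it applies elementwise to both bounds."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

For the unit box through W = [[1, -1], [2, 0]], the first output x1 + 2*x2 lies in [0, 3] and the second, -x1, in [-1, 0], which the propagated bounds recover exactly for a single layer.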
arXiv Detail & Related papers (2023-12-10T13:51:25Z) - Efficient Uncertainty Quantification and Reduction for Over-Parameterized Neural Networks [23.7125322065694]
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models.
We create statistically guaranteed schemes to principally characterize, and remove, the uncertainty of over-parameterized neural networks.
In particular, our approach, based on what we call a procedural-noise-correcting (PNC) predictor, removes the procedural uncertainty by using only one auxiliary network that is trained on a suitably labeled dataset.
arXiv Detail & Related papers (2023-06-09T05:15:53Z) - APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors [1.1091582432763736]
Deep Neural Networks (DNNs) in safety-critical applications raise new reliability concerns.
State-of-the-art methods for fault injection by emulation incur a spectrum of time-, design- and control-complexity problems.
APPRAISER is proposed that applies functional approximation for a non-conventional purpose and employs approximate computing errors.
arXiv Detail & Related papers (2023-05-31T10:53:46Z) - Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
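Evidential regression of this kind typically predicts Normal-Inverse-Gamma parameters and reads off aleatoric and epistemic uncertainty in closed form. The sketch below shows those standard moment formulas under the usual NIG parameterization; the function and parameter names are illustrative, not this paper's code.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Split uncertainty from Normal-Inverse-Gamma evidential parameters.

    gamma: predicted mean; nu: virtual observation count for the mean;
    alpha, beta: Inverse-Gamma shape/scale for the variance (requires alpha > 1).
    """
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: irreducible data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model uncertainty
    return gamma, aleatoric, epistemic
```

Note that epistemic uncertainty shrinks as nu (the evidence) grows, while aleatoric uncertainty does not, which is the separation the critique above examines.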
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework [13.573645522781712]
Deep neural networks (DNNs) and spiking neural networks (SNNs) offer state-of-the-art results on resource-constrained edge devices.
These systems are required to maintain correct functionality under diverse security and reliability threats.
This paper first discusses existing approaches to address energy efficiency, reliability, and security issues at different system layers.
arXiv Detail & Related papers (2021-09-20T20:22:56Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.