Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks
- URL: http://arxiv.org/abs/2308.00231v1
- Date: Tue, 1 Aug 2023 02:07:47 GMT
- Title: Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks
- Authors: Sadhana Lolla, Iaroslav Elistratov, Alejandro Perez, Elaheh Ahmadi,
Daniela Rus, Alexander Amini
- Abstract summary: Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
- Score: 142.67349734180445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The modern pervasiveness of large-scale deep neural networks (NNs) is driven
by their extraordinary performance on complex problems but is also plagued by
their sudden, unexpected, and often catastrophic failures, particularly in
challenging scenarios. Existing algorithms that provide risk-awareness to NNs
are complex and ad-hoc. Specifically, these methods require significant
engineering changes, are often developed only for particular settings, and are
not easily composable. Here we present capsa, a framework for extending models
with risk-awareness. Capsa provides a methodology for quantifying multiple
forms of risk and for composing different algorithms to quantify multiple
risk metrics in parallel. We validate capsa by implementing state-of-the-art
uncertainty estimation algorithms within the capsa framework and benchmarking
them on complex perception datasets. We demonstrate capsa's ability to easily
compose aleatoric uncertainty, epistemic uncertainty, and bias estimation
together in a single procedure, and show how this approach provides a
comprehensive awareness of NN risk.
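To make the idea of composable risk estimation concrete, the following is a minimal PyTorch sketch of how a backbone model might be extended with an aleatoric head (mean-variance estimation) and an epistemic estimate (Monte Carlo dropout). The names MVEHead and epistemic_mc_dropout are hypothetical illustrations, not the capsa API, and bias estimation is omitted for brevity.

```python
# Hypothetical sketch of wrapper-style risk composition; NOT the real capsa API.
import torch
import torch.nn as nn

class MVEHead(nn.Module):
    """Aleatoric uncertainty: predict a mean and a log-variance per input."""
    def __init__(self, backbone: nn.Module, feat_dim: int, out_dim: int):
        super().__init__()
        self.backbone = backbone
        self.mu = nn.Linear(feat_dim, out_dim)
        self.log_var = nn.Linear(feat_dim, out_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.log_var(h)

def epistemic_mc_dropout(model: nn.Module, x, n_samples: int = 16):
    """Epistemic uncertainty: variance across stochastic forward passes."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x)[0] for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

if __name__ == "__main__":
    backbone = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.1))
    model = MVEHead(backbone, feat_dim=32, out_dim=1)
    x = torch.randn(4, 8)
    mu, log_var = model(x)                          # aleatoric: per-input noise estimate
    mean, epi_var = epistemic_mc_dropout(model, x)  # epistemic: model uncertainty
    print("aleatoric var:", log_var.exp().detach().numpy().ravel())
    print("epistemic var:", epi_var.numpy().ravel())
```

The abstract's claim is that capsa automates exactly this kind of wiring, so that several such risk metrics can be obtained from one wrapped model in parallel rather than being composed by hand.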
Related papers
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
arXiv Detail & Related papers (2024-08-12T15:02:26Z)
- Echoes of Socratic Doubt: Embracing Uncertainty in Calibrated Evidential Reinforcement Learning [1.7898305876314982]
The proposed algorithm combines deep evidential learning with quantile calibration based on principles of conformal inference.
It is tested on a suite of miniaturized Atari games (i.e., MinAtar).
arXiv Detail & Related papers (2024-02-11T05:17:56Z)
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [73.5095051707364]
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z)
- It begins with a boundary: A geometric view on probabilistically robust learning [6.877576704011329]
We take a fresh and geometric view on one such method -- Probabilistically Robust Learning (PRL).
We prove existence of solutions to the original and modified problems using novel relaxation methods.
We also clarify, through a suitable $\Gamma$-convergence analysis, the way in which the original and modified PRL models interpolate between risk minimization and adversarial training.
arXiv Detail & Related papers (2023-05-30T06:24:30Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Towards the Quantification of Safety Risks in Deep Neural Networks [9.161046484753841]
In this paper, we define safety risks by requiring that the network's decision align with human perception.
For the quantification of risks, we take the maximum radius of safe norm balls within which no safety risk exists.
In addition to the known adversarial, reachability, and invariant examples, we identify a new class of risk: the uncertainty example.
arXiv Detail & Related papers (2020-09-13T23:30:09Z)
- Entropic Risk Constrained Soft-Robust Policy Optimization [12.362670630646805]
It is important in high-stakes domains to quantify and manage risk induced by model uncertainties.
We propose entropic risk constrained policy gradient and actor-critic algorithms that are risk-averse to model uncertainty.
arXiv Detail & Related papers (2020-06-20T23:48:28Z)
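As a concrete reference for the entropic risk measure mentioned in the last entry, below is a small self-contained sketch (the generic textbook definition, not code from the cited paper): for a random cost C and risk-aversion parameter beta > 0, the entropic risk is (1/beta) * log E[exp(beta * C)], which interpolates between the mean cost as beta goes to 0 and the worst case as beta grows.

```python
# Toy illustration of the entropic risk measure rho_beta(C) = (1/beta) * log E[exp(beta*C)].
# Generic definition only; not taken from the cited paper.
import numpy as np

def entropic_risk(costs: np.ndarray, beta: float) -> float:
    """Empirical entropic risk of sampled costs; larger beta penalizes bad (high-cost) outcomes more."""
    m = beta * np.asarray(costs, dtype=float)
    m_max = m.max()  # log-sum-exp shift for numerical stability
    return (m_max + np.log(np.mean(np.exp(m - m_max)))) / beta

rng = np.random.default_rng(0)
costs = rng.normal(loc=1.0, scale=0.5, size=10_000)      # sampled costs, e.g. from rollouts
print("mean cost        :", costs.mean())                # risk-neutral objective
print("entropic, beta=1 :", entropic_risk(costs, 1.0))   # mildly risk-averse
print("entropic, beta=5 :", entropic_risk(costs, 5.0))   # strongly risk-averse
```

For Gaussian costs with mean mu and variance sigma^2 the measure evaluates to mu + beta * sigma^2 / 2, which the sample estimates above approximate.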