Assessing the Reliability of Deep Learning Classifiers Through
Robustness Evaluation and Operational Profiles
- URL: http://arxiv.org/abs/2106.01258v1
- Date: Wed, 2 Jun 2021 16:10:46 GMT
- Title: Assessing the Reliability of Deep Learning Classifiers Through
Robustness Evaluation and Operational Profiles
- Authors: Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven
Schewe, Xiaowei Huang
- Abstract summary: We present a model-agnostic reliability assessment method for Deep Learning (DL) classifiers.
We partition the input space into small cells and then "assemble" their robustness (to the ground truth) according to the operational profile (OP) of a given application.
Reliability estimates in terms of the probability of misclassification per input (pmi) can be derived together with confidence levels.
- Score: 13.31639740011618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The utilisation of Deep Learning (DL) is advancing into increasingly more
sophisticated applications. While it shows great potential to provide
transformational capabilities, DL also raises new challenges regarding its
reliability in critical functions. In this paper, we present a model-agnostic
reliability assessment method for DL classifiers, based on evidence from
robustness evaluation and the operational profile (OP) of a given application.
We partition the input space into small cells and then "assemble" their
robustness (to the ground truth) according to the OP, where estimators on the
cells' robustness and OPs are provided. Reliability estimates in terms of the
probability of misclassification per input (pmi) can be derived together with
confidence levels. A prototype tool is demonstrated with simplified case
studies. Model assumptions and extension to real-world applications are also
discussed. While our model readily exposes the inherent difficulties of
assessing DL dependability (e.g. the lack of data with ground truth and
scalability issues), we provide preliminary, if compromised, solutions to advance
this research direction.
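To make the "assembly" step concrete, the sketch below shows how a pmi point estimate can be read as an OP-weighted sum of per-cell unrobustness. This is a minimal, hypothetical illustration rather than the authors' prototype tool: the names assemble_pmi, op_weights and unrobustness are invented for exposition, the per-cell robustness and OP estimators are assumed to have been run already, and the paper's confidence-level calculation is omitted.

import numpy as np

def assemble_pmi(op_weights, unrobustness):
    """Combine per-cell evidence into a point estimate of pmi.

    op_weights[i]   -- estimated probability that an operational input lands in cell i
    unrobustness[i] -- estimated probability that inputs in cell i are misclassified
                       with respect to the ground truth
    """
    op_weights = np.asarray(op_weights, dtype=float)
    unrobustness = np.asarray(unrobustness, dtype=float)
    assert np.isclose(op_weights.sum(), 1.0), "OP weights should form a probability distribution"
    # Each cell contributes to the overall probability of misclassification per
    # input in proportion to how often the OP says that cell is visited.
    return float(np.sum(op_weights * unrobustness))

# Toy usage with three cells and made-up numbers, purely for illustration.
pmi_estimate = assemble_pmi([0.7, 0.2, 0.1], [0.001, 0.01, 0.05])
print(f"estimated pmi = {pmi_estimate:.4f}")  # 0.7*0.001 + 0.2*0.01 + 0.1*0.05 = 0.0077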
Related papers
- Explainability of Point Cloud Neural Networks Using SMILE: Statistical Model-Agnostic Interpretability with Local Explanations [0.0]
This study explores the implementation of SMILE, a novel explainability method originally designed for deep neural networks, on point cloud-based models.
The approach demonstrates superior performance in terms of fidelity loss, R2 scores, and robustness across various kernel widths, perturbation numbers, and clustering configurations.
The study further identifies dataset biases in the classification of the 'person' category, emphasizing the necessity for more comprehensive datasets in safety-critical applications.
arXiv Detail & Related papers (2024-10-20T12:13:59Z)
- SeMOPO: Learning High-quality Model and Policy from Low-quality Offline Visual Datasets [32.496818080222646]
We propose a new approach to model-based offline reinforcement learning.
We provide a theoretical guarantee on model uncertainty and a performance bound for SeMOPO.
Experimental results show that our method substantially outperforms all baseline methods.
arXiv Detail & Related papers (2024-06-13T15:16:38Z)
- Towards Precise Observations of Neural Model Robustness in Classification [2.127049691404299]
In deep learning applications, robustness measures the ability of neural models to handle slight changes in input data.
Our approach contributes to a deeper understanding of model robustness in safety-critical applications.
arXiv Detail & Related papers (2024-04-25T09:37:44Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to adversarial perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Improving the Adversarial Robustness of NLP Models by Information Bottleneck [112.44039792098579]
Non-robust features can be easily manipulated by adversaries to fool NLP models.
In this study, we explore the feasibility of capturing task-specific robust features, while eliminating the non-robust ones by using the information bottleneck theory.
We show that the models trained with our information bottleneck-based method are able to achieve a significant improvement in robust accuracy.
arXiv Detail & Related papers (2022-06-11T12:12:20Z)
- A Survey on Uncertainty Toolkits for Deep Learning [3.113304966059062]
We present the first survey on toolkits for uncertainty estimation in deep learning (DL).
We investigate 11 toolkits with respect to modeling and evaluation capabilities.
Of the toolkits compared in detail, the first two provide a large degree of flexibility and seamless integration into their respective frameworks, while the last one has the larger methodological scope.
arXiv Detail & Related papers (2022-05-02T17:23:06Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt a post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)