A Review of Global Sensitivity Analysis Methods and a comparative case study on Digit Classification
- URL: http://arxiv.org/abs/2406.16975v1
- Date: Sun, 23 Jun 2024 00:38:19 GMT
- Title: A Review of Global Sensitivity Analysis Methods and a comparative case study on Digit Classification
- Authors: Zahra Sadeghi, Stan Matwin
- Abstract summary: Global sensitivity analysis (GSA) aims to detect influential input factors that lead a model to arrive at a certain decision.
We provide a comprehensive review and comparison of global sensitivity analysis methods.
- Score: 5.458813674116228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Global sensitivity analysis (GSA) aims to detect influential input factors that lead a model to arrive at a certain decision and is a significant approach for mitigating the computational burden of processing high dimensional data. In this paper, we provide a comprehensive review and a comparison of global sensitivity analysis methods. Additionally, we propose a methodology for evaluating the efficacy of these methods by conducting a case study on the MNIST digit dataset. Our study goes through the underlying mechanism of widely used GSA methods and highlights their efficacy through a comprehensive methodology.
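To make the family of methods concrete, below is a minimal, illustrative sketch (not code from the paper) of one widely used variance-based GSA technique: first-order Sobol indices estimated with the Jansen Monte Carlo estimator on the classic Ishigami test function. For an image-classification case study in the spirit of the paper, `f` would instead be a trained classifier's output score and the input factors would be (groups of) pixels.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    # Classic GSA benchmark: f(x) = sin(x1) + a*sin(x2)^2 + b*x3^4*sin(x1),
    # with inputs uniform on [-pi, pi]^3.
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

def first_order_sobol(f, d, n=50_000, seed=0):
    # Monte Carlo estimate of first-order Sobol indices
    # S_i = Var[E[f | x_i]] / Var[f], using the Jansen estimator.
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i swapped in from B
        # Jansen (1999): V_i = Var[f] - mean((f(B) - f(AB_i))^2) / 2
        S[i] = 1.0 - np.mean((fB - f(ABi)) ** 2) / (2.0 * var)
    return S

# Analytic indices for a=7, b=0.1 are roughly [0.31, 0.44, 0.00].
print(first_order_sobol(ishigami, d=3))
```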
Related papers
- Comprehensive Review and Empirical Evaluation of Causal Discovery Algorithms for Numerical Data [3.9523536371670045]
Causal analysis has become an essential component in understanding the underlying causes of phenomena across various fields.
Existing literature on causal discovery algorithms is fragmented, with inconsistent methodologies.
Comprehensive evaluations are also lacking: data characteristics are often ignored rather than jointly analyzed when benchmarking algorithms.
arXiv Detail & Related papers (2024-07-17T23:47:05Z)
- Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes [70.66864668709677]
We consider the problem of active learning for global sensitivity analysis of expensive black-box functions.
Since function evaluations are expensive, we use active learning to prioritize experimental resources where they yield the most value.
We propose novel active learning acquisition functions that directly target key quantities of derivative-based global sensitivity measures (see the DGSM sketch after this list).
arXiv Detail & Related papers (2024-07-13T01:41:12Z)
- Key Design Choices in Source-Free Unsupervised Domain Adaptation: An In-depth Empirical Analysis [16.0130560365211]
This study provides a benchmark framework for Source-Free Unsupervised Domain Adaptation (SF-UDA) in image classification.
The study empirically examines a diverse set of SF-UDA techniques, assessing their consistency across datasets.
It exhaustively evaluates pre-training datasets and strategies, particularly focusing on both supervised and self-supervised methods.
arXiv Detail & Related papers (2024-02-25T13:37:36Z)
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, which is a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study [0.0]
The FAIR Guiding Principles aim to improve the findability, accessibility, interoperability, and reusability of digital content by making it both human- and machine-actionable.
These principles have not yet been broadly adopted in the domain of machine learning-based program analyses and optimizations for High-Performance Computing.
We design a methodology to make HPC datasets and machine learning models FAIR after investigating existing FAIRness assessment and improvement techniques.
arXiv Detail & Related papers (2022-11-03T18:45:46Z)
- On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization (DG) aims to learn models whose performance remains high on unseen domains encountered at test time.
It is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z)
- Safe Exploration for Efficient Policy Evaluation and Comparison [20.97686379166058]
We study efficient and safe data collection for bandit policy evaluation.
For each variant of the problem, we analyze its statistical properties, derive the corresponding exploration policy, and design an efficient algorithm for computing it.
arXiv Detail & Related papers (2022-02-26T21:41:44Z)
- MUC-driven Feature Importance Measurement and Adversarial Analysis for Random Forest [1.5896078006029473]
We leverage formal methods and logical reasoning to develop a novel model-specific method for explaining the predictions of Random Forest (RF) models.
Our approach is centered around Minimal Unsatisfiable Cores (MUC) and provides a comprehensive solution for feature importance, covering local and global aspects, and adversarial sample analysis.
Our method can produce a user-centered report, which helps provide recommendations in real-life applications.
arXiv Detail & Related papers (2022-02-25T06:15:47Z)
- Reinforcement Learning with Heterogeneous Data: Estimation and Inference [84.72174994749305]
We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
arXiv Detail & Related papers (2022-01-31T20:58:47Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition about the effectiveness of the framework through a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
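As referenced in the active learning entry above, derivative-based global sensitivity measures (DGSMs) score an input by nu_i = E[(df/dx_i)^2]. The sketch below is illustrative only: the toy function, kernel, and sample sizes are assumptions, and the paper's acquisition functions and active learning loop are not reproduced. It estimates DGSMs from a Gaussian process surrogate via central finite differences on the posterior mean.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dgsm_from_gp(gp, d, bounds, n=2048, eps=1e-3, seed=0):
    # Estimate nu_i = E[(df/dx_i)^2] from the GP posterior mean,
    # using central finite differences at Monte Carlo input samples.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n, d))
    nu = np.empty(d)
    for i in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, i] += eps
        Xm[:, i] -= eps
        grad_i = (gp.predict(Xp) - gp.predict(Xm)) / (2 * eps)
        nu[i] = np.mean(grad_i ** 2)
    return nu

# Hypothetical black-box: x0 dominates, x1 is weaker, x2 is inert.
def f(X):
    return np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(60, 3))
y_train = f(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Expect nu_0 >> nu_1 > nu_2, with nu_2 near zero.
print(dgsm_from_gp(gp, d=3, bounds=(0.0, 1.0)))
```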