Truthful Meta-Explanations for Local Interpretability of Machine
Learning Models
- URL: http://arxiv.org/abs/2212.03513v1
- Date: Wed, 7 Dec 2022 08:32:04 GMT
- Title: Truthful Meta-Explanations for Local Interpretability of Machine
Learning Models
- Authors: Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
- Abstract summary: We present a local meta-explanation technique which builds on top of the truthfulness metric, which is a faithfulness-based metric.
We demonstrate the effectiveness of both the technique and the metric by concretely defining all the concepts and through experimentation.
- Score: 10.342433824178825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated Machine Learning-based systems' integration into a wide range of
tasks has expanded as a result of their performance and speed. Although there
are numerous advantages to employing ML-based systems, if they are not
interpretable, they should not be used in critical, high-risk applications
where human lives are at risk. To address this issue, researchers and
businesses have been focusing on finding ways to improve the interpretability
of complex ML systems, and several such methods have been developed. Indeed,
there are so many developed techniques that it is difficult for practitioners
to choose the best among them for their applications, even when using
evaluation metrics. As a result, the demand for a selection tool, a
meta-explanation technique based on a high-quality evaluation metric, is
apparent. In this paper, we present a local meta-explanation technique which
builds on top of the truthfulness metric, which is a faithfulness-based metric.
We demonstrate the effectiveness of both the technique and the metric by
concretely defining all the concepts and through experimentation.
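The abstract does not give the formal definition of the truthfulness metric. As a rough illustration of a faithfulness-based evaluation in the same spirit (not the paper's actual definition), the sketch below perturbs features in the order a local explanation ranks them and checks whether the claimed importances correlate with the observed changes in the model's output. The function name, baseline-replacement scheme, and correlation-based scoring are all illustrative assumptions.

```python
import numpy as np

def faithfulness_score(model_predict, x, importances, baseline=0.0):
    """Illustrative faithfulness-style check (NOT the paper's metric):
    remove features one at a time in order of claimed |importance|
    (replacing them with a baseline value) and correlate the claimed
    importance with the observed drop in the model's output."""
    order = np.argsort(-np.abs(importances))
    base_pred = model_predict(x)
    drops = []
    for i in order:
        x_pert = x.copy()
        x_pert[i] = baseline          # ablate one feature
        drops.append(base_pred - model_predict(x_pert))
    # Pearson correlation between claimed importance and observed effect
    return float(np.corrcoef(importances[order], drops)[0, 1])

# Toy linear model: each feature's true effect is weight * value,
# so an explanation equal to w * x should be perfectly faithful.
w = np.array([3.0, -1.0, 0.5])
x = np.array([1.0, 2.0, 4.0])
model = lambda v: float(v @ w)
explanation = w * x
score = faithfulness_score(model, x, explanation)  # close to 1.0
```

For the linear model with a zero baseline, ablating feature i changes the output by exactly w[i] * x[i], so a w * x explanation scores 1.0; a misleading explanation would score lower.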
Related papers
- Towards Trustworthy Machine Learning in Production: An Overview of the Robustness in MLOps Approach [0.0]
In recent years, AI researchers and practitioners have introduced principles and guidelines to build systems that make reliable and trustworthy decisions.
In practice, a fundamental challenge arises when the system needs to be operationalized and deployed to evolve and operate in real-life environments continuously.
To address this challenge, Machine Learning Operations (MLOps) have emerged as a potential recipe for standardizing ML solutions in deployment.
arXiv Detail & Related papers (2024-10-28T09:34:08Z)
- A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and their ability to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z)
- Causal Inference Tools for a Better Evaluation of Machine Learning [0.0]
We introduce key statistical methods such as Ordinary Least Squares (OLS) regression, Analysis of Variance (ANOVA) and logistic regression.
The document serves as a guide for researchers and practitioners, detailing how these techniques can provide deeper insights into model behavior, performance, and fairness.
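As a sketch of the OLS idea mentioned in the item above (invented example, not code from the surveyed paper), the snippet below estimates the effect of a hypothetical binary configuration on model accuracy scores with a plain least-squares fit; the variable names and data are assumptions for illustration.

```python
import numpy as np

# Hypothetical evaluation: do runs with configuration B (encoded 1)
# score higher than runs with configuration A (encoded 0)?
rng = np.random.default_rng(0)
setting = np.repeat([0.0, 1.0], 20)                    # 20 runs per config
accuracy = 0.80 + 0.05 * setting + rng.normal(0, 0.01, 40)

# OLS with an intercept column: accuracy ~ b0 + b1 * setting
X = np.column_stack([np.ones_like(setting), setting])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
effect = beta[1]   # estimated accuracy gain from the setting (~0.05)
```

With a binary regressor, the OLS slope is simply the difference in group means, which is why such regressions give the same answer as a two-group comparison while extending naturally to more covariates.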
arXiv Detail & Related papers (2024-10-02T10:03:29Z)
- Benchmarks as Microscopes: A Call for Model Metrology [76.64402390208576]
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- Between Randomness and Arbitrariness: Some Lessons for Reliable Machine Learning at Scale [2.50194939587674]
The dissertation quantifies and mitigates sources of arbitrariness in ML, and of randomness in uncertainty estimation and optimization algorithms, in order to achieve scalability without sacrificing reliability.
The dissertation serves as empirical proof by example that research on reliable measurement for machine learning is intimately bound up with research in law and policy.
arXiv Detail & Related papers (2024-06-13T19:29:37Z)
- Classification Performance Metric Elicitation and its Applications [5.5637552942511155]
Despite its practical interest, there is limited formal guidance on how to select metrics for machine learning applications.
This thesis outlines metric elicitation as a principled framework for selecting the performance metric that best reflects implicit user preferences.
arXiv Detail & Related papers (2022-08-19T03:57:17Z)
- The Benchmark Lottery [114.43978017484893]
"A benchmark lottery" describes the overall fragility of the machine learning benchmarking process.
We show that the relative performance of algorithms may be altered significantly simply by choosing different benchmark tasks.
arXiv Detail & Related papers (2021-07-14T21:08:30Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with performance comparable to that of smaller-scale approaches.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.