Pitfalls of Explainable ML: An Industry Perspective
- URL: http://arxiv.org/abs/2106.07758v1
- Date: Mon, 14 Jun 2021 21:05:05 GMT
- Title: Pitfalls of Explainable ML: An Industry Perspective
- Authors: Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
- Abstract summary: Explanations sit at the core of desirable attributes of a machine learning (ML) system.
The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders.
- Score: 29.49574255183219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning (ML) systems take a more prominent and central role in
contributing to life-impacting decisions, ensuring their trustworthiness and
accountability is of utmost importance. Explanations sit at the core of these
desirable attributes of an ML system. The emerging field is frequently called
"Explainable AI (XAI)" or "Explainable ML." The goal of explainable ML is
to intuitively explain the predictions of an ML system while adhering to the
needs of various stakeholders. Many explanation techniques have been developed with
contributions from both academia and industry. However, there are several
existing challenges that have not garnered enough interest and serve as
roadblocks to widespread adoption of explainable ML. In this short paper, we
enumerate challenges in explainable ML from an industry perspective. We hope
these challenges will serve as promising future research directions, and would
contribute to democratizing explainable ML.
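To ground what an "explanation technique" looks like in practice, here is a minimal sketch (illustrative only, not a method from this paper) that attributes a model's predictions to input features via scikit-learn's permutation importance; the dataset and model are placeholders:

```python
# Minimal sketch of a post-hoc explanation technique (feature attribution).
# Illustrative only -- the paper surveys challenges, not this specific method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.4f}")
```

Perturbation-based global attributions like this are one family; local methods (e.g., LIME, SHAP) and counterfactual explanations serve different stakeholder needs.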
Related papers
- Efficient Multimodal Large Language Models: A Survey [60.7614299984182]
Multimodal Large Language Models (MLLMs) have demonstrated remarkable performance in tasks such as visual question answering, visual understanding and reasoning.
The extensive model size and high training and inference costs have hindered the widespread application of MLLMs in academia and industry.
This survey provides a comprehensive and systematic review of the current state of efficient MLLMs.
arXiv Detail & Related papers (2024-05-17T12:37:10Z)
- Exploring Perceptual Limitation of Multimodal Large Language Models [57.567868157293994]
We quantitatively study the perception of small visual objects in several state-of-the-art MLLMs.
We identify four independent factors that can contribute to this limitation.
Lower object quality and smaller object size can both independently reduce MLLMs' ability to answer visual questions.
arXiv Detail & Related papers (2024-02-12T03:04:42Z)
- Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be communicated to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation; a toy sketch of such a distortion follows this entry.
It is imperative to conduct vulnerability assessments for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z) - MLCopilot: Unleashing the Power of Large Language Models in Solving
- MLCopilot: Unleashing the Power of Large Language Models in Solving Machine Learning Tasks [31.733088105662876]
We aim to bridge the gap between machine intelligence and human knowledge by introducing a novel framework.
We showcase the possibility of extending the capability of LLMs to comprehend structured inputs and perform thorough reasoning for solving novel ML tasks.
arXiv Detail & Related papers (2023-04-28T17:03:57Z)
- Interpretability and accessibility of machine learning in selected food processing, agriculture and health applications [0.0]
Lack of interpretability of ML-based systems is a major hindrance to the widespread adoption of these powerful algorithms.
New techniques are emerging to improve ML accessibility through automated model design.
This paper provides a review of the work done to improve interpretability and accessibility of machine learning in the context of global problems.
arXiv Detail & Related papers (2022-11-30T02:44:13Z)
- The Role of Machine Learning in Cybersecurity [1.6932802756478726]
Deployment of Machine Learning in cybersecurity is still at an early stage, revealing a significant discrepancy between research and practice.
This paper is the first attempt to provide a holistic understanding of the role of ML in the entire cybersecurity domain.
We highlight the advantages of ML over human-driven detection methods, as well as the additional tasks that ML can address in cybersecurity; a minimal detection sketch follows this entry.
arXiv Detail & Related papers (2022-06-20T10:56:08Z)
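As a hedged illustration of ML-based detection versus hand-written signatures (not code from the survey; the "traffic" features are synthetic placeholders), an unsupervised anomaly detector can flag unusual records without explicit rules:

```python
# Minimal sketch: unsupervised anomaly detection on synthetic "network" features.
# Data and dimensions are made up; real deployments use flow/log features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # baseline traffic
attacks = rng.normal(loc=6.0, scale=1.0, size=(20, 4))   # out-of-distribution events
traffic = np.vstack([normal, attacks])

detector = IsolationForest(contamination=0.02, random_state=0).fit(traffic)
flags = detector.predict(traffic)  # -1 = anomaly, 1 = normal
print(f"flagged {int((flags == -1).sum())} of {len(traffic)} records as anomalous")
```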
- Does Explainable Machine Learning Uncover the Black Box in Vision Applications? [1.0660480034605242]
We argue that the current philosophy behind explainable ML suffers from certain limitations.
We also provide perspectives on how explainability in ML can benefit from relying on more rigorous principles.
arXiv Detail & Related papers (2021-12-18T10:37:52Z)
- Declarative Machine Learning Systems [7.5717114708721045]
Machine learning (ML) has moved from an academic endeavor to a pervasive technology adopted in almost every aspect of computing.
Recent successes in applying ML in natural sciences revealed that ML can be used to tackle some of the hardest real-world problems humanity faces today.
We believe the next wave of ML systems will allow a larger number of people, potentially without coding skills, to perform the same tasks; a toy declarative spec is sketched after this entry.
arXiv Detail & Related papers (2021-07-16T23:57:57Z)
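To suggest what "declarative" can mean here, a minimal sketch under stated assumptions: the spec format and column names below are hypothetical (real systems such as Ludwig have richer schemas), and a small interpreter maps the spec to a scikit-learn pipeline so the user writes no ML code:

```python
# Toy declarative ML: the user states *what* to learn; the system decides *how*.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

spec = {
    "task": "classification",
    "inputs": ["age", "income", "tenure"],  # hypothetical column names
    "target": "churn",
}

def build_pipeline(spec):
    """Map a declarative spec to a concrete model; a real system would also
    wire the input columns and target into data loading and validation."""
    if spec["task"] == "classification":
        return make_pipeline(StandardScaler(), LogisticRegression())
    raise ValueError(f"unsupported task: {spec['task']}")

pipeline = build_pipeline(spec)
print(pipeline)  # the user never touched model or preprocessing choices
```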
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction; a plug-in estimate is sketched below.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
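A plug-in estimate makes the criterion concrete; the notation I(E; Y | U) and this estimator are an assumed gloss on the abstract, not the paper's own code:

```python
# Plug-in estimate of the conditional mutual information I(E; Y | U)
# from discrete samples: how much an explanation E tells a user with
# background knowledge U about the prediction Y.
from collections import Counter
from math import log2

def conditional_mutual_information(e, y, u):
    n = len(e)
    p_eyu = Counter(zip(e, y, u))
    p_eu = Counter(zip(e, u))
    p_yu = Counter(zip(y, u))
    p_u = Counter(u)
    cmi = 0.0
    for (ei, yi, ui), c in p_eyu.items():
        joint = c / n
        # I(E;Y|U) = sum p(e,y,u) * log2[ p(e,y,u) p(u) / (p(e,u) p(y,u)) ]
        cmi += joint * log2(joint * (p_u[ui] / n)
                            / ((p_eu[(ei, ui)] / n) * (p_yu[(yi, ui)] / n)))
    return cmi

# Toy check: when E duplicates Y and Y is balanced within each U group,
# the explanation carries all of Y's remaining information: H(Y|U) = 1 bit.
e = [0, 1, 0, 1, 0, 1, 0, 1]
y = [0, 1, 0, 1, 0, 1, 0, 1]
u = [0, 0, 0, 0, 1, 1, 1, 1]
print(conditional_mutual_information(e, y, u))  # 1.0
```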
This list is automatically generated from the titles and abstracts of the papers on this site.