Do ML Experts Discuss Explainability for AI Systems? A discussion case
in the industry for a domain-specific solution
- URL: http://arxiv.org/abs/2002.12450v1
- Date: Thu, 27 Feb 2020 21:23:27 GMT
- Title: Do ML Experts Discuss Explainability for AI Systems? A discussion case
in the industry for a domain-specific solution
- Authors: Juliana Jansen Ferreira and Mateus de Souza Monteiro
- Abstract summary: Domain specialists have an understanding of the data and how it can impact their decisions.
Without a deep understanding of the data, ML experts are not able to tune their models to get optimal results for a specific domain.
There are a lot of efforts to research AI explainability for different contexts, users and goals.
- Score: 3.190891983147147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of Artificial Intelligence (AI) tools in different domains
is becoming mandatory for all companies wishing to excel in their industries.
One major challenge for a successful application of AI is to combine machine
learning (ML) expertise with domain knowledge to get the best results from AI
tools. Domain specialists have an understanding of the data and how it can
impact their decisions. ML experts have the ability to use AI-based tools to
deal with large amounts of data and generate insights for domain experts. But
without a deep understanding of the data, ML experts are not able to tune their
models to get optimal results for a specific domain. Therefore, domain experts
are key users of ML tools, and the explainability of those AI tools becomes an
essential feature in that context. There are many research efforts on AI
explainability for different contexts, users, and goals. In this position
paper, we discuss interesting findings about how ML experts express concerns
about AI explainability while defining the features of an ML tool to be
developed for a specific domain. We analyze data from two brainstorming
sessions held to discuss the functionalities of an ML tool to support
geoscientists (domain experts) in analyzing seismic data (domain-specific data)
with ML resources.
Related papers
- Multi-Agent Actor-Critic Generative AI for Query Resolution and Analysis [1.0124625066746598]
We introduce MASQRAD, a transformative framework for query resolution based on the actor-critic model.
MASQRAD is excellent at translating imprecise or ambiguous user inquiries into precise and actionable requests.
MASQRAD functions as a sophisticated multi-agent system but "masquerades" to users as a single AI entity.
arXiv Detail & Related papers (2025-02-17T04:03:15Z)
- AI Readiness in Healthcare through Storytelling XAI [0.5120567378386615]
We develop an approach that combines multi-task distillation with interpretability techniques to enable audience-centric explainability.
Our methods increase the trust of both the domain experts and the machine learning experts to enable a responsible AI.
arXiv Detail & Related papers (2024-10-24T13:30:18Z) - Knowledge Plugins: Enhancing Large Language Models for Domain-Specific
Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
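The three steps above (prepare, select, express) can be sketched as a small pipeline. This is a hedged toy illustration only: the function names, the naive keyword-overlap selector, and the seismic facts are all invented here, not the actual DOKE implementation.

```python
def prepare_knowledge(domain_corpus):
    """Step 1: prepare effective knowledge for the task (hypothetical)."""
    return [fact.strip() for fact in domain_corpus if fact.strip()]

def select_knowledge(sample, knowledge):
    """Step 2: select knowledge for one specific sample.
    A naive keyword overlap stands in for a real retriever."""
    words = {w.strip("?.,!").lower() for w in sample.split()}
    return [f for f in knowledge
            if words & {w.strip("?.,!").lower() for w in f.split()}]

def express_for_llm(sample, selected):
    """Step 3: express the selected knowledge in an LLM-understandable way."""
    facts = "\n".join(f"- {f}" for f in selected)
    return f"Known facts:\n{facts}\n\nQuestion: {sample}"

# Toy usage with invented domain facts.
corpus = ["Seismic horizons mark rock-layer boundaries.",
          "Faults appear as discontinuities in seismic reflectors."]
sample = "Which seismic features indicate faults?"
prompt = express_for_llm(sample, select_knowledge(sample, prepare_knowledge(corpus)))
```

The resulting `prompt` string is what would be handed to the LLM: selected domain facts as plain text, followed by the original query.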
arXiv Detail & Related papers (2023-11-16T07:09:38Z)
- AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges [60.56413461109281]
Artificial Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as - incident detection, failure prediction, root cause analysis and automated actions.
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
- OpenAGI: When LLM Meets Domain Experts [51.86179657467822]
Human Intelligence (HI) excels at combining basic skills to solve complex tasks.
This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents.
We introduce OpenAGI, an open-source platform designed for solving multi-step, real-world tasks.
arXiv Detail & Related papers (2023-04-10T03:55:35Z)
- Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life [0.5115559623386964]
It is critical to have confidence in AI's trustworthiness in energy and engineering systems.
The use of explainable AI (XAI) and interpretable machine learning (IML) is crucial for the accurate prediction of prognostics.
arXiv Detail & Related papers (2023-01-17T03:17:07Z)
- Interpretability and accessibility of machine learning in selected food processing, agriculture and health applications [0.0]
Lack of interpretability of ML-based systems is a major hindrance to the widespread adoption of these powerful algorithms.
New techniques are emerging to improve ML accessibility through automated model design.
This paper provides a review of the work done to improve interpretability and accessibility of machine learning in the context of global problems.
arXiv Detail & Related papers (2022-11-30T02:44:13Z)
- Measuring Ethics in AI with AI: A Methodology and Dataset Construction [1.6861004263551447]
We propose to use such newfound capabilities of AI technologies to augment our AI measuring capabilities.
We do so by training a model to classify publications related to ethical issues and concerns.
We highlight the implications of AI metrics, in particular their contribution towards developing trustful and fair AI-based tools and technologies.
arXiv Detail & Related papers (2021-07-26T00:26:12Z)
- A Classification of Artificial Intelligence Systems for Mathematics Education [3.718476964451589]
This chapter provides an overview of the different Artificial Intelligence (AI) systems that are being used in digital tools for Mathematics Education (ME).
It is aimed at researchers in AI and Machine Learning (ML), for whom we shed some light on the specific technologies that are being used in educational applications.
arXiv Detail & Related papers (2021-07-13T12:09:10Z)
- Understanding the Usability Challenges of Machine Learning in High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.