OAK4XAI: Model towards Out-Of-Box eXplainable Artificial Intelligence
for Digital Agriculture
- URL: http://arxiv.org/abs/2209.15104v1
- Date: Thu, 29 Sep 2022 21:20:25 GMT
- Authors: Quoc Hung Ngo, Tahar Kechadi, Nhien-An Le-Khac
- Abstract summary: We build an Agriculture Computing Ontology (AgriComO) to explain the knowledge mined in agriculture.
XAI tries to provide human-understandable explanations for decision-making and trained AI models.
- Score: 4.286327408435937
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent machine learning approaches have been effective in Artificial
Intelligence (AI) applications. They produce robust results with a high level
of accuracy. However, most of these techniques do not provide
human-understandable explanations for supporting their results and decisions.
They usually act as black boxes, and it is not easy to understand how decisions
have been made. Explainable Artificial Intelligence (XAI), which has received
much interest recently, tries to provide human-understandable explanations for
decision-making and trained AI models. In digital agriculture, for instance,
related domains often present peculiar input features with no link to
background knowledge, and applying the data mining process to agricultural
data leads to results (knowledge) that are difficult to explain.
In this paper, we propose a knowledge map model and an ontology design as an
XAI framework (OAK4XAI) to deal with this issue. The framework not only
considers the data analysis part of the process, but also takes into account
the semantic aspects of the domain knowledge via an ontology and a knowledge map
model, provided as modules of the framework. Many ongoing XAI studies aim to
provide accurate and verbalizable accounts of how given feature values
contribute to model decisions. The proposed approach, however, focuses on
providing consistent information and definitions of concepts, algorithms, and
values involved in the data mining models. We built an Agriculture Computing
Ontology (AgriComO) to explain the knowledge mined in agriculture. AgriComO has
a well-designed structure and includes a wide range of concepts and
transformations suitable for agriculture and computing domains.
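The abstract's core idea, grounding the concepts, algorithms, and values of a mined model in consistent ontology definitions, can be illustrated with a minimal sketch. This is not the authors' AgriComO implementation; the concept names, definitions, and the `explain` helper below are illustrative placeholders for how a knowledge-map module might verbalize a mined result.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Concept:
    """One entry in a toy AgriComO-style ontology."""
    name: str
    definition: str
    broader: Optional[str] = None  # parent concept in the hierarchy

# Hypothetical ontology fragment: agricultural and computing concepts side by side.
ONTOLOGY = {
    "soil_ph": Concept("soil_ph", "acidity/alkalinity of the soil on a 0-14 scale", "soil_property"),
    "yield": Concept("yield", "harvested crop mass per unit area", "crop_outcome"),
    "k_means": Concept("k_means", "clustering algorithm partitioning data into k groups", "data_mining_method"),
}

def explain(feature: str, value, algorithm: str) -> str:
    """Verbalize one mined result by attaching ontology definitions to its terms."""
    f = ONTOLOGY[feature]
    a = ONTOLOGY[algorithm]
    return (f"{a.name} ({a.definition}) associated {f.name} = {value} with this result; "
            f"here {f.name} means: {f.definition}.")

print(explain("soil_ph", 6.5, "k_means"))
```

The point of the sketch is that the explanation text is assembled from shared, curated definitions rather than ad-hoc strings, so every model output uses the same vocabulary as the domain ontology.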
Related papers
- Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction [5.417632175667161]
Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations of how models make decisions and predictions.
Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques.
This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas.
arXiv Detail & Related papers (2024-08-30T21:42:17Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Explainable AI for Bioinformatics: Methods, Tools, and Applications [1.6855835471222005]
Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models.
In this paper, we discuss the importance of explainability with a focus on bioinformatics.
arXiv Detail & Related papers (2022-12-25T21:00:36Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Explainable Artificial Intelligence (XAI) for Internet of Things: A
Survey [1.7205106391379026]
The black-box nature of Artificial Intelligence (AI) models does not allow users to comprehend, and sometimes trust, the output created by such models.
In AI applications, where not only the results but also the decision paths to the results are critical, such black-box AI models are not sufficient.
Explainable Artificial Intelligence (XAI) addresses this problem and defines a set of AI models that are interpretable by the users.
arXiv Detail & Related papers (2022-06-07T08:22:30Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Principles and Practice of Explainable Machine Learning [12.47276164048813]
This report focuses on data-driven methods -- machine learning (ML) and pattern recognition models in particular.
With the increasing prevalence and complexity of methods, business stakeholders at the very least have a growing number of concerns about the drawbacks of models.
We have undertaken a survey to help industry practitioners understand the field of explainable machine learning better.
arXiv Detail & Related papers (2020-09-18T14:50:27Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on model judgments and added cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
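Several of the related papers above concern counterfactual explanations. As a generic illustration of the idea (not the CEILS method, and the toy model and feature names are invented for this sketch), a counterfactual explainer searches for the smallest input change that flips a model's decision:

```python
def model(ph: float, nitrogen: float) -> str:
    """Toy stand-in for a trained classifier over two agricultural features."""
    return "high_yield" if ph >= 6.0 and nitrogen >= 40.0 else "low_yield"

def counterfactual(ph: float, nitrogen: float, target: str = "high_yield", steps: int = 50):
    """Brute-force grid search for the nearest (L1 distance) input with the target outcome."""
    best, best_cost = None, float("inf")
    for dp in (i * 0.1 for i in range(-steps, steps + 1)):      # perturb ph in 0.1 steps
        for dn in (i * 1.0 for i in range(-steps, steps + 1)):  # perturb nitrogen in 1.0 steps
            if model(ph + dp, nitrogen + dn) == target:
                cost = abs(dp) + abs(dn)
                if cost < best_cost:
                    best, best_cost = (ph + dp, nitrogen + dn), cost
    return best

print(counterfactual(5.5, 35.0))  # -> (6.0, 40.0): raise ph to 6.0 and nitrogen to 40.0
```

Real methods (including latent-space approaches such as CEILS) replace this exhaustive search with optimization and add feasibility constraints, but the output has the same shape: a concrete, minimal change the end user could act on.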