Explainable Artificial Intelligence techniques for interpretation of food datasets: a review
- URL: http://arxiv.org/abs/2504.10527v1
- Date: Sat, 12 Apr 2025 11:10:43 GMT
- Title: Explainable Artificial Intelligence techniques for interpretation of food datasets: a review
- Authors: Leonardo Arrighi, Ingrid Alves de Moraes, Marco Zullich, Michele Simonato, Douglas Fernandes Barbin, Sylvio Barbon Junior
- Abstract summary: XAI remains underutilized in Food Engineering, limiting model reliability. XAI techniques such as SHAP and Grad-CAM can pinpoint which spectral wavelengths or image regions contribute most to a prediction. This survey presents a taxonomy for classifying food quality research using XAI techniques.
- Score: 1.593353228010434
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Artificial Intelligence (AI) has become essential for analyzing complex data and solving highly challenging tasks. It is being applied across numerous disciplines beyond computer science, including Food Engineering, where there is a growing demand for accurate and trustworthy predictions to meet stringent food quality standards. However, this requires increasingly complex AI models, raising reliability concerns. In response, eXplainable AI (XAI) has emerged to provide insights into AI decision-making, aiding model interpretation by developers and users. Nevertheless, XAI remains underutilized in Food Engineering, limiting model reliability. For instance, in food quality control, AI models using spectral imaging can detect contaminants or assess freshness levels, but their opaque decision-making process hinders adoption. XAI techniques such as SHAP (Shapley Additive Explanations) and Grad-CAM (Gradient-weighted Class Activation Mapping) can pinpoint which spectral wavelengths or image regions contribute most to a prediction, enhancing transparency and aiding quality control inspectors in verifying AI-generated assessments. This survey presents a taxonomy for classifying food quality research using XAI techniques, organized by data types and explanation methods, to guide researchers in choosing suitable approaches. We also highlight trends, challenges, and opportunities to encourage the adoption of XAI in Food Engineering.
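As a concrete illustration of the wavelength-level attribution the abstract describes, the sketch below applies SHAP to a gradient-boosted classifier trained on tabular spectral features. The dataset, wavelength grid, labels, and model choice are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: SHAP attributions over spectral wavelengths for a food-quality
# classifier. Spectra, wavelength grid, and labels are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = np.arange(900, 1700, 10)            # hypothetical NIR band (nm)
X = rng.normal(size=(500, wavelengths.size))      # synthetic reflectance spectra
y = (X[:, 30] + 0.5 * X[:, 55] > 0).astype(int)   # synthetic "fresh vs. spoiled" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer yields per-feature Shapley values; for a binary sklearn GBM this
# is a single (n_samples, n_features) array. Averaging the magnitudes over the
# test set ranks the wavelengths that drive the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
top = np.argsort(mean_abs)[::-1][:5]
for i in top:
    print(f"{wavelengths[i]} nm  mean |SHAP| = {mean_abs[i]:.4f}")
```

For image-based inputs, Grad-CAM plays the analogous role, highlighting which spatial regions of an image most influence the class score.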
Related papers
- Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools.
We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics.
Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z) - VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas [0.07499722271664146]
The demand for eXplainable AI (XAI) has increased to enhance the interpretability, transparency, and trustworthiness of AI models.
We propose a framework that integrates quantitative benchmarking with qualitative user assessments through virtual personas.
This yields an estimated XAI score and provides tailored recommendations for both the optimal AI model and the XAI method for a given scenario.
arXiv Detail & Related papers (2025-03-06T09:44:18Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Generative Artificial Intelligence Meets Synthetic Aperture Radar: A Survey [49.29751866761522]
This paper aims to investigate the intersection of GenAI and SAR.
First, we illustrate the common data generation-based applications in SAR field.
Then, an overview of the latest GenAI models is systematically reviewed.
Finally, the corresponding applications in SAR domain are also included.
arXiv Detail & Related papers (2024-11-05T03:06:00Z) - Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction [5.417632175667161]
Explainable Artificial Intelligence (XAI) addresses challenges by providing explanations for how these models make decisions and predictions.
Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques.
This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas.
arXiv Detail & Related papers (2024-08-30T21:42:17Z) - XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and the Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z) - Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z) - Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models.
We introduce both human and quantitative evaluations to measure algorithm performance.
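To make the idea of gradient-based feature attribution concrete, a minimal sketch follows: the gradient of the class score with respect to the input serves as a saliency map. The tiny network, input size, and target class are placeholders for illustration only, not details from the cited review.

```python
# Minimal sketch: vanilla gradient saliency for a small classifier.
# The network, input size, and target class are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

x = torch.randn(1, 64, requires_grad=True)   # one synthetic input vector
target_class = 1

score = model(x)[0, target_class]            # class score before softmax
score.backward()                             # d(score)/d(input)

saliency = x.grad.abs().squeeze()            # magnitude = feature importance
top_features = saliency.topk(5).indices.tolist()
print("most influential input features:", top_features)
```

Grad-CAM, mentioned in the survey's abstract, refines this idea for convolutional networks by weighting feature maps with the gradients flowing into the last convolutional layer.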
arXiv Detail & Related papers (2024-03-15T15:49:31Z) - Progressing from Anomaly Detection to Automated Log Labeling and Pioneering Root Cause Analysis [53.24804865821692]
This study introduces a taxonomy for log anomalies and explores automated data labeling to mitigate labeling challenges.
The study envisions a future where root cause analysis follows anomaly detection, unraveling the underlying triggers of anomalies.
arXiv Detail & Related papers (2023-12-22T15:04:20Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
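For readers unfamiliar with counterfactual explanations, the sketch below shows the basic idea in its simplest input-space form: perturb an instance until the classifier's decision flips, while penalizing the size of the change. This is a generic illustration under synthetic data, not the CEILS method, which instead performs the intervention in a latent space.

```python
# Generic sketch of a counterfactual explanation: find a small change to an
# input that flips a classifier's decision. Model and data are placeholders;
# this is NOT the CEILS algorithm, which intervenes in a latent space instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()                      # instance to explain
target = 1 - clf.predict([x])[0]     # desired (flipped) outcome

w, b = clf.coef_[0], clf.intercept_[0]
x_cf = x.copy()
for _ in range(500):                 # gradient steps toward the target class
    p = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))
    grad = (p - target) * w          # cross-entropy gradient w.r.t. the input
    grad += 0.1 * (x_cf - x)         # proximity penalty: stay close to x
    x_cf -= 0.05 * grad
    if clf.predict([x_cf])[0] == target:
        break

print("original prediction  :", clf.predict([x])[0])
print("counterfactual pred. :", clf.predict([x_cf])[0])
print("feature changes      :", np.round(x_cf - x, 3))
```

The resulting per-feature differences are the explanation: the smallest changes that would have led the model to the desired outcome.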