Interpretability and accessibility of machine learning in selected food
processing, agriculture and health applications
- URL: http://arxiv.org/abs/2211.16699v1
- Date: Wed, 30 Nov 2022 02:44:13 GMT
- Title: Interpretability and accessibility of machine learning in selected food
processing, agriculture and health applications
- Authors: N. Ranasinghe, A. Ramanan, S. Fernando, P. N. Hameed, D. Herath, T.
Malepathirana, P. Suganthan, M. Niranjan and S. Halgamuge
- Abstract summary: The lack of interpretability of ML-based systems is a major hindrance to the widespread adoption of these powerful algorithms.
New techniques are emerging to improve ML accessibility through automated model design.
This paper provides a review of the work done to improve interpretability and accessibility of machine learning in the context of global problems.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) and its data-centric branch of machine learning
(ML) have evolved greatly over the last few decades. However, as AI is used
increasingly in real-world use cases, the interpretability of and accessibility
to AI systems have become major research areas. The lack of interpretability of
ML-based systems is a major hindrance to the widespread adoption of these
powerful algorithms. This is due to many reasons, including ethical and
regulatory concerns, which have resulted in lower adoption of ML in some areas.
The recent past has seen a surge in research on interpretable ML. Generally,
designing an ML system requires good domain understanding combined with expert
knowledge. New techniques are emerging to improve ML accessibility through
automated model design. This paper provides a review of the work done to
improve the interpretability and accessibility of machine learning in the
context of global problems while also being relevant to developing countries.
We review work under multiple levels of interpretability, including scientific
and mathematical interpretation, statistical interpretation and partial
semantic interpretation. This review includes applications in three areas,
namely food processing, agriculture and health.
Related papers
- Understanding the Complexity and Its Impact on Testing in ML-Enabled
Systems [8.630445165405606]
We study Rasa 3.0, an industrial dialogue system that has been widely adopted by various companies around the world.
Our goal is to characterize the complexity of such a large-scale ML-enabled system and to understand the impact of the complexity on testing.
Our study reveals practical implications for software engineering for ML-enabled systems.
arXiv Detail & Related papers (2023-01-10T08:13:24Z) - Vision Paper: Causal Inference for Interpretable and Robust Machine
Learning in Mobility Analysis [71.2468615993246]
Building intelligent transportation systems requires an intricate combination of artificial intelligence and mobility analysis.
The past few years have seen rapid development in transportation applications using advanced deep neural networks.
This vision paper emphasizes research challenges in deep learning-based mobility analysis that require interpretability and robustness.
arXiv Detail & Related papers (2022-10-18T17:28:58Z) - One-way Explainability Isn't The Message [2.618757282404254]
We argue that requirements on both human and machine in this context are significantly different.
The design of such human-machine systems should be driven by repeated, two-way intelligibility of information.
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
arXiv Detail & Related papers (2022-05-05T09:15:53Z) - Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z) - Pitfalls of Explainable ML: An Industry Perspective [29.49574255183219]
Explanations sit at the core of desirable attributes of a machine learning (ML) system.
The goal of explainable ML is to intuitively explain the predictions of an ML system, while adhering to the needs of various stakeholders.
arXiv Detail & Related papers (2021-06-14T21:05:05Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - LioNets: A Neural-Specific Local Interpretation Technique Exploiting
Penultimate Layer Information [6.570220157893279]
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z) - Understanding the Usability Challenges of Machine Learning In
High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z) - A Large-Scale, Automated Study of Language Surrounding Artificial
Intelligence [0.0]
This work presents a large-scale analysis of artificial intelligence (AI) and machine learning (ML) references within news articles and scientific publications between 2011 and 2019.
We implement word association measurements that automatically identify shifts in language co-occurring with AI/ML and quantify the strength of these word associations.
arXiv Detail & Related papers (2021-02-24T19:14:53Z) - Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z) - Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.