AHMoSe: A Knowledge-Based Visual Support System for Selecting Regression
Machine Learning Models
- URL: http://arxiv.org/abs/2101.11970v2
- Date: Tue, 30 Nov 2021 19:40:15 GMT
- Title: AHMoSe: A Knowledge-Based Visual Support System for Selecting Regression
Machine Learning Models
- Authors: Diego Rojo, Nyi Nyi Htun, Denis Parra, Robin De Croon and Katrien
Verbert
- Abstract summary: AHMoSe is a visual support system that allows domain experts to better understand, diagnose and compare different regression models.
We describe a use case scenario in the viticulture domain, grape quality prediction, where the system enables users to diagnose and select prediction models that perform better.
- Score: 2.9998889086656577
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Decision support systems have become increasingly popular in the domain of
agriculture. With the development of automated machine learning, agricultural
experts are now able to train, evaluate and make predictions using cutting edge
machine learning (ML) models without the need for much ML knowledge. Although
this automated approach has led to successful results in many scenarios, in
certain cases (e.g., when few labeled datasets are available) choosing among
different models with similar performance metrics is a difficult task.
Furthermore, these systems do not commonly allow users to incorporate their
domain knowledge that could facilitate the task of model selection, and to gain
insight into the prediction system for eventual decision making. To address
these issues, in this paper we present AHMoSe, a visual support system that
allows domain experts to better understand, diagnose and compare different
regression models, primarily by enriching model-agnostic explanations with
domain knowledge. To validate AHMoSe, we describe a use case scenario in the
viticulture domain, grape quality prediction, where the system enables users to
diagnose and select prediction models that perform better. We also discuss
feedback concerning the design of the tool from both ML and viticulture
experts.
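To make the core idea concrete: when several regressors score similarly on a small labelled set, domain knowledge can act as a tie-breaker. The following is a minimal, hypothetical sketch of that idea, not AHMoSe itself (which enriches model-agnostic explanations with knowledge rules); the synthetic grape-quality data and the expert_range rule below are invented purely for illustration.

```python
# Hedged sketch: not the AHMoSe implementation, only the underlying idea of
# breaking ties between similarly performing regressors with expert knowledge.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical small "grape quality" dataset: two features (e.g. sun exposure
# and water stress, both scaled to [0, 1]) and a quality score around 2-9.
X = rng.uniform(0, 1, size=(60, 2))
y = 4 + 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, size=60)

def expert_range(x):
    """Hypothetical domain-knowledge rule: an expert's plausible quality
    interval for a vineyard block with features x."""
    centre = 4 + 4 * x[0] - 2 * x[1]      # expert's rule of thumb
    return centre - 1.0, centre + 1.0      # +/- 1 quality point

models = {
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

for name, model in models.items():
    # Out-of-sample predictions on the small labelled set.
    pred = cross_val_predict(model, X, y, cv=5)
    mae = np.mean(np.abs(pred - y))
    # Knowledge agreement: share of predictions inside the expert interval.
    lo, hi = np.array([expert_range(x) for x in X]).T
    agreement = np.mean((pred >= lo) & (pred <= hi))
    print(f"{name}: MAE={mae:.2f}, knowledge agreement={agreement:.2%}")
```

In this toy setting, two models with near-identical error can still differ in how often their predictions fall inside the expert's plausible interval, which is the kind of knowledge-based signal AHMoSe surfaces visually for model selection.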
Related papers
- Deciphering AutoML Ensembles: cattleia's Assistance in Decision-Making [0.0]
Cattleia is an application that deciphers ensembles for regression, multiclass, and binary classification tasks.
It works with models built by three AutoML packages: auto-sklearn, AutoGluon, and FLAML.
arXiv Detail & Related papers (2024-03-19T11:56:21Z)
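As a rough illustration of what "deciphering an ensemble" can mean, the sketch below builds a plain scikit-learn stacking ensemble and reads back its base models and the meta-learner's weights. This is an assumption-laden stand-in: cattleia itself targets ensembles produced by auto-sklearn, AutoGluon, and FLAML, whose internals differ.

```python
# Hedged sketch: a plain scikit-learn stacking ensemble stands in to show the
# kind of introspection an ensemble-deciphering tool performs.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

ensemble = StackingRegressor(
    estimators=[
        ("ridge", Ridge(alpha=1.0)),
        ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
    ],
    final_estimator=LinearRegression(),
)
ensemble.fit(X, y)

# "Deciphering" the ensemble: which base models it contains and how much
# weight the meta-learner assigns to each of their predictions.
for (name, _), weight in zip(ensemble.estimators, ensemble.final_estimator_.coef_):
    print(f"base model {name!r}: meta-learner weight = {weight:.3f}")
```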
- Towards MLOps: A DevOps Tools Recommender System for Machine Learning System [1.065497990128313]
Unlike traditional systems, which evolve based on requirements, MLOps and machine learning systems evolve with new data.
In this paper, we present a framework for a recommender system that processes contextual information.
Four different approaches, i.e., rule-based, random forest, decision tree, and k-nearest neighbors, were investigated.
arXiv Detail & Related papers (2024-02-20T09:57:49Z)
- Democratize with Care: The need for fairness specific features in user-interface based open source AutoML tools [0.0]
Automated Machine Learning (AutoML) streamlines the machine learning model development process.
This democratization allows more users (including non-experts) to access and utilize state-of-the-art machine-learning expertise.
However, AutoML tools may also propagate bias through the way they handle the data, the model choices they make, and the optimization approaches they adopt.
arXiv Detail & Related papers (2023-12-16T19:54:00Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
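The "systemic failure" observation in the entry above (some users misclassified by every available model) can be checked mechanically. The sketch below does so on synthetic data with three stand-in models; it only illustrates the metric, not the paper's methodology.

```python
# Hedged sketch: a user is "systemically failed" if every deployed model
# misclassifies them. Data and models are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Several independently built models, standing in for an "ecosystem".
models = [
    LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
    KNeighborsClassifier().fit(X_tr, y_tr),
]

# errors[i, j] is True when model i misclassifies test user j.
errors = np.array([m.predict(X_te) != y_te for m in models])
systemic = errors.all(axis=0)           # failed by every model
print(f"users misclassified by all {len(models)} models: "
      f"{systemic.sum()} of {len(y_te)} ({systemic.mean():.1%})")
```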
- ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models [6.279863832853343]
ComplAI is a unique framework to enable, observe, analyze and quantify explainability, robustness, performance, fairness, and model behavior.
It evaluates different supervised machine learning models not just on their ability to make correct predictions but from an overall responsibility perspective.
arXiv Detail & Related papers (2022-12-30T08:48:19Z)
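As a toy illustration of multi-factor assessment, the sketch below scores a single classifier on accuracy, robustness to input noise, and a simple demographic-parity gap. The data, the protected attribute, and the perturbation scale are all hypothetical; ComplAI's actual metrics and API are not reproduced here.

```python
# Hedged sketch: a toy scorecard mimicking the idea of judging a model on
# several axes (performance, robustness, a simple fairness gap).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
group = (X[:, 0] > 0).astype(int)          # hypothetical protected attribute
X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, group, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Performance.
accuracy = accuracy_score(y_te, pred)
# Robustness: accuracy after a small Gaussian perturbation of the inputs.
rng = np.random.default_rng(0)
noisy_acc = accuracy_score(y_te, model.predict(X_te + rng.normal(0, 0.1, X_te.shape)))
# Fairness: gap in positive prediction rate between the two groups.
fairness_gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())

print(f"accuracy={accuracy:.3f}, noisy accuracy={noisy_acc:.3f}, "
      f"demographic parity gap={fairness_gap:.3f}")
```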
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
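One concrete XAI-for-improvement recipe in this line of work is to use explanations to prune uninformative features. The sketch below does this with permutation importance on synthetic data; the importance threshold is arbitrary and the example is not taken from the paper.

```python
# Hedged sketch: use an explanation method (permutation importance) to drop
# uninformative features and check whether generalization improves.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

# 5 informative features plus 15 pure-noise ones.
X, y = make_regression(n_samples=400, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
baseline = cross_val_score(model, X, y, cv=5).mean()

# Explanation step: permutation importance on held-out data.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
keep = imp.importances_mean > 0.01        # illustrative threshold

improved = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                           X[:, keep], y, cv=5).mean()
print(f"kept {keep.sum()} of {X.shape[1]} features, "
      f"R^2 {baseline:.3f} -> {improved:.3f}")
```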
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
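A common, simple way to expose the kind of predictive uncertainty discussed above is to use the spread of an ensemble's per-member predictions. The sketch below applies this to a random forest on synthetic data and defers low-confidence predictions; the quantile cut-off is an arbitrary, illustrative choice and not the paper's criterion.

```python
# Hedged sketch: ensemble prediction spread as a rough uncertainty proxy,
# used to decide which predictions to act on automatically.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=6, noise=15.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Per-tree predictions: the standard deviation across trees is a rough
# uncertainty estimate for each test point.
per_tree = np.stack([tree.predict(X_te) for tree in forest.estimators_])
mean_pred = per_tree.mean(axis=0)
uncertainty = per_tree.std(axis=0)

# A downstream system might only act automatically on confident predictions.
threshold = np.quantile(uncertainty, 0.8)     # illustrative cut-off
confident = uncertainty < threshold
print(f"auto-handled: {confident.sum()} / {len(X_te)}, "
      f"deferred to a human: {(~confident).sum()}")
```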
- A Model-Driven Engineering Approach to Machine Learning and Software Modeling [0.5156484100374059]
Models are used in both the Software Engineering (SE) and the Artificial Intelligence (AI) communities.
The main focus is on the Internet of Things (IoT) and smart Cyber-Physical Systems (CPS) use cases, where both ML and model-driven SE play a key role.
arXiv Detail & Related papers (2021-07-06T15:50:50Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.