EXMOS: Explanatory Model Steering Through Multifaceted Explanations and
Data Configurations
- URL: http://arxiv.org/abs/2402.00491v1
- Date: Thu, 1 Feb 2024 10:57:00 GMT
- Title: EXMOS: Explanatory Model Steering Through Multifaceted Explanations and
Data Configurations
- Authors: Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic,
Katrien Verbert
- Abstract summary: This research investigates the influence of data-centric and model-centric explanations in interactive machine-learning systems.
We conducted studies with healthcare experts to explore the impact of different explanations on trust, understandability and model improvement.
Our results reveal the insufficiency of global model-centric explanations for guiding users during data configuration.
- Score: 5.132827811038276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explanations in interactive machine-learning systems facilitate debugging and
improving prediction models. However, the effectiveness of various global
model-centric and data-centric explanations in aiding domain experts to detect
and resolve potential data issues for model improvement remains unexplored.
This research investigates the influence of data-centric and model-centric
global explanations in systems that support healthcare experts in optimising
models through automated and manual data configurations. We conducted
quantitative (n=70) and qualitative (n=30) studies with healthcare experts to
explore the impact of different explanations on trust, understandability and
model improvement. Our results reveal the insufficiency of global model-centric
explanations for guiding users during data configuration. Although data-centric
explanations enhanced understanding of post-configuration system changes, a
hybrid fusion of both explanation types demonstrated the highest effectiveness.
Based on our study results, we also present design implications for effective
explanation-driven interactive machine-learning systems.
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- iNNspector: Visual, Interactive Deep Model Debugging [8.997568393450768]
We propose a conceptual framework structuring the data space of deep learning experiments.
Our framework captures design dimensions and proposes mechanisms to make this data explorable and tractable.
We present the iNNspector system, which enables tracking of deep learning experiments and provides interactive visualizations of the data.
arXiv Detail & Related papers (2024-07-25T12:48:41Z)
- On the Robustness of Global Feature Effect Explanations [17.299418894910627]
Effects of predictor features in black-box supervised learning are an essential diagnostic tool for model debugging and scientific discovery in applied sciences.
We introduce several theoretical bounds for evaluating the robustness of partial dependence plots and accumulated local effects.
arXiv Detail & Related papers (2024-06-13T12:54:53Z)
- Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition [0.8796261172196743]
We introduce a comprehensive data-driven framework aimed at enhancing the modeling of physical systems.
As a demonstrative application, we pursue the modeling of cathodic electrophoretic deposition (EPD), commonly known as e-coating.
arXiv Detail & Related papers (2024-01-16T14:58:21Z)
- Better, Not Just More: Data-Centric Machine Learning for Earth Observation [16.729827218159038]
We argue that a shift from a model-centric view to a complementary data-centric perspective is necessary for further improvements in accuracy, generalization ability, and real impact on end-user applications.
This work presents a definition as well as a precise categorization and overview of automated data-centric learning approaches for geospatial data.
arXiv Detail & Related papers (2023-12-08T19:24:05Z)
- Data-Centric Long-Tailed Image Recognition [49.90107582624604]
Long-tail models exhibit a strong demand for high-quality data.
Data-centric approaches aim to enhance both the quantity and quality of data to improve model performance.
There is currently a lack of research into the underlying mechanisms explaining the effectiveness of information augmentation.
arXiv Detail & Related papers (2023-11-03T06:34:37Z)
- Lessons Learned from EXMOS User Studies: A Technical Report Summarizing Key Takeaways from User Studies Conducted to Evaluate The EXMOS Platform [5.132827811038276]
Two user studies aimed to illuminate the influence of different explanation types on three key dimensions: trust, understandability, and model improvement.
Results show that global model-centric explanations alone are insufficient for effectively guiding users during the intricate process of data configuration.
We present essential implications for developing interactive machine-learning systems driven by explanations.
arXiv Detail & Related papers (2023-10-03T14:04:45Z)
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
- Striving for data-model efficiency: Identifying data externalities on group performance [75.17591306911015]
Building trustworthy, effective, and responsible machine learning systems hinges on understanding how differences in training data and modeling decisions interact to impact predictive performance.
We focus on a particular type of data-model inefficiency, in which adding training data from some sources can actually lower performance evaluated on key sub-groups of the population.
Our results indicate that data-efficiency is a key component of both accurate and trustworthy machine learning.
arXiv Detail & Related papers (2022-11-11T16:48:27Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve user experience and uncover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.