Lessons Learned from EXMOS User Studies: A Technical Report Summarizing
Key Takeaways from User Studies Conducted to Evaluate The EXMOS Platform
- URL: http://arxiv.org/abs/2310.02063v2
- Date: Fri, 2 Feb 2024 11:52:22 GMT
- Title: Lessons Learned from EXMOS User Studies: A Technical Report Summarizing
Key Takeaways from User Studies Conducted to Evaluate The EXMOS Platform
- Authors: Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic,
Katrien Verbert
- Abstract summary: Two user studies examined the influence of different explanation types on three key dimensions: trust, understandability, and model improvement.
Results show that global model-centric explanations alone are insufficient for effectively guiding users during the intricate process of data configuration.
We present essential implications for developing interactive machine-learning systems driven by explanations.
- Score: 5.132827811038276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of interactive machine-learning systems, the provision of
explanations serves as a vital aid in the processes of debugging and enhancing
prediction models. However, the extent to which various global model-centric
and data-centric explanations can effectively assist domain experts in
detecting and resolving potential data-related issues for the purpose of model
improvement has remained largely unexplored. In this technical report, we
summarise the key findings of our two user studies. Our research involved a
comprehensive examination of the impact of global explanations rooted in both
data-centric and model-centric perspectives within systems designed to support
healthcare experts in optimising machine learning models through both automated
and manual data configurations. To empirically investigate these dynamics, we
conducted two user studies: a quantitative analysis with 70 healthcare experts
and a qualitative assessment with 30 healthcare experts. These studies were
aimed at illuminating the influence of
different explanation types on three key dimensions: trust, understandability,
and model improvement. Results show that global model-centric explanations
alone are insufficient for effectively guiding users during the intricate
process of data configuration. In contrast, data-centric explanations exhibited
their potential by enhancing the understanding of system changes that occur
post-configuration. However, a combination of both showed the highest level of
efficacy for fostering trust, improving understandability, and facilitating
model enhancement among healthcare experts. We also present essential
implications for developing interactive machine-learning systems driven by
explanations. These insights can guide the creation of more effective systems
that empower domain experts to harness the full potential of machine learning.
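To make the two explanation families concrete, the sketch below computes a global model-centric explanation (permutation feature importance) alongside a simple data-centric explanation (per-feature distribution and missing-value summaries). It is a minimal illustration of the concepts only, not the EXMOS platform: the dataset, feature names, and model are placeholder assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder tabular data standing in for a healthcare dataset.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.normal(55, 12, 500),
    "bmi": rng.normal(27, 5, 500),
    "glucose": rng.normal(100, 20, 500),
})
y = (X["glucose"] + rng.normal(0, 10, 500) > 105).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global model-centric explanation: which features drive predictions overall.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"model-centric importance of {name}: {score:.3f}")

# Global data-centric explanation: what the training data itself looks like,
# i.e. the distributions and gaps a domain expert might reconfigure.
print(X_train.describe())
print("missing values per feature:")
print(X_train.isna().sum())
```

In a system of the kind studied here, the first output would back a feature-importance view, while the second would back the data-configuration view that domain experts inspect and edit.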
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated in the field of Human-Centered Artificial Intelligence (HCAI).
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z)
- Vital Insight: Assisting Experts' Sensemaking Process of Multi-modal Personal Tracking Data Using Visualization and LLM [25.264865296828116]
Vital Insight is an evidence-based 'sensemaking' system that combines direct representation and indirect inference through visualization and Large Language Models.
We evaluate Vital Insight in user testing sessions with 14 experts in multi-modal tracking, synthesize design implications, and develop an expert sensemaking model in which experts iteratively move between direct data representations and AI-supported inferences to explore, retrieve, question, and validate insights.
arXiv Detail & Related papers (2024-10-18T21:56:35Z)
- iNNspector: Visual, Interactive Deep Model Debugging [8.997568393450768]
We propose a conceptual framework structuring the data space of deep learning experiments.
Our framework captures design dimensions and proposes mechanisms to make this data explorable and tractable.
We present the iNNspector system, which enables tracking of deep learning experiments and provides interactive visualizations of the data.
arXiv Detail & Related papers (2024-07-25T12:48:41Z)
- Self-Distilled Disentangled Learning for Counterfactual Prediction [49.84163147971955]
We propose the Self-Distilled Disentanglement framework, known as $SD^2$.
Grounded in information theory, it ensures theoretically sound independent disentangled representations without intricate mutual information estimator designs.
Our experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of our approach.
arXiv Detail & Related papers (2024-06-09T16:58:19Z)
- When Medical Imaging Met Self-Attention: A Love Story That Didn't Quite Work Out [8.113092414596679]
We extend two widely adopted convolutional architectures with different self-attention variants on two different medical datasets.
We observe no significant improvement in balanced accuracy over fully convolutional models.
We also find that important features, such as dermoscopic structures in skin lesion images, are still not learned by employing self-attention.
arXiv Detail & Related papers (2024-04-18T16:18:41Z)
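As an illustration of the recipe this entry evaluates, the sketch below inserts a SAGAN-style self-attention block between the convolutional stages of a small classifier. The block design, placement, and layer sizes are assumptions for illustration, not the paper's exact variants or datasets.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Dot-product self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out  # residual connection

# A small convolutional classifier with one attention block inserted.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    SelfAttention2d(32),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),  # e.g. benign vs malignant skin lesion (placeholder task)
)
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```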
- EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations [5.132827811038276]
This research investigates the influence of data-centric and model-centric explanations in interactive machine-learning systems.
We conducted studies with healthcare experts to explore the impact of different explanations on trust, understandability and model improvement.
Our results reveal the insufficiency of global model-centric explanations for guiding users during data configuration.
arXiv Detail & Related papers (2024-02-01T10:57:00Z)
- Data-Centric Long-Tailed Image Recognition [49.90107582624604]
Long-tail models exhibit a strong demand for high-quality data.
Data-centric approaches aim to enhance both the quantity and quality of data to improve model performance.
There is currently a lack of research into the underlying mechanisms explaining the effectiveness of information augmentation.
arXiv Detail & Related papers (2023-11-03T06:34:37Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
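The core task can be shown in a few lines: learn a one-step-ahead predictor of the output from lagged inputs and outputs (a NARX-style model with a feedforward network). The simulated system, lag order, and network size below are illustrative assumptions, not drawn from the survey.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulate input-output data from an "unknown" dynamic system (placeholder).
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 1000)              # input signal
y = np.zeros(1000)
for t in range(2, 1000):                  # a simple nonlinear recursion
    y[t] = 0.7 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1])

# NARX-style regression: predict y[t] from lagged outputs and inputs.
lags = 2
X = np.column_stack([y[lags - 1:-1], y[lags - 2:-2], u[lags - 1:-1]])
target = y[lags:]

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[:800], target[:800])

# One-step-ahead prediction on held-out data.
print("held-out R^2:", net.score(X[800:], target[800:]))
```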
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
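The cross-reconstruction ingredient can be sketched compactly: each view is encoded to a latent code, and each decoder must reconstruct its view from the other view's code, forcing shared information into both latents. This is a hedged, minimal illustration of that single idea; it omits the co-attention and adversarial components, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

dim_a, dim_b, latent = 20, 30, 8

enc_a, enc_b = nn.Linear(dim_a, latent), nn.Linear(dim_b, latent)
dec_a, dec_b = nn.Linear(latent, dim_a), nn.Linear(latent, dim_b)
mse = nn.MSELoss()

# Two views of the same batch of samples (random placeholders).
xa, xb = torch.randn(16, dim_a), torch.randn(16, dim_b)
za, zb = enc_a(xa), enc_b(xb)

# Cross reconstruction: decode each view from the *other* view's latent code,
# so the common information must be present in both latents.
loss = mse(dec_a(zb), xa) + mse(dec_b(za), xb)
loss.backward()
print(float(loss))
```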
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)