Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time
for Interactive Data Systems
- URL: http://arxiv.org/abs/2009.01282v1
- Date: Wed, 2 Sep 2020 18:27:04 GMT
- Title: Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time
for Interactive Data Systems
- Authors: Jeremy E. Block, Eric D. Ragan
- Abstract summary: We discuss the evaluation of users' mental models of system logic.
Mental models are challenging to capture and analyze.
By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight.
- Score: 7.578368459974474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many interactive data systems combine visual representations of data with
embedded algorithmic support for automation and data exploration. To
effectively support transparent and explainable data systems, it is important
for researchers and designers to know how users understand the system. We
discuss the evaluation of users' mental models of system logic. Mental models
are challenging to capture and analyze. While common evaluation methods aim to
approximate the user's final mental model after a period of system usage, user
understanding continuously evolves as users interact with a system over time.
In this paper, we review many common mental model measurement techniques,
discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation
of mental models when using interactive data analysis and visualization
systems. We present guidelines for evaluating mental models over time that
reveal the evolution of specific model updates and how they may map to the
particular use of interface features and data queries. By asking users to
describe what they know and how they know it, researchers can collect
structured, time-ordered insight into a user's conceptualization process while
also helping guide users to their own discoveries.
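The time-ordered, structured self-reports described in the abstract could be captured with a small data structure like the following. This is a hypothetical sketch for illustration only; the field and function names are assumptions, not part of the paper:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MicroEntry:
    """One time-stamped self-report: what the user believes and why."""
    what_i_know: str      # the user's stated belief about the system logic
    how_i_know_it: str    # the evidence or interaction behind that belief
    interface_event: str  # hypothetical link to the triggering UI action
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def order_entries(entries):
    """Return entries in time order, for tracing how the model evolves."""
    return sorted(entries, key=lambda e: e.timestamp)
```

Sorting entries by timestamp lets a researcher line up each belief update with the interface feature or data query in use at that moment.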
Related papers
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - Revisiting Self-supervised Learning of Speech Representation from a
Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
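The linear-probe idea summarized above can be sketched minimally: train a logistic-regression probe to predict a target from learned representations, and report H(Y) minus the probe's cross-entropy as a lower bound on the mutual information. The synthetic data and all names below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def mi_lower_bound(z, y, lr=0.5, steps=500):
    """Lower-bound I(Y; Z) in nats with a linear probe.

    Fits logistic regression p(y|z) by gradient descent; the bound
    is H(Y) - CE(probe). Illustrative sketch only.
    """
    n, d = z.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(z @ w + b)))  # probe predictions
        w -= lr * (z.T @ (p - y)) / n           # logistic-loss gradient
        b -= lr * np.mean(p - y)
    p = np.clip(1.0 / (1.0 + np.exp(-(z @ w + b))), 1e-9, 1 - 1e-9)
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    py = np.mean(y)
    h_y = -(py * np.log(py) + (1 - py) * np.log(1 - py))
    return h_y - ce

# Synthetic demo: representations that encode the label vs. pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=2000).astype(float)
z_informative = y[:, None] + 0.3 * rng.standard_normal((2000, 4))
z_noise = rng.standard_normal((2000, 4))

mi_info = mi_lower_bound(z_informative, y)   # high: label is recoverable
mi_noise = mi_lower_bound(z_noise, y)        # near zero: no information
```

The gap between the two estimates is the point of the probing approach: representations that linearly encode the target yield a much larger bound than uninformative ones, without needing labels at representation-learning time.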
arXiv Detail & Related papers (2024-01-16T21:13:22Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Lessons Learned from EXMOS User Studies: A Technical Report Summarizing
Key Takeaways from User Studies Conducted to Evaluate The EXMOS Platform [5.132827811038276]
We conducted two user studies to illuminate the influence of different explanation types on three key dimensions: trust, understandability, and model improvement.
Results show that global model-centric explanations alone are insufficient for effectively guiding users during the intricate process of data configuration.
We present essential implications for developing interactive machine-learning systems driven by explanations.
arXiv Detail & Related papers (2023-10-03T14:04:45Z) - Understanding User Intent Modeling for Conversational Recommender
Systems: A Systematic Literature Review [1.3630870408844922]
We conducted a systematic literature review to gather data on models typically employed in designing conversational recommender systems.
We developed a decision model to assist researchers in selecting the most suitable models for their systems.
Our study contributes practical insights and a comprehensive understanding of user intent modeling, empowering the development of more effective and personalized conversational recommender systems.
arXiv Detail & Related papers (2023-08-05T22:50:21Z) - User Simulation for Evaluating Information Access Systems [38.48048183731099]
Evaluating the effectiveness of interactive intelligent systems is a complex scientific challenge.
This book provides a thorough understanding of user simulation techniques designed specifically for evaluation.
It covers both general frameworks for designing user simulators, and specific models and algorithms for simulating user interactions with search engines, recommender systems, and conversational assistants.
arXiv Detail & Related papers (2023-06-14T14:54:06Z) - A User-Centered, Interactive, Human-in-the-Loop Topic Modelling System [32.065158970382036]
Human-in-the-loop topic modelling incorporates users' knowledge into the modelling process, enabling them to refine the model iteratively.
Recent research has demonstrated the value of user feedback, but there are still issues to consider.
We developed a novel, interactive human-in-the-loop topic modeling system with a user-friendly interface.
arXiv Detail & Related papers (2023-04-04T13:05:10Z) - Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - What is wrong with you?: Leveraging User Sentiment for Automatic Dialog
Evaluation [73.03318027164605]
We propose to use information that can be automatically extracted from the next user utterance as a proxy to measure the quality of the previous system response.
Our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
arXiv Detail & Related papers (2022-03-25T22:09:52Z) - How to Answer Why -- Evaluating the Explanations of AI Through Mental
Model Analysis [0.0]
A key question for human-centered AI research is how to validly survey users' mental models.
We evaluate whether mental models are suitable as an empirical research method.
We propose an exemplary method to evaluate explainable AI approaches in a human-centered way.
arXiv Detail & Related papers (2020-01-11T17:15:58Z) - A System for Real-Time Interactive Analysis of Deep Learning Training [66.06880335222529]
Currently available systems are limited to monitoring only the logged data that must be specified before the training process starts.
We present a new system that enables users to perform interactive queries on live processes generating real-time information.
arXiv Detail & Related papers (2020-01-05T11:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.