SalienTrack: providing salient information for semi-automated
self-tracking feedback with model explanations
- URL: http://arxiv.org/abs/2109.10231v1
- Date: Tue, 21 Sep 2021 14:53:47 GMT
- Title: SalienTrack: providing salient information for semi-automated
self-tracking feedback with model explanations
- Authors: Yunlong Wang, Jiaying Liu, Homin Park, Jordan Schultz-McArdle,
Stephanie Rosenthal, Brian Y Lim
- Abstract summary: We propose a Self-Tracking Feedback Saliency Framework to define when to provide feedback and how to present it.
We train a machine learning model to predict whether a user would learn from each tracked event.
We discuss implications for learnability in self-tracking, and how adding model explainability expands opportunities for improving feedback experience.
- Score: 31.87742708229638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-tracking can improve people's awareness of their unhealthy
behaviors and provide insights towards behavior change. Prior work has explored how
self-trackers reflect on their logged data, but it remains unclear how much
they learn from the tracking feedback, and which information is more useful.
Indeed, the feedback can still be overwhelming, and making it concise can
improve learning by increasing focus and reducing interpretation burden. We
conducted a field study of mobile food logging with two feedback modes (manual
journaling and automatic annotation of food images) and identified learning
differences regarding nutrition, assessment, behavioral, and contextual
information. We propose a Self-Tracking Feedback Saliency Framework to define
when to provide feedback, on which specific information, why those details, and
how to present them (as manual inquiry or automatic feedback). We propose
SalienTrack to implement these requirements. Using the data collected from the
user study, we trained a machine learning model to predict whether a user would
learn from each tracked event. Using explainable AI (XAI) techniques, we
identified the most salient features per instance and why they lead to positive
learning outcomes. We discuss implications for learnability in self-tracking,
and how adding model explainability expands opportunities for improving
feedback experience.
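The abstract describes a two-step pipeline (a learnability classifier plus per-instance explanations) without implementation details. Below is a minimal sketch of that pipeline, assuming tabular per-event features; the feature names, the gradient-boosted classifier, and the use of SHAP are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the pipeline the abstract describes: train a classifier
# to predict whether a user learns from a tracked event, then use SHAP to
# rank the most salient features per instance. Feature names and the
# model/XAI choices are illustrative assumptions, not the paper's setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-event features from a food-logging app.
FEATURES = ["calories", "sugar_g", "meal_hour", "is_snack", "days_tracked"]

def train_learnability_model(events: pd.DataFrame):
    """events: one row per tracked event; 'learned' is a 0/1 label from
    the field study indicating whether the user reported learning."""
    X, y = events[FEATURES], events["learned"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))
    return model

def salient_features(model, event_row: pd.DataFrame, k: int = 3):
    """Rank the k most salient features for one tracked event
    (per-instance SHAP attribution in log-odds of 'learned')."""
    phi = shap.TreeExplainer(model).shap_values(event_row)[0]
    order = np.argsort(-np.abs(phi))[:k]
    return [(FEATURES[i], float(phi[i])) for i in order]
```

Under the proposed framework, feedback for an event the model predicts as learnable could then be reduced to just these top-ranked details.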
Related papers
- Effects of Multimodal Explanations for Autonomous Driving on Driving Performance, Cognitive Load, Expertise, Confidence, and Trust [2.9143343479274675]
We tested the impact of an AI Coach's explanatory communications modeled after performance driving expert instructions.
Results show AI coaching can effectively teach performance driving skills to novices.
We suggest opting for efficient, modality-appropriate explanations when designing effective HMI communications.
arXiv Detail & Related papers (2024-01-08T19:33:57Z)
- Learning by Self-Explaining [23.420673675343266]
We introduce a novel workflow in the context of image classification, termed Learning by Self-Explaining (LSX).
LSX utilizes aspects of self-refining AI and human-guided explanatory machine learning.
Our results indicate improvements via Learning by Self-Explaining on several levels.
arXiv Detail & Related papers (2023-09-15T13:41:57Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
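As a hedged illustration of moving from instance-level to dataset-level explanations, the sketch below averages per-instance SHAP attributions over a training set, drops features with negligible global attribution, and retrains. The tabular setting and the mean-|SHAP| threshold are assumptions for illustration, not the actual SOXAI procedure, which operates on concepts in image data.

```python
# Illustrative only: a dataset-level ("second-order") use of explanations.
# Aggregate per-instance SHAP values over the whole training set, drop
# globally negligible features, and retrain. The threshold rule is an
# assumption, not the SOXAI method.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def prune_and_retrain(X_train, y_train, feature_names, threshold=0.01):
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    phi = shap.TreeExplainer(model).shap_values(X_train)
    if isinstance(phi, list):   # older shap: one array per class
        phi = phi[1]
    elif phi.ndim == 3:         # newer shap: (n, features, classes)
        phi = phi[..., 1]
    global_importance = np.abs(phi).mean(axis=0)  # dataset-level view
    keep = global_importance >= threshold
    print("dropping:", [n for n, k in zip(feature_names, keep) if not k])
    return RandomForestClassifier(random_state=0).fit(X_train[:, keep], y_train)
```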
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Learning Transferable Pedestrian Representation from Multimodal Information Supervision [174.5150760804929]
VAL-PAT is a novel framework that learns transferable representations to enhance various pedestrian analysis tasks with multimodal information.
We first perform pre-training on LUPerson-TA dataset, where each image contains text and attribute annotations.
We then transfer the learned representations to various downstream tasks, including person reID, person attribute recognition and text-based person search.
arXiv Detail & Related papers (2023-04-12T01:20:58Z)
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
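A toy sketch of this setup, assuming a linear softmax policy over candidate answer spans, with user feedback simulated by comparing the sampled span to the gold answer from supervised data. The featurization and the plain REINFORCE update are illustrative, not the paper's exact training method.

```python
# Toy bandit learning for extractive QA: sample an answer span from a
# softmax policy, receive simulated user feedback (reward 1 if it matches
# the gold span), and apply a REINFORCE update. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)

def reinforce_step(theta, span_feats, gold_idx, lr=0.1):
    """span_feats: (n_spans, d) features for one question's candidate spans."""
    logits = span_feats @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(len(probs), p=probs)       # model-predicted answer
    reward = 1.0 if a == gold_idx else 0.0    # simulated user feedback
    # REINFORCE: reward * grad log pi(a) = reward * (x_a - E[x])
    grad = reward * (span_feats[a] - probs @ span_feats)
    return theta + lr * grad, reward
```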
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- Learning Reward Functions from Scale Feedback [11.941038991430837]
A common framework is to iteratively query the user about which of two presented robot trajectories they prefer.
We propose scale feedback, where the user moves a slider to give more nuanced information.
We demonstrate the performance benefit of slider feedback in simulations, and validate our approach in two user studies.
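A minimal sketch of one way to use such feedback, assuming a reward linear in trajectory features and a squared loss that matches the predicted reward gap to the slider value; this loss is a modeling assumption for illustration, not necessarily the paper's likelihood.

```python
# Learning a linear reward w . phi from scale (slider) feedback: the user
# returns s in [-1, 1] for a pair of trajectories (positive = prefers A),
# and we fit tanh of the predicted reward gap to s. Illustrative sketch.
import numpy as np

def scale_feedback_update(w, phi_a, phi_b, s, lr=0.05):
    """phi_a, phi_b: feature vectors of the two shown trajectories;
    s: slider value in [-1, 1]."""
    diff = phi_a - phi_b
    pred = np.tanh(w @ diff)                   # predicted preference strength
    grad = (pred - s) * (1 - pred**2) * diff   # d/dw of 0.5 * (pred - s)^2
    return w - lr * grad
```

Relative to binary preference queries, the slider value carries strength information, so each query constrains the reward weights more tightly.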
arXiv Detail & Related papers (2021-10-01T09:45:18Z)
- Crop-Transform-Paste: Self-Supervised Learning for Visual Tracking [137.26381337333552]
In this work, we develop the Crop-Transform-Paste operation, which is able to synthesize sufficient training data.
Since the object state is known in all synthesized data, existing deep trackers can be trained in routine ways without human annotation.
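A minimal sketch of the operation, assuming PIL images and a known target box in the source frame; the specific transforms (flip, rescale) are illustrative choices, not the paper's full augmentation set.

```python
# Crop-Transform-Paste sketch: cut the target out of a frame using its
# known box, randomly transform it, and paste it into another frame. The
# paste location yields a free ground-truth box, so a tracker can be
# trained without human annotation. Transform choices are illustrative.
import numpy as np
from PIL import Image

def crop_transform_paste(frame: Image.Image, box, background: Image.Image, rng):
    """box = (left, top, right, bottom) of the target in `frame`.
    Returns (synthesized image, ground-truth box)."""
    patch = frame.crop(box)
    if rng.random() < 0.5:                       # random horizontal flip
        patch = patch.transpose(Image.FLIP_LEFT_RIGHT)
    scale = rng.uniform(0.7, 1.3)                # random rescale
    w = max(1, int(patch.width * scale))
    h = max(1, int(patch.height * scale))
    patch = patch.resize((w, h))
    # Paste at a random location; the new box is known by construction.
    x = int(rng.integers(0, max(1, background.width - w)))
    y = int(rng.integers(0, max(1, background.height - h)))
    out = background.copy()
    out.paste(patch, (x, y))
    return out, (x, y, x + w, y + h)
```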
arXiv Detail & Related papers (2021-06-21T07:40:34Z)
- How Useful is Self-Supervised Pretraining for Visual Tasks? [133.1984299177874]
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.