Perspectives on Incorporating Expert Feedback into Model Updates
- URL: http://arxiv.org/abs/2205.06905v1
- Date: Fri, 13 May 2022 21:46:55 GMT
- Title: Perspectives on Incorporating Expert Feedback into Model Updates
- Authors: Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet
Talwalkar
- Abstract summary: We devise a taxonomy to match expert feedback types with practitioner updates.
A practitioner may receive feedback from an expert at the observation- or domain-level.
We review existing work from ML and human-computer interaction to describe this feedback-update taxonomy.
- Score: 46.99664744930785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) practitioners are increasingly tasked with developing
models that are aligned with non-technical experts' values and goals. However,
there has been insufficient consideration of how practitioners should translate
domain expertise into ML updates. In this paper, we consider how to capture
interactions between practitioners and experts systematically. We devise a
taxonomy to match expert feedback types with practitioner updates. A
practitioner may receive feedback from an expert at the observation- or
domain-level, and convert this feedback into updates to the dataset, loss
function, or parameter space. We review existing work from ML and
human-computer interaction to describe this feedback-update taxonomy, and
highlight the insufficient consideration given to incorporating feedback from
non-technical experts. We end with a set of open questions that naturally arise
from our proposed taxonomy and subsequent survey.
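To make the feedback-update taxonomy concrete, the sketch below renders it as a small Python data structure; the enum names and the routing rules are hypothetical illustrations distilled from the abstract, not code or an API from the paper.

```python
from enum import Enum, auto

class FeedbackLevel(Enum):
    """Where the expert's feedback applies, per the paper's taxonomy."""
    OBSERVATION = auto()  # feedback about individual examples or predictions
    DOMAIN = auto()       # feedback about global rules or constraints

class UpdateTarget(Enum):
    """Where the practitioner can apply the corresponding update."""
    DATASET = auto()
    LOSS_FUNCTION = auto()
    PARAMETER_SPACE = auto()

def candidate_updates(level: FeedbackLevel) -> list[UpdateTarget]:
    """Hypothetical routing from a feedback level to plausible update targets."""
    if level is FeedbackLevel.OBSERVATION:
        # e.g., an expert corrects one mislabeled record -> relabel the dataset
        return [UpdateTarget.DATASET]
    # e.g., an expert states "risk must not decrease with age"
    return [UpdateTarget.LOSS_FUNCTION, UpdateTarget.PARAMETER_SPACE]

print(candidate_updates(FeedbackLevel.DOMAIN))
```

In the paper's full taxonomy either feedback level can in principle map to any update type; the routing above picks one plausible mapping per level purely for illustration.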
Related papers
- The Future of Open Human Feedback [65.2188596695235]
We bring together interdisciplinary experts to assess the opportunities and challenges to realizing an open ecosystem of human feedback for AI.
We first look for successful practices in peer production, open source, and citizen science communities.
We end by envisioning the components needed to underpin a sustainable and open human feedback ecosystem.
arXiv Detail & Related papers (2024-08-15T17:59:14Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversational setting has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors [43.42054421125617]
Existing mechanisms for providing feedback largely rely on human supervision.
Our work aims to leverage large language models to provide contextualized and multi-level feedback to empower peer counselors.
arXiv Detail & Related papers (2024-03-21T04:23:56Z)
- Eliciting Model Steering Interactions from Users via Data and Visual Design Probes [8.45602005745865]
Domain experts increasingly use automated data science tools to incorporate machine learning (ML) models in their work but struggle to "codify" these models when they are incorrect.
For these experts, semantic interactions can provide an accessible avenue to guide and refine ML models without having to dive into their technical details.
This study examines how experts with a spectrum of ML expertise use semantic interactions to update a simple classification model.
arXiv Detail & Related papers (2023-10-12T20:34:02Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
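As a rough, self-contained sketch of such a feedback loop (the ToyQA class and its pseudo-labeling heuristic are invented for illustration and are not the paper's method or data):

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    question: str
    predicted_answer: str
    reward: int  # +1 if the user marked the answer helpful, -1 otherwise

class ToyQA:
    """Invented stand-in model: remembers answers that users approved of."""
    def __init__(self) -> None:
        self.memory: dict[str, str] = {}

    def predict(self, question: str, context: str) -> str:
        # Crude extractive heuristic: return the first sentence of the context.
        return self.memory.get(question, context.split(".")[0])

    def update(self, positives: list[FeedbackRecord]) -> None:
        # Treat positively rated answers as pseudo-labels for future rounds.
        for r in positives:
            self.memory[r.question] = r.predicted_answer

model = ToyQA()
log: list[FeedbackRecord] = []

# Simulated deployment: a user rates the predicted answer.
answer = model.predict("Who wrote it?", "Ada wrote the memo. It shipped Friday.")
log.append(FeedbackRecord("Who wrote it?", answer, reward=+1))

# Periodic improvement round over the accumulated feedback.
model.update([r for r in log if r.reward > 0])
```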
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values [27.333641578187887]
We develop GAM Changer, the first interactive system to help data scientists and domain experts edit Generalized Additive Models (GAMs).
With novel interaction techniques, our tool puts interpretability into action, empowering users to analyze, validate, and align model behaviors with their knowledge and values.
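As a toy illustration of what editing a GAM can mean (not GAM Changer's actual interface), the sketch below overwrites a fitted shape function so the model obeys a domain rule, here forcing the age contribution to be non-decreasing:

```python
import numpy as np

# Toy GAM: the prediction is a sum of per-feature shape functions,
# each stored as a lookup table over binned feature values.
age_bins = np.arange(20, 80, 10)                      # bin edges 20, 30, ..., 70
age_shape = np.array([0.3, 0.1, 0.2, 0.5, 0.4, 0.6])  # learned contribution per bin

def age_contribution(age: float) -> float:
    """Look up the age term's additive contribution for one input."""
    idx = int(np.searchsorted(age_bins, age, side="right")) - 1
    return float(age_shape[min(max(idx, 0), len(age_shape) - 1)])

# Expert edit: risk should never decrease with age, so replace the learned
# shape with its running maximum to enforce monotonicity across bins.
age_shape = np.maximum.accumulate(age_shape)

print(age_contribution(45))  # now reflects the edited, monotone shape function
```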
arXiv Detail & Related papers (2022-06-30T17:57:12Z)
- An Exploratory Analysis of Feedback Types Used in Online Coding Exercises [0.0]
This research aims to identify the feedback types used by CodingBat, Scratch, and Blockly.
The study revealed difficulties in identifying clear-cut boundaries between feedback types.
arXiv Detail & Related papers (2022-06-07T07:52:17Z)
- Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts [7.768301998812552]
We describe an iterative study conducted with both subject matter experts and data scientists to understand the gaps in communication.
We derive a set of communication guidelines that use visualization as a common medium for communicating the strengths and weaknesses of a model.
arXiv Detail & Related papers (2022-05-11T19:40:24Z)
- The Need for Interpretable Features: Motivation and Taxonomy [69.07189753428553]
We claim that the term "interpretable feature" is neither specific nor detailed enough to capture the full extent to which features impact the usefulness of machine learning explanations.
In this paper, we motivate and discuss three key lessons: 1) more attention should be given to what we refer to as the interpretable feature space, or the state of features that are useful to domain experts taking real-world actions.
arXiv Detail & Related papers (2022-02-23T19:19:14Z)
- Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen [88.30492014778943]
We propose a new task of expertise style transfer and contribute a manually annotated dataset.
Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen descriptions.
We establish the benchmark performance of five state-of-the-art models for style transfer and text simplification.
arXiv Detail & Related papers (2020-05-02T04:50:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.