(Re)Defining Expertise in Machine Learning Development
- URL: http://arxiv.org/abs/2302.04337v1
- Date: Wed, 8 Feb 2023 21:10:20 GMT
- Title: (Re)Defining Expertise in Machine Learning Development
- Authors: Mark Díaz, Angela D. R. Smith
- Abstract summary: We conduct a systematic literature review of machine learning research to understand 1) the bases on which expertise is defined and recognized and 2) the roles experts play in ML development.
Our goal is to produce a high-level taxonomy to highlight limits and opportunities in how experts are identified and engaged in ML research.
- Score: 3.096615629099617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain experts are often engaged in the development of machine learning
systems in a variety of ways, such as in data collection and evaluation of
system performance. At the same time, who counts as an 'expert' and what
constitutes 'expertise' is not always explicitly defined. In this project, we
conduct a systematic literature review of machine learning research to
understand 1) the bases on which expertise is defined and recognized and 2) the
roles experts play in ML development. Our goal is to produce a high-level
taxonomy to highlight limits and opportunities in how experts are identified
and engaged in ML research.
Related papers
- Expert-Agnostic Learning to Defer [4.171294900540735]
We introduce EA-L2D: Expert-Agnostic Learning to Defer, a novel L2D framework that leverages a Bayesian approach to model expert behaviour.
We observe performance gains over the next state-of-the-art of 1-16% for seen experts and 4-28% for unseen experts in settings with high expert diversity.
arXiv Detail & Related papers (2025-02-14T19:59:25Z)
- PIKE-RAG: sPecIalized KnowledgE and Rationale Augmented Generation [16.081923602156337]
We introduce sPecIalized KnowledgE and Rationale Augmented Generation (PIKE-RAG).
We focus on extracting, understanding, and applying specialized knowledge, while constructing coherent rationale to incrementally steer LLMs toward accurate responses.
This strategic approach offers a roadmap for the phased development and enhancement of RAG systems, tailored to meet the evolving demands of industrial applications.
arXiv Detail & Related papers (2025-01-20T15:39:39Z)
- What Makes An Expert? Reviewing How ML Researchers Define "Expert" [4.6346970187885885]
We review 112 academic publications that explicitly reference 'expert' and 'expertise'.
We find that expertise is often undefined and forms of knowledge outside of formal education are rarely sought.
We discuss the ways experts are engaged in ML development in relation to deskilling, the social construction of expertise, and implications for responsible AI development.
arXiv Detail & Related papers (2024-10-31T19:51:28Z)
- Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs [64.9693406713216]
Internal mechanisms that contribute to the effectiveness of RAG systems remain underexplored.
Our experiments reveal that several core groups of experts are primarily responsible for RAG-related behaviors.
We propose several strategies to enhance RAG's efficiency and effectiveness through expert activation.
arXiv Detail & Related papers (2024-10-20T16:08:54Z)
- The FIX Benchmark: Extracting Features Interpretable to eXperts [9.688218822056823]
We present FIX (Features Interpretable to eXperts), a benchmark for measuring how well a collection of features aligns with expert knowledge.
In collaboration with domain experts, we propose FIXScore, a unified expert alignment measure applicable to diverse real-world settings.
We find that popular feature-based explanation methods have poor alignment with expert-specified knowledge.
arXiv Detail & Related papers (2024-09-20T17:53:03Z)
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
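The three-step extractor described above can be sketched as a minimal pipeline. This is an illustrative sketch only, not the paper's implementation; every function and variable name (prepare_knowledge, select_knowledge, express_knowledge, the keyword-overlap retriever) is a hypothetical stand-in for the components DOKE would plug in.

```python
# Hypothetical sketch of a DOKE-style three-step knowledge extractor.
# Names and the toy keyword-overlap retriever are illustrative assumptions.

def prepare_knowledge(task_corpus):
    """Step 1: collect candidate knowledge snippets relevant to the task."""
    return [doc for doc in task_corpus if doc.get("text")]

def select_knowledge(sample, candidates, k=2):
    """Step 2: pick the snippets most relevant to this specific sample
    (naive keyword overlap stands in for a real retriever)."""
    def overlap(doc):
        return len(set(sample.lower().split()) & set(doc["text"].lower().split()))
    return sorted(candidates, key=overlap, reverse=True)[:k]

def express_knowledge(sample, selected):
    """Step 3: render the selected knowledge in an LLM-understandable prompt."""
    facts = "\n".join(f"- {doc['text']}" for doc in selected)
    return f"Known facts:\n{facts}\n\nQuestion: {sample}"

corpus = [
    {"text": "Aspirin thins the blood."},
    {"text": "Paris is the capital of France."},
]
question = "Does aspirin thin the blood?"
prompt = express_knowledge(
    question,
    select_knowledge(question, prepare_knowledge(corpus)),
)
print(prompt)
```

In a real system, step 2 would be a learned retriever and step 3 a task-specific prompt template; the pipeline shape is what the paradigm fixes.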
arXiv Detail & Related papers (2023-11-16T07:09:38Z)
- Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach [50.125704610228254]
Large Language Models (LLMs) have not only exhibited exceptional performance across various tasks, but also demonstrated sparks of intelligence.
Recent studies have focused on assessing their capabilities on human exams and revealed their impressive competence in different domains.
We conduct an evaluation using MoocRadar, a meticulously annotated human test dataset based on Bloom's taxonomy.
arXiv Detail & Related papers (2023-10-12T09:55:45Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- The Expertise Level [0.0]
This paper examines the nature of expertise and presents an abstract knowledge-level and skill-level description of expertise.
A new level lying above the Knowledge Level, called the Expertise Level, is introduced to describe the skills of an expert without having to worry about details of the knowledge required.
The Model of Expertise is introduced, combining the knowledge-level and expertise-level descriptions.
arXiv Detail & Related papers (2022-11-11T20:55:11Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.