What Makes An Expert? Reviewing How ML Researchers Define "Expert"
- URL: http://arxiv.org/abs/2411.00179v1
- Date: Thu, 31 Oct 2024 19:51:28 GMT
- Title: What Makes An Expert? Reviewing How ML Researchers Define "Expert"
- Authors: Mark Díaz, Angela DR Smith
- Abstract summary: We review 112 academic publications that explicitly reference 'expert' and 'expertise'.
We find that expertise is often undefined and forms of knowledge outside of formal education are rarely sought.
We discuss the ways experts are engaged in ML development in relation to deskilling, the social construction of expertise, and implications for responsible AI development.
- Score: 4.6346970187885885
- Abstract: Human experts are often engaged in the development of machine learning systems to collect and validate data, consult on algorithm development, and evaluate system performance. At the same time, who counts as an 'expert' and what constitutes 'expertise' is not always explicitly defined. In this work, we review 112 academic publications that explicitly reference 'expert' and 'expertise' and that describe the development of machine learning (ML) systems to survey how expertise is characterized and the role experts play. We find that expertise is often undefined and forms of knowledge outside of formal education and professional certification are rarely sought, which has implications for the kinds of knowledge that are recognized and legitimized in ML development. Moreover, we find that expert knowledge tends to be utilized in ways focused on mining textbook knowledge, such as through data annotation. We discuss the ways experts are engaged in ML development in relation to deskilling, the social construction of expertise, and implications for responsible AI development. We point to a need for reflection and specificity in justifications of domain expert engagement, both as a matter of documentation and reproducibility, as well as a matter of broadening the range of recognized expertise.
Related papers
- Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs [11.860265967829884]
Large Language Models (LLMs) enhance their problem-solving capability by leveraging both parametric and external knowledge.
We introduce a framework for analyzing knowledge-handling based on two key dimensions: the presence of parametric knowledge and the informativeness of external knowledge.
We demonstrate that training on data constructed based on the knowledge-handling scenarios improves LLMs' reliability in integrating and utilizing knowledge.
arXiv Detail & Related papers (2025-02-19T11:49:23Z)
- PIKE-RAG: sPecIalized KnowledgE and Rationale Augmented Generation [16.081923602156337]
We introduce sPecIalized KnowledgE and Rationale Augmented Generation (PIKE-RAG).
We focus on extracting, understanding, and applying specialized knowledge, while constructing coherent rationale to incrementally steer LLMs toward accurate responses.
This strategic approach offers a roadmap for the phased development and enhancement of RAG systems, tailored to meet the evolving demands of industrial applications.
arXiv Detail & Related papers (2025-01-20T15:39:39Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, a framework designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Expert-sourcing Domain-specific Knowledge: The Case of Synonym
Validation [14.51095331294056]
We illustrate tool support that we adopted and extended to source domain-specific knowledge from experts.
We provide insight into design decisions aimed at motivating experts to dedicate their time to the labelling task.
We foresee that the approach of expert-sourcing is applicable to any data labelling task in software engineering.
arXiv Detail & Related papers (2023-09-28T19:02:33Z) - Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z) - UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language
Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z) - (Re)Defining Expertise in Machine Learning Development [3.096615629099617]
We conduct a systematic literature review of machine learning research to understand 1) the bases on which expertise is defined and recognized and 2) the roles experts play in ML development.
Our goal is to produce a high-level taxonomy to highlight limits and opportunities in how experts are identified and engaged in ML research.
arXiv Detail & Related papers (2023-02-08T21:10:20Z) - The Expertise Level [0.0]
This paper examines the nature of expertise and presents an abstract knowledge-level and skill-level description of expertise.
A new level lying above the Knowledge Level, called the Expertise Level, is introduced to describe the skills of an expert without having to worry about details of the knowledge required.
The Model of Expertise is introduced combining the knowledge-level and expertise-level descriptions.
arXiv Detail & Related papers (2022-11-11T20:55:11Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation, aimed at better user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.