The Expertise Level
- URL: http://arxiv.org/abs/2212.10435v1
- Date: Fri, 11 Nov 2022 20:55:11 GMT
- Title: The Expertise Level
- Authors: Ron Fulbright
- Abstract summary: This paper examines the nature of expertise and presents an abstract knowledge-level and skill-level description of expertise.
A new level lying above the Knowledge Level, called the Expertise Level, is introduced to describe the skills of an expert without having to worry about details of the knowledge required.
The Model of Expertise is introduced combining the knowledge-level and expertise-level descriptions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computers are quickly gaining on us. Artificial systems are now exceeding the
performance of human experts in several domains. However, we do not yet have a
deep definition of expertise. This paper examines the nature of expertise and
presents an abstract knowledge-level and skill-level description of expertise.
A new level lying above the Knowledge Level, called the Expertise Level, is
introduced to describe the skills of an expert without having to worry about
details of the knowledge required. The Model of Expertise is introduced
combining the knowledge-level and expertise-level descriptions. Application of
the model to the fields of cognitive architectures and human cognitive
augmentation is demonstrated and several famous intelligent systems are
analyzed with the model.
Related papers
- What Makes An Expert? Reviewing How ML Researchers Define "Expert" [4.6346970187885885]
We review 112 academic publications that explicitly reference 'expert' and 'expertise'.
We find that expertise is often undefined and forms of knowledge outside of formal education are rarely sought.
We discuss the ways experts are engaged in ML development in relation to deskilling, the social construction of expertise, and implications for responsible AI development.
arXiv Detail & Related papers (2024-10-31T19:51:28Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Thrill-K Architecture: Towards a Solution to the Problem of Knowledge Based Understanding [0.9390008801320021]
We introduce a classification of hybrid systems which, based on an analysis of human knowledge and intelligence, combines neural learning with various types of knowledge and knowledge sources.
We present the Thrill-K architecture as a prototypical solution for integrating instantaneous knowledge, standby knowledge and external knowledge sources in a framework capable of inference, learning and intelligent control.
arXiv Detail & Related papers (2023-02-28T20:39:35Z)
- (Re)Defining Expertise in Machine Learning Development [3.096615629099617]
We conduct a systematic literature review of machine learning research to understand 1) the bases on which expertise is defined and recognized and 2) the roles experts play in ML development.
Our goal is to produce a high-level taxonomy to highlight limits and opportunities in how experts are identified and engaged in ML research.
arXiv Detail & Related papers (2023-02-08T21:10:20Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework for knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.