Explaining Expert Search and Team Formation Systems with ExES
- URL: http://arxiv.org/abs/2405.12881v1
- Date: Tue, 21 May 2024 15:53:35 GMT
- Title: Explaining Expert Search and Team Formation Systems with ExES
- Authors: Kiarash Golzadeh, Lukasz Golab, Jaroslaw Szlichta
- Abstract summary: Expert search and team formation systems operate on collaboration networks.
Given a keyword query corresponding to the desired skills, these systems identify experts that best match the query.
We propose ExES, a tool designed to explain expert search and team formation systems using factual and counterfactual methods.
- Score: 8.573682949137085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Expert search and team formation systems operate on collaboration networks, with nodes representing individuals, labeled with their skills, and edges denoting collaboration relationships. Given a keyword query corresponding to the desired skills, these systems identify experts that best match the query. However, state-of-the-art solutions to this problem lack transparency. To address this issue, we propose ExES, a tool designed to explain expert search and team formation systems using factual and counterfactual methods from the field of explainable artificial intelligence (XAI). ExES uses factual explanations to highlight important skills and collaborations, and counterfactual explanations to suggest new skills and collaborations to increase the likelihood of being identified as an expert. Towards a practical deployment as an interactive explanation tool, we present and experimentally evaluate a suite of pruning strategies to speed up the explanation search. In many cases, our pruning strategies make ExES an order of magnitude faster than exhaustive search, while still producing concise and actionable explanations.
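To make the factual/counterfactual idea concrete, below is a minimal, hedged sketch in Python. It is not the ExES implementation: the toy collaboration network, the scoring function, and the perturbation loop are invented for illustration, and none of the paper's pruning strategies are shown.

```python
# Illustrative sketch of perturbation-style factual and counterfactual explanations
# for expert search. NOT the ExES implementation: the toy network, the scoring
# function, and the thresholds below are assumptions made for this example.

# Collaboration network: each person is labeled with skills; edges are collaborations.
skills = {
    "alice": {"ml", "nlp"},
    "bob": {"databases", "ml"},
    "carol": {"nlp", "ir"},
}
collabs = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
}

def score(skills, person, query):
    """Toy relevance score: the person's own matching skills count double,
    collaborators' matching skills count once."""
    own = len(skills[person] & query)
    nbrs = sum(len(skills[n] & query) for n in collabs[person])
    return 2 * own + nbrs

def factual_explanation(skills, person, query):
    """Factual explanation: query skills of `person` whose removal lowers the score,
    i.e. the skills that support ranking this person highly."""
    base = score(skills, person, query)
    important = []
    for s in skills[person] & query:
        perturbed = {**skills, person: skills[person] - {s}}
        if score(perturbed, person, query) < base:
            important.append(s)
    return important

def counterfactual_explanation(skills, person, query, target):
    """Counterfactual explanation: missing query skills whose addition would lift
    the person's score to at least `target` (e.g. the current top expert's score)."""
    suggestions = []
    for s in query - skills[person]:
        perturbed = {**skills, person: skills[person] | {s}}
        if score(perturbed, person, query) >= target:
            suggestions.append(s)
    return suggestions

query = {"ml", "nlp"}
ranking = sorted(skills, key=lambda p: score(skills, p, query), reverse=True)
top = ranking[0]
print("ranking:", ranking)  # ['alice', 'bob', 'carol']
print("factual for", top, ":", factual_explanation(skills, top, query))
print("counterfactual for bob:",
      counterfactual_explanation(skills, "bob", query, score(skills, top, query)))
```

Per the abstract, ExES applies this kind of perturbation reasoning to both skills and collaboration edges and uses pruning to avoid the exhaustive search shown here.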
Related papers
- PromptHive: Bringing Subject Matter Experts Back to the Forefront with Collaborative Prompt Engineering for Educational Content Creation [8.313693615194309]
In this work, we introduce PromptHive, a collaborative interface for prompt authoring, designed to better connect domain knowledge with prompt engineering.
We conducted an evaluation study with ten subject matter experts in math and validated our design through two collaborative prompt-writing sessions and a learning gain study with 358 learners.
Our results elucidate the prompt iteration process and validate the tool's usability, enabling non-AI experts to craft prompts that generate content comparable to human-authored materials.
arXiv Detail & Related papers (2024-10-21T22:18:24Z)
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on knowledge tagging tasks for math questions.
By proposing a reinforcement learning-based demonstration retriever, we exploit the potential of LLMs of different sizes.
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
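As a rough illustration of the demonstration-retrieval idea (not the paper's method), the sketch below assembles a few-shot tagging prompt from retrieved examples; the demonstration pool, the word-overlap retriever standing in for the reinforcement-learning retriever, and the prompt format are all assumptions.

```python
# Sketch of few-shot knowledge tagging with retrieved demonstrations.
# The pool, retriever, and prompt format are invented for illustration.
DEMO_POOL = [
    {"question": "Solve 2x + 3 = 7 for x.", "tags": ["linear equations"]},
    {"question": "Find the area of a circle with radius 3.", "tags": ["geometry", "circles"]},
    {"question": "What is the derivative of x^2?", "tags": ["calculus", "derivatives"]},
]

def retrieve_demonstrations(question, pool, k=2):
    """Pick the k demonstrations with the highest word overlap with the question
    (a trivial stand-in for the paper's RL-based retriever)."""
    q_words = set(question.lower().split())
    def overlap(demo):
        return len(q_words & set(demo["question"].lower().split()))
    return sorted(pool, key=overlap, reverse=True)[:k]

def build_prompt(question, demos):
    """Assemble a few-shot prompt: retrieved (question, tags) pairs, then the new question."""
    lines = ["Tag each math question with its knowledge concepts."]
    for d in demos:
        lines.append(f"Question: {d['question']}\nTags: {', '.join(d['tags'])}")
    lines.append(f"Question: {question}\nTags:")
    return "\n\n".join(lines)

new_question = "Solve 5x - 2 = 13 for x."
prompt = build_prompt(new_question, retrieve_demonstrations(new_question, DEMO_POOL))
print(prompt)  # this prompt would then be sent to an LLM to produce the tags
```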
- PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization [60.00631098364391]
PromptAgent is an optimization method that crafts expert-level prompts equivalent in quality to those handcrafted by experts.
Inspired by human-like trial-and-error exploration, PromptAgent induces precise expert-level insights and in-depth instructions.
We apply PromptAgent to 12 tasks spanning three practical domains.
arXiv Detail & Related papers (2023-10-25T07:47:01Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework for generating and evaluating explanations according to users' differing cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Duplicate Detection as a Service [0.0]
Duplicate detection aims to find identity links between instances of knowledge graphs.
Current solutions to the problem require expert knowledge of the tool and the knowledge graph they are applied to.
We present a service-based approach to duplicate detection that provides an easy-to-use, no-code solution.
arXiv Detail & Related papers (2022-07-20T06:02:11Z)
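As a loose illustration of duplicate detection as identity-link discovery (not the paper's service), the sketch below links instances whose attribute values are sufficiently similar; the records, the similarity measure, and the threshold are invented.

```python
# Sketch of duplicate detection as finding identity (sameAs) links between
# knowledge-graph instances; records, similarity, and threshold are assumptions.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def attribute_similarity(x: dict, y: dict) -> float:
    """Average Jaccard similarity of word sets over the attributes both instances share."""
    shared = set(x) & set(y)
    if not shared:
        return 0.0
    sims = [jaccard(set(str(x[k]).lower().split()), set(str(y[k]).lower().split()))
            for k in shared]
    return sum(sims) / len(sims)

instances = {
    "kg1:person/42": {"name": "J. Smith", "affiliation": "University of Waterloo"},
    "kg2:author/7":  {"name": "John Smith", "affiliation": "University of Waterloo"},
    "kg2:author/9":  {"name": "Jane Doe", "affiliation": "York University"},
}

# Propose an identity link for every pair above a similarity threshold.
THRESHOLD = 0.5
ids = list(instances)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        sim = attribute_similarity(instances[a], instances[b])
        if sim >= THRESHOLD:
            print(f"{a} sameAs {b} (similarity {sim:.2f})")
```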
- Towards Collaborative Question Answering: A Preliminary Study [63.91687114660126]
We propose CollabQA, a novel QA task in which several expert agents coordinated by a moderator work together to answer questions that cannot be answered with any single agent alone.
We construct a synthetic dataset from a large knowledge graph that can be distributed to experts.
We show that the problem can be challenging without introducing priors on the collaboration structure, unless experts are perfect and uniform.
arXiv Detail & Related papers (2022-01-24T14:27:00Z)
- Rethinking Search: Making Experts out of Dilettantes [55.90140165205178]
When experiencing an information need, users want to engage with an expert, but often turn to an information retrieval system, such as a search engine.
This paper examines how ideas from classical information retrieval and large pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of expert advice.
arXiv Detail & Related papers (2021-05-05T18:40:00Z)
- Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen [88.30492014778943]
We propose a new task of expertise style transfer and contribute a manually annotated dataset.
Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen's descriptions.
We establish the benchmark performance of five state-of-the-art models for style transfer and text simplification.
arXiv Detail & Related papers (2020-05-02T04:50:20Z)
- Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z)
- Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.