Towards Collaborative Question Answering: A Preliminary Study
- URL: http://arxiv.org/abs/2201.09708v1
- Date: Mon, 24 Jan 2022 14:27:00 GMT
- Title: Towards Collaborative Question Answering: A Preliminary Study
- Authors: Xiangkun Hu, Hang Yan, Qipeng Guo, Xipeng Qiu, Weinan Zhang, Zheng
Zhang
- Abstract summary: We propose CollabQA, a novel QA task in which several expert agents coordinated by a moderator work together to answer questions that cannot be answered with any single agent alone.
We construct a synthetic dataset from a large knowledge graph that can be partitioned among experts.
We show that the problem is challenging without introducing a prior on the collaboration structure, unless the experts are perfect and uniform.
- Score: 63.91687114660126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge and expertise in the real world are often disjointly owned. To solve
a complex question, collaboration among experts is often called for. In this
paper, we propose CollabQA, a novel QA task in which several expert agents
coordinated by a moderator work together to answer questions that cannot be
answered by any single agent alone. We construct a synthetic dataset from a
large knowledge graph that can be partitioned among experts. We define the
process for forming complex questions from ground-truth reasoning paths, neural
network agent models that can learn to solve the task, and evaluation metrics
to check performance. We show that the problem is challenging without
introducing a prior on the collaboration structure, unless the experts are
perfect and uniform. Based on this experience, we elaborate on the extensions
needed to approach collaboration tasks in real-world settings.
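The moderator-plus-experts setup described in the abstract can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's neural implementation: the `Expert` and `Moderator` classes, the triple data, and the first-non-None routing rule are all assumptions made for illustration.

```python
class Expert:
    def __init__(self, triples):
        # triples: (head, relation, tail) facts this expert privately owns
        self.triples = {(h, r): t for h, r, t in triples}

    def answer(self, head, relation):
        # Return the tail entity if this expert knows the fact, else None.
        return self.triples.get((head, relation))


class Moderator:
    def __init__(self, experts):
        self.experts = experts

    def answer(self, head, relations):
        # Follow a multi-hop reasoning path, asking every expert at each
        # hop; no single expert can answer the full chain alone.
        entity = head
        for relation in relations:
            answers = [e.answer(entity, relation) for e in self.experts]
            entity = next((a for a in answers if a is not None), None)
            if entity is None:
                return None
        return entity


# The knowledge is split so the two-hop question spans both experts.
expert_a = Expert([("Paris", "capital_of", "France")])
expert_b = Expert([("France", "continent", "Europe")])
moderator = Moderator([expert_a, expert_b])
print(moderator.answer("Paris", ["capital_of", "continent"]))  # Europe
```

Neither expert alone can resolve the full reasoning path, which is the defining property of the CollabQA task.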
Related papers
- Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning [67.26776442697184]
We introduce Husky, a holistic, open-source language agent that learns to reason over a unified action space.
Husky iterates between two stages: 1) generating the next action to take towards solving a given task and 2) executing the action using expert models.
Our experiments show that Husky outperforms prior language agents across 14 evaluation datasets.
arXiv Detail & Related papers (2024-06-10T17:07:25Z)
- Explaining Expert Search and Team Formation Systems with ExES [8.573682949137085]
Expert search and team formation systems operate on collaboration networks.
Given a keyword query corresponding to the desired skills, these systems identify experts that best match the query.
We propose ExES, a tool designed to explain expert search and team formation systems using factual and counterfactual methods.
arXiv Detail & Related papers (2024-05-21T15:53:35Z)
- Contact Complexity in Customer Service [21.106010378612876]
Customers who reach out for customer service support may face a range of issues that vary in complexity.
To tackle this, a machine learning model that accurately predicts the complexity of customer issues is highly desirable.
We have developed a novel machine learning approach to define contact complexity.
arXiv Detail & Related papers (2024-02-24T00:09:27Z)
- Active Ranking of Experts Based on their Performances in Many Tasks [72.96112117037465]
We consider the problem of ranking n experts based on their performances on d tasks.
We make a monotonicity assumption stating that for each pair of experts, one outperforms the other on all tasks.
arXiv Detail & Related papers (2023-06-05T06:55:39Z)
- Establishing Shared Query Understanding in an Open Multi-Agent System [1.2031796234206138]
We propose a method that allows two agents to develop a shared understanding for the purpose of performing a task that requires cooperation.
Our method focuses on efficiently establishing successful task-oriented communication in an open multi-agent system.
arXiv Detail & Related papers (2023-05-16T11:07:05Z)
- Mod-Squad: Designing Mixture of Experts As Modular Multi-Task Learners [74.92558307689265]
We propose Mod-Squad, a new model that is modularized into groups of experts (a "Squad").
We optimize this matching process during the training of a single model.
Experiments on the Taskonomy dataset with 13 vision tasks and the PASCAL-Context dataset with 5 vision tasks show the superiority of our approach.
arXiv Detail & Related papers (2022-12-15T18:59:52Z)
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break a complex task down into a simpler sub-task, solve it, and repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
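The iterative decompose-then-answer loop can be sketched as below. This is a hedged toy, not the paper's method: the `decompose` and `answer` stubs stand in for the two language-model calls, and the fixed arithmetic chain is invented for illustration.

```python
def successive_prompting(question, next_subquestion, answer_subquestion,
                         max_steps=10):
    # Iteratively decompose `question`: each step proposes one simple
    # sub-question, answers it, and feeds the pair back as context.
    context = []
    for _ in range(max_steps):
        sub_q = next_subquestion(question, context)
        if sub_q is None:  # decomposer signals the chain is complete
            break
        sub_a = answer_subquestion(sub_q, context)
        context.append((sub_q, sub_a))
    return context[-1][1] if context else None


# Toy decomposer/answerer for "(2 + 3) * 4", standing in for LM calls.
def decompose(question, context):
    steps = ["What is 2 + 3?", "What is 5 * 4?"]
    return steps[len(context)] if len(context) < len(steps) else None


def answer(sub_q, context):
    return {"What is 2 + 3?": "5", "What is 5 * 4?": "20"}[sub_q]


print(successive_prompting("What is (2 + 3) * 4?", decompose, answer))  # 20
```

In the paper both roles are played by prompted language models; here they are deterministic functions so the control flow is visible.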
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
- EEML: Ensemble Embedded Meta-learning [5.9514420658483935]
We propose an ensemble embedded meta-learning algorithm (EEML) that explicitly utilizes multi-model-ensemble to organize prior knowledge into diverse specific experts.
We rely on a task-embedding cluster mechanism to deliver diverse tasks to matching experts during training, and to instruct how experts collaborate in the test phase.
The experimental results show that the proposed method clearly outperforms recent state-of-the-art approaches on few-shot learning problems.
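The cluster-based routing idea can be illustrated with a minimal sketch: embed a task, then hand it to the expert whose cluster centroid is nearest. The centroids, embeddings, and Euclidean-distance rule here are toy assumptions for illustration, not EEML's trained components.

```python
import math


def nearest_expert(task_embedding, centroids):
    # Return the index of the expert whose cluster centroid is closest
    # (Euclidean distance) to the task embedding.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(range(len(centroids)),
               key=lambda i: dist(task_embedding, centroids[i]))


centroids = [(0.0, 0.0), (1.0, 1.0)]  # one centroid per expert
print(nearest_expert((0.9, 0.8), centroids))  # 1
```

In the actual method the embedding comes from a learned task encoder and the centroids from clustering training tasks; the routing step itself reduces to a nearest-centroid lookup like this.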
arXiv Detail & Related papers (2022-06-18T12:37:17Z)
- Learning to Solve Complex Tasks by Talking to Agents [39.08818632689814]
Humans often solve complex problems by interacting with existing agents, such as AI assistants, that can solve simpler sub-tasks.
Common NLP benchmarks aim for the development of self-sufficient models for every task.
We propose a new benchmark called CommaQA that contains three kinds of complex reasoning tasks designed to be solved by "talking" to four agents with different capabilities.
arXiv Detail & Related papers (2021-10-16T10:37:34Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving [104.79156980475686]
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue there should be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.