Forming Diverse Teams from Sequentially Arriving People
- URL: http://arxiv.org/abs/2002.10697v1
- Date: Tue, 25 Feb 2020 07:00:07 GMT
- Title: Forming Diverse Teams from Sequentially Arriving People
- Authors: Faez Ahmed, John Dickerson, Mark Fuge
- Abstract summary: Collaborative work often benefits from having teams or organizations with heterogeneous members.
We present a method to form such diverse teams from people arriving sequentially over time.
We show that, in practice, the algorithm leads to large gains in team diversity.
- Score: 9.247294820004146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative work often benefits from having teams or organizations with
heterogeneous members. In this paper, we present a method to form such diverse
teams from people arriving sequentially over time. We define a monotone
submodular objective function that combines the diversity and quality of a team
and propose an algorithm to maximize the objective while satisfying multiple
constraints. This allows us to balance both how diverse the team is and how
well it can perform the task at hand. Using crowd experiments, we show that, in
practice, the algorithm leads to large gains in team diversity. Using
simulations, we show how to quantify the additional cost of forming diverse
teams and how to address the problem of simultaneously maximizing diversity for
several attributes (e.g., country of origin, gender). Our method has
applications in collaborative work, including team formation, the assignment
of workers to teams in crowdsourcing, and the allocation of reviewers to
journal papers arriving sequentially. Our code is publicly accessible for
further research.
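The abstract describes maximizing a monotone submodular objective that combines team quality and diversity as people arrive one at a time, subject to constraints. The sketch below is a minimal illustration of that setup under stated assumptions, not the authors' algorithm: diversity is a coverage count over one categorical attribute, quality is a sum of skill scores, the trade-off weight `lam` is invented for the example, and each arrival is greedily assigned to the team with the largest marginal gain under a capacity constraint.

```python
# Minimal sketch of online team formation with a quality + diversity objective.
# Assumptions (not from the paper): diversity is the number of distinct attribute
# values a team covers, quality is the sum of member skills, and each arrival is
# irrevocably assigned to the team with the largest marginal gain.
from dataclasses import dataclass, field

@dataclass
class Person:
    skill: float      # scalar quality score
    attribute: str    # e.g., country of origin

@dataclass
class Team:
    capacity: int
    members: list = field(default_factory=list)

    def value(self, lam: float) -> float:
        quality = sum(p.skill for p in self.members)
        diversity = len({p.attribute for p in self.members})  # coverage-style, submodular
        return quality + lam * diversity

def assign_online(arrivals, teams, lam: float = 1.0):
    """Greedily place each arriving person on the team with the best marginal gain."""
    for person in arrivals:
        best_team, best_gain = None, 0.0
        for team in teams:
            if len(team.members) >= team.capacity:
                continue
            gain = Team(team.capacity, team.members + [person]).value(lam) - team.value(lam)
            if gain > best_gain:
                best_team, best_gain = team, gain
        if best_team is not None:
            best_team.members.append(person)
    return teams

people = [Person(0.9, "US"), Person(0.7, "IN"), Person(0.8, "US")]
teams = assign_online(people, [Team(2), Team(2)], lam=0.5)
for i, t in enumerate(teams):
    print(f"team {i}: {[p.attribute for p in t.members]} value={t.value(0.5):.2f}")
```

Coverage-style diversity terms like the one above are monotone submodular, which is what makes greedy assignment a reasonable heuristic; the paper's actual objective, constraints, and guarantees may differ.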
Related papers
- Multi-agent Multi-armed Bandits with Stochastic Sharable Arm Capacities [69.34646544774161]
We formulate a new variant of the multi-player multi-armed bandit (MAB) model that captures the arrival of requests to each arm and the policy for allocating requests to players.
The challenge is how to design a distributed learning algorithm such that players select arms according to the optimal arm pulling profile.
We design an iterative distributed algorithm, which guarantees that players can arrive at a consensus on the optimal arm pulling profile in only M rounds.
arXiv Detail & Related papers (2024-08-20T13:57:00Z)
- Governing the Commons: Code Ownership and Code-Clones in Large-Scale Software Development [6.249768559720122]
In software development organizations employing weak or collective ownership, different teams are allowed and expected to autonomously perform changes in various components.
Our objective is to understand how and why different teams introduce technical debt in the form of code clones as they change different components.
arXiv Detail & Related papers (2024-05-24T18:23:51Z)
- Team Formation amidst Conflicts [4.197110761923661]
In this work, we formulate the problem of team formation amidst conflicts.
The goal is to assign individuals to tasks, with given capacities, taking into account individuals' task preferences and the conflicts between them.
Using dependent rounding schemes as our main toolbox, we provide efficient approximation algorithms.
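The summary names dependent rounding schemes as the main toolbox. As a rough illustration of that primitive (this is generic pairwise dependent rounding, not the paper's specific scheme), the sketch below rounds a fractional assignment vector to 0/1 values while preserving each coordinate's expectation and keeping the vector sum unchanged at every pairing step.

```python
import random

def dependent_rounding(x, rng=random.Random(0)):
    """Generic pairwise dependent rounding (illustration only): round a fractional
    vector to 0/1 so that each coordinate keeps its fractional value in expectation
    and the total sum is preserved at every pairing step."""
    x = list(x)
    eps = 1e-9
    frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        alpha = min(1.0 - x[i], x[j])   # how far we can push x[i] up and x[j] down
        beta = min(x[i], 1.0 - x[j])    # how far we can push x[i] down and x[j] up
        if rng.random() < beta / (alpha + beta):
            x[i] += alpha; x[j] -= alpha
        else:
            x[i] -= beta; x[j] += beta
        frac = [k for k, v in enumerate(x) if eps < v < 1 - eps]
    return [int(round(v)) for v in x]

print(dependent_rounding([0.5, 0.5, 0.3, 0.7]))  # sum stays 2, e.g. [1, 0, 0, 1]
```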
arXiv Detail & Related papers (2024-02-29T20:15:13Z)
- Diversity-Based Recruitment in Crowdsensing By Combinatorial Multi-Armed Bandits [6.802315212233411]
This paper explores mobile crowdsensing, which leverages mobile devices and their users for collective sensing tasks under the coordination of a central requester.
The primary challenge here is the variability in the sensing capabilities of individual workers, which are initially unknown and must be progressively learned.
We propose a novel model that enhances task diversity over the rounds by dynamically adjusting the weight of tasks in each round based on their frequency of assignment.
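The stated mechanism is to reweight tasks each round according to how often they have already been assigned, so under-served tasks are favored. The exact update rule is not given in the summary; the inverse-frequency weighting below is a hypothetical stand-in, and the task names are made up.

```python
from collections import Counter

def diversity_weights(counts: Counter, tasks, smoothing: float = 1.0):
    """Hypothetical inverse-frequency weighting: tasks assigned less often so far
    get a larger weight in the next round's selection."""
    return {t: 1.0 / (smoothing + counts[t]) for t in tasks}

tasks = ["noise", "air_quality", "traffic"]   # made-up sensing tasks
counts = Counter()
for rnd in range(3):
    weights = diversity_weights(counts, tasks)
    chosen = sorted(tasks, key=lambda t: weights[t], reverse=True)[:2]  # stand-in for the bandit's pick
    counts.update(chosen)
    print(f"round {rnd}: chose {chosen}")
```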
arXiv Detail & Related papers (2023-12-25T13:54:58Z)
- Diversify Question Generation with Retrieval-Augmented Style Transfer [68.00794669873196]
We propose RAST, a framework for Retrieval-Augmented Style Transfer.
The objective is to utilize the style of diverse templates for question generation.
We develop a novel Reinforcement Learning (RL) based approach that maximizes a weighted combination of diversity reward and consistency reward.
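The RL objective is described only as a weighted combination of a diversity reward and a consistency reward. A minimal stand-in for that combination, with placeholder scores and weight `beta`, might look like:

```python
def combined_reward(diversity: float, consistency: float, beta: float = 0.5) -> float:
    """Weighted mix of a diversity term and a consistency term; beta and both
    scores are placeholders rather than the paper's definitions."""
    return beta * diversity + (1.0 - beta) * consistency

# e.g. a generated question that is stylistically novel and still answer-consistent
print(combined_reward(diversity=0.8, consistency=0.9, beta=0.4))  # ~0.86
```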
arXiv Detail & Related papers (2023-10-23T02:27:31Z)
- Informational Diversity and Affinity Bias in Team Growth Dynamics [6.729250803621849]
We show that the benefits of informational diversity are in tension with affinity bias.
Our results formalize a fundamental limitation of utility-based motivations to drive informational diversity.
arXiv Detail & Related papers (2023-01-28T05:02:40Z)
- Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division [60.232542918414985]
Multi-task learning often suffers from negative transfer, sharing information that should be task-specific.
The proposed approach mitigates this by using proto-policies as modules to divide the tasks into simple sub-behaviours that can be shared.
We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
arXiv Detail & Related papers (2022-03-28T15:53:17Z)
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks, yet these methods rely on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Task Allocation (STA), a mechanism that addresses this issue through a task-allocation approach in which each sample is randomly allocated a subset of tasks.
For further progress, we propose Interleaved Task Allocation (ISTA) to iteratively allocate all …
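As summarized, STA's core step is to give each training sample a random subset of tasks rather than all of them. The snippet below sketches just that allocation step; the task names, subset size `k`, and batch representation are placeholders, not the paper's setup.

```python
import random

def allocate_tasks(batch, tasks, k: int, rng=random.Random(0)):
    """Randomly give each sample a subset of k tasks (STA-style allocation);
    the per-sample loss would then be summed only over its allocated tasks."""
    return {sample_id: rng.sample(tasks, k) for sample_id in batch}

tasks = ["detection", "segmentation", "depth"]       # placeholder task names
allocation = allocate_tasks(batch=[0, 1, 2, 3], tasks=tasks, k=2)
for sample_id, assigned in allocation.items():
    print(f"sample {sample_id}: train on {assigned}")
```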
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
- Crowdsourcing with Meta-Workers: A New Way to Save the Budget [50.04836252733443]
We introduce the concept of a meta-worker, a machine annotator trained by meta learning for types of tasks that are well-fit for AI.
Unlike regular crowd workers, meta-workers can be reliable, stable, and more importantly, tireless and free.
arXiv Detail & Related papers (2021-11-07T12:40:29Z)
- Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies in Extensive-Form Zero-Sum Games [123.76716667704625]
We focus on the problem of finding an optimal strategy for a team of two players that faces an opponent in an imperfect-information zero-sum extensive-form game.
In that setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game.
We provide an algorithm that computes such an optimal distribution by only using profiles where only one of the team members gets to randomize in each profile.
arXiv Detail & Related papers (2020-09-21T17:51:57Z)
- TAIP: an anytime algorithm for allocating student teams to internship programs [0.0]
We focus on the problem of matching teams with tasks within the context of education, and specifically in the context of forming teams of students and allocating them to internship programs.
We first formalize the Team Allocation for Internship Programs problem and show the computational hardness of solving it optimally.
We propose TAIP, an anytime algorithm that generates an initial team allocation and then iteratively improves it.
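The summary describes TAIP as building an initial allocation and then improving it iteratively, i.e. an anytime scheme that can return its best solution so far whenever it is stopped. The loop below is a generic anytime local-search skeleton in that spirit; the scoring function and neighbor move are toy placeholders, not the paper's operators.

```python
import random, time
from collections import Counter

def anytime_improve(initial, score, neighbor, time_budget_s: float = 0.1, rng=random.Random(0)):
    """Generic anytime loop: keep the best allocation seen so far and return it
    whenever the time budget runs out."""
    best, best_score = initial, score(initial)
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        candidate = neighbor(best, rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy usage: an allocation is a list of team labels, the score prefers balanced
# teams, and the neighbor move reassigns one student to another team.
def score(alloc):
    return -max(Counter(alloc).values())   # smaller largest team -> higher score

def neighbor(alloc, rng):
    alloc = list(alloc)
    alloc[rng.randrange(len(alloc))] = rng.choice(["A", "B", "C"])
    return alloc

print(anytime_improve(["A"] * 6, score, neighbor))
```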
arXiv Detail & Related papers (2020-05-19T09:50:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.