Auxiliary-task Based Deep Reinforcement Learning for Participant
Selection Problem in Mobile Crowdsourcing
- URL: http://arxiv.org/abs/2008.11087v2
- Date: Wed, 26 Aug 2020 00:55:05 GMT
- Title: Auxiliary-task Based Deep Reinforcement Learning for Participant
Selection Problem in Mobile Crowdsourcing
- Authors: Wei Shen, Xiaonan He, Chuheng Zhang, Qiang Ni, Wanchun Dou, Yan Wang
- Abstract summary: In mobile crowdsourcing (MCS), the platform selects participants to complete location-aware tasks from the recruiters, aiming to achieve multiple goals.
Different MCS systems pursue different goals, and goals may conflict even within a single MCS system.
It is therefore crucial to design a participant selection algorithm that applies across different MCS systems and achieves multiple goals.
- Score: 30.124365580284888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In mobile crowdsourcing (MCS), the platform selects participants to complete
location-aware tasks from the recruiters aiming to achieve multiple goals
(e.g., profit maximization, energy efficiency, and fairness). However,
different MCS systems have different goals and there are possibly conflicting
goals even in one MCS system. Therefore, it is crucial to design a participant
selection algorithm that applies to different MCS systems to achieve multiple
goals. To deal with this issue, we formulate the participant selection problem
as a reinforcement learning problem and propose to solve it with a novel
method, which we call auxiliary-task based deep reinforcement learning (ADRL).
We use transformers to extract representations from the context of the MCS
system and a pointer network to deal with the combinatorial optimization
problem. To improve the sample efficiency, we adopt an auxiliary-task training
process that trains the network to predict the imminent tasks from the
recruiters, which facilitates the embedding learning of the deep learning
model. Additionally, we release a simulated environment on a specific MCS task,
the ride-sharing task, and conduct extensive performance evaluations in this
environment. The experimental results demonstrate that ADRL outperforms other
well-recognized baselines and achieves better sample efficiency in various
settings.
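The selection mechanism described in the abstract (a pointer network choosing participants over a learned context representation) can be illustrated with a heavily simplified sketch: greedy attention-style scoring over candidate embeddings, selecting without replacement. This is an illustrative approximation only, not the authors' ADRL model; the dot-product scoring, the `pointer_select` name, and the query-update rule are all assumptions made for this example.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pointer_select(context, candidates, k):
    """Greedy pointer-style selection: at each step, score the remaining
    candidate embeddings against a context query, take the most probable
    index, and remove it from the pool (selection without replacement)."""
    selected = []
    remaining = list(range(len(candidates)))
    query = list(context)
    for _ in range(k):
        scores = [dot(candidates[i], query) for i in remaining]
        probs = softmax(scores)
        best = remaining[max(range(len(probs)), key=probs.__getitem__)]
        selected.append(best)
        remaining.remove(best)
        # Illustrative query update: fold the chosen embedding into the context
        # so later picks are conditioned on earlier ones.
        query = [0.5 * (q + c) for q, c in zip(query, candidates[best])]
    return selected
```

For example, with a context vector `[1.0, 0.0]` and candidate embeddings `[[2.0, 0.0], [1.0, 0.0], [0.0, 1.0]]`, selecting `k=2` returns indices `[0, 1]`: the candidates most aligned with the query.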
Related papers
- Sample Efficient Myopic Exploration Through Multitask Reinforcement
Learning with Diverse Tasks [53.44714413181162]
This paper shows that when an agent is trained on a sufficiently diverse set of tasks, a generic policy-sharing algorithm with myopic exploration design can be sample-efficient.
To the best of our knowledge, this is the first theoretical demonstration of the "exploration benefits" of MTRL.
arXiv Detail & Related papers (2024-03-03T22:57:44Z)
- Decentralized Online Learning in Task Assignment Games for Mobile
Crowdsensing [55.07662765269297]
A mobile crowdsensing platform (MCSP) sequentially publishes sensing tasks to the available mobile units (MUs) that signal their willingness to participate in a task by sending sensing offers back to the MCSP.
A stable task assignment must address two challenges: the MCSP's and MUs' conflicting goals, and the uncertainty about the MUs' required efforts and preferences.
To overcome these challenges, a novel decentralized approach combining matching theory and online learning, called collision-avoidance multi-armed bandit with strategic free sensing (CA-MAB-SFS), is proposed.
arXiv Detail & Related papers (2023-09-19T13:07:15Z)
- Task Selection and Assignment for Multi-modal Multi-task Dialogue Act
Classification with Non-stationary Multi-armed Bandits [11.682678945754837]
Multi-task learning (MTL) aims to improve the performance of a primary task by jointly learning with related auxiliary tasks.
Previous studies suggest that selecting auxiliary tasks at random may not be helpful, and can even be harmful to performance.
This paper proposes a method for selecting and assigning tasks based on non-stationary multi-armed bandits.
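The non-stationary bandit idea summarized above can be sketched with a standard baseline: a sliding-window UCB policy that scores each task (arm) only on its recent rewards, so the policy can track usefulness that drifts over time. This is a generic illustration, not the method proposed in the paper; the class name, window size, and exploration constant are assumptions for this example.

```python
import math

class SlidingWindowUCB:
    """Sliding-window UCB1: only the last `window` plays count toward each
    arm's estimate, so the policy adapts to non-stationary reward drift."""

    def __init__(self, n_arms, window=50, c=1.0):
        self.n_arms = n_arms
        self.window = window
        self.c = c
        self.history = []  # (arm, reward) pairs, most recent last

    def select(self):
        recent = self.history[-self.window:]
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, reward in recent:
            counts[arm] += 1
            sums[arm] += reward
        t = max(1, len(recent))
        best, best_ucb = 0, -float("inf")
        for a in range(self.n_arms):
            if counts[a] == 0:
                return a  # play every untried arm at least once
            # Empirical mean plus an exploration bonus over the window.
            ucb = sums[a] / counts[a] + self.c * math.sqrt(math.log(t) / counts[a])
            if ucb > best_ucb:
                best, best_ucb = a, ucb
        return best

    def update(self, arm, reward):
        self.history.append((arm, reward))
```

With two arms where arm 0 always pays 1.0 and arm 1 pays 0.0, the policy quickly concentrates its pulls on arm 0 while still probing arm 1 occasionally for exploration.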
arXiv Detail & Related papers (2023-09-18T14:51:51Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- DL-DRL: A double-level deep reinforcement learning approach for
large-scale task scheduling of multi-UAV [65.07776277630228]
We propose a double-level deep reinforcement learning (DL-DRL) approach based on a divide and conquer framework (DCF).
Particularly, we design an encoder-decoder structured policy network in our upper-level DRL model to allocate the tasks to different UAVs.
We also exploit another attention based policy network in our lower-level DRL model to construct the route for each UAV, with the objective to maximize the number of executed tasks.
arXiv Detail & Related papers (2022-08-04T04:35:53Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
We propose a new benchmark suite aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, possible coexistence of multiple learning tasks with different datasets may arise.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- Efficient Reinforcement Learning in Resource Allocation Problems Through
Permutation Invariant Multi-task Learning [6.247939901619901]
We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning.
We provide a theoretical performance bound for the gain in sample efficiency under this setting.
This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy.
arXiv Detail & Related papers (2021-02-18T14:13:02Z)
- Dynamic Task Weighting Methods for Multi-task Networks in Autonomous
Driving Systems [10.625400639764734]
Deep multi-task networks are of particular interest for autonomous driving systems.
We propose a novel method combining evolutionary meta-learning and task-based selective backpropagation.
Our method outperforms state-of-the-art methods by a significant margin on a two-task application.
arXiv Detail & Related papers (2020-01-07T18:54:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.