A Reinforcement Learning-assisted Genetic Programming Algorithm for Team
Formation Problem Considering Person-Job Matching
- URL: http://arxiv.org/abs/2304.04022v1
- Date: Sat, 8 Apr 2023 14:32:12 GMT
- Title: A Reinforcement Learning-assisted Genetic Programming Algorithm for Team
Formation Problem Considering Person-Job Matching
- Authors: Yangyang Guo, Hao Wang, Lei He, Witold Pedrycz, P. N. Suganthan,
Yanjie Song
- Abstract summary: A reinforcement learning-assisted genetic programming algorithm (RL-GP) is proposed to enhance the quality of solutions.
The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams.
- Score: 70.28786574064694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An efficient team is essential for the company to successfully complete new
projects. To solve the team formation problem considering person-job matching
(TFP-PJM), a 0-1 integer programming model is constructed, which considers both
person-job matching and team members' willingness to communicate on team
efficiency, with the person-job matching score calculated using intuitionistic
fuzzy numbers. Then, a reinforcement learning-assisted genetic programming
algorithm (RL-GP) is proposed to enhance the quality of solutions. The RL-GP
adopts ensemble population strategies. Before the population evolves at
each generation, the agent selects one of four population search modes
according to the information obtained, achieving a sound balance of
exploration and exploitation. In addition, surrogate models are used in the
algorithm to evaluate the formation plans generated by individuals, which
speeds up the algorithm learning process. Afterward, a series of comparison
experiments are conducted to verify the overall performance of RL-GP and the
effectiveness of the improved strategies within the algorithm. The
hyper-heuristic rules obtained through efficient learning can be utilized as
decision-making aids when forming project teams. This study reveals the
advantages of reinforcement learning methods, ensemble strategies, and the
surrogate model applied to the GP framework. The diversity and intelligent
selection of search patterns, along with fast adaptation evaluation, are
distinct features that enable RL-GP to be deployed in real-world enterprise
environments.
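The mode-selection loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration: the four mode names, the reward signal, and the toy surrogate fitness are assumptions for demonstration, not the paper's actual operators or model.

```python
import random

# Hypothetical sketch: before each generation, an epsilon-greedy Q-learning
# agent picks one of four population search modes, and a cheap surrogate
# scores the generated formation plans. All names here are illustrative.
MODES = ["global_explore", "local_exploit", "hybrid", "restart"]

class ModeSelector:
    def __init__(self, epsilon=0.1, alpha=0.5, seed=0):
        self.q = {m: 0.0 for m in MODES}   # estimated value of each mode
        self.epsilon = epsilon             # exploration rate
        self.alpha = alpha                 # learning rate
        self.rng = random.Random(seed)

    def select(self):
        # epsilon-greedy: usually pick the best-valued mode, sometimes explore
        if self.rng.random() < self.epsilon:
            return self.rng.choice(MODES)
        return max(MODES, key=lambda m: self.q[m])

    def update(self, mode, reward):
        # incremental value update from the observed fitness improvement
        self.q[mode] += self.alpha * (reward - self.q[mode])

def surrogate_fitness(plan):
    # stand-in for the surrogate model: a cheap proxy score for a plan
    return sum(plan) / len(plan)

selector = ModeSelector()
best = 0.0
for generation in range(50):
    mode = selector.select()
    # toy "population search": a random candidate plan whose quality loosely
    # depends on the chosen mode (purely illustrative)
    bias = 0.2 if mode == "local_exploit" else 0.0
    plan = [min(1.0, selector.rng.random() + bias) for _ in range(5)]
    score = surrogate_fitness(plan)
    selector.update(mode, reward=score - best)
    best = max(best, score)
```

The point of the sketch is the control flow: mode selection happens once per generation, and the reward fed back to the agent is derived from surrogate evaluations rather than full fitness computations.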
Related papers
- Beyond Training: Optimizing Reinforcement Learning Based Job Shop Scheduling Through Adaptive Action Sampling [10.931466852026663]
We investigate the optimal use of trained deep reinforcement learning (DRL) agents during inference.
Our work is based on the hypothesis that, similar to search algorithms, the utilization of trained DRL agents should be dependent on the acceptable computational budget.
We propose an algorithm for obtaining the optimal parameterization for such a given number of solutions and any given trained agent.
arXiv Detail & Related papers (2024-06-11T14:59:18Z)
- Sample Efficient Reinforcement Learning by Automatically Learning to Compose Subtasks [3.1594865504808944]
We propose an RL algorithm that automatically structures the reward function for sample efficiency, given a set of labels that signify subtasks.
We evaluate our algorithm in a variety of sparse-reward environments.
arXiv Detail & Related papers (2024-01-25T15:06:40Z)
- Improving Generalization of Alignment with Human Preferences through Group Invariant Learning [56.19242260613749]
Reinforcement Learning from Human Feedback (RLHF) enables the generation of responses more aligned with human preferences.
Previous work shows that Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and overlooks challenging samples.
We propose a novel approach that can learn a consistent policy via RL across various data groups or domains.
arXiv Detail & Related papers (2023-10-18T13:54:15Z)
- Algorithmic Collective Action in Machine Learning [35.91866986642348]
We study algorithmic collective action on digital platforms that deploy machine learning algorithms.
We propose a simple theoretical model of a collective interacting with a firm's learning algorithm.
We conduct systematic experiments on a skill classification task involving tens of thousands of resumes from a gig platform for freelancers.
arXiv Detail & Related papers (2023-02-08T18:55:49Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Adaptive Group Collaborative Artificial Bee Colony Algorithm [12.843155301033512]
The artificial bee colony (ABC) algorithm has been shown to be competitive.
However, it is poor at balancing global search across the whole solution space (known as exploration) with quick search in local solution spaces (exploitation).
To improve the performance of ABC, an adaptive group collaborative ABC (AgABC) algorithm is introduced.
arXiv Detail & Related papers (2021-12-02T13:33:37Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms in which multiple agents can coordinate over a wireless network is a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- An Efficient Application of Neuroevolution for Competitive Multiagent Learning [0.0]
NEAT is a popular evolutionary strategy used to obtain the best performing neural network architecture.
This paper utilizes the NEAT algorithm to achieve competitive multiagent learning on a modified pong game environment.
arXiv Detail & Related papers (2021-05-23T10:34:48Z)
- A Two-stage Framework and Reinforcement Learning-based Optimization Algorithms for Complex Scheduling Problems [54.61091936472494]
We develop a two-stage framework, in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined together.
The scheduling problem is solved in two stages, including a finite Markov decision process (MDP) and a mixed-integer programming process, respectively.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
arXiv Detail & Related papers (2021-03-10T03:16:12Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.