A Knowledge-driven Memetic Algorithm for the Energy-efficient Distributed Homogeneous Flow Shop Scheduling Problem
- URL: http://arxiv.org/abs/2404.18953v1
- Date: Sun, 28 Apr 2024 00:52:44 GMT
- Title: A Knowledge-driven Memetic Algorithm for the Energy-efficient Distributed Homogeneous Flow Shop Scheduling Problem
- Authors: Yunbao Xu, Xuemei Jiang, Jun Li, Lining Xing, Yanjie Song
- Abstract summary: A knowledge-driven memetic algorithm (KDMA) is proposed to address the energy-efficient distributed homogeneous flow shop scheduling problem (DHFSSP).
It is evident that KDMA outperforms many state-of-the-art algorithms across various evaluation aspects.
- Score: 3.8628109670599
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The reduction of carbon emissions in the manufacturing industry holds significant importance in achieving the national "double carbon" target. Ensuring energy efficiency is a crucial factor to be incorporated into future generation manufacturing systems. In this study, energy consumption is considered in the distributed homogeneous flow shop scheduling problem (DHFSSP). A knowledge-driven memetic algorithm (KDMA) is proposed to address the energy-efficient DHFSSP (EEDHFSSP). KDMA incorporates a collaborative initialization strategy to generate high-quality initial populations. Furthermore, several algorithmic improvements including update strategy, local search strategy, and carbon reduction strategy are employed to improve the search performance of the algorithm. The effectiveness of KDMA in solving EEDHFSSP is verified through extensive simulation experiments. It is evident that KDMA outperforms many state-of-the-art algorithms across various evaluation aspects.
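The abstract outlines the classic memetic template: an evolutionary population search combined with a per-offspring local search. As a hedged illustration only, the sketch below shows that template on a toy permutation flow shop. The function names, the plain makespan objective, and all parameters are assumptions for illustration, not the paper's actual KDMA, whose collaborative initialization, update, and carbon-reduction strategies are not detailed in this listing.

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine of a
    permutation flow shop; a toy stand-in for the paper's
    energy-aware objectives."""
    machines = len(proc[0])
    comp = [0.0] * machines
    for job in perm:
        comp[0] += proc[job][0]
        for k in range(1, machines):
            comp[k] = max(comp[k], comp[k - 1]) + proc[job][k]
    return comp[-1]

def order_crossover(p1, p2, rng):
    """Classic OX crossover: copy a random slice from one parent and
    fill the remaining positions in the order of the other parent."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in p1[a:b])
    return [g if g is not None else next(fill) for g in child]

def insertion_local_search(perm, proc):
    """Re-insert each job at every other position, keeping improvements:
    the 'local search' half of a memetic algorithm."""
    best, best_val = list(perm), makespan(perm, proc)
    for i in range(len(best)):
        for k in range(len(best)):
            if k == i:
                continue
            cand = list(best)
            cand.insert(k, cand.pop(i))
            val = makespan(cand, proc)
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

def memetic_sketch(proc, pop_size=12, gens=25, seed=1):
    """Minimal memetic loop: selection, crossover, mutation, then
    local search on each offspring before it re-enters the population."""
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    pop.sort(key=lambda p: makespan(p, proc))
    for _ in range(gens):
        p1, p2 = rng.sample(pop[: pop_size // 2], 2)  # truncation selection
        child = order_crossover(p1, p2, rng)
        if rng.random() < 0.2:  # swap mutation
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
        child, _ = insertion_local_search(child, proc)
        pop[-1] = child  # replace the current worst
        pop.sort(key=lambda p: makespan(p, proc))
    return pop[0], makespan(pop[0], proc)
```

A real energy-efficient variant would replace `makespan` with a bi-criteria evaluation (completion time plus energy or carbon cost) and distribute jobs across factories, but the population-plus-local-search skeleton is the same.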
Related papers
- Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm for Hybrid Flow Shop Scheduling Problems with Multiple Parallel Batch Processing Stages [5.851739146497829]
This study generalizes the problem model, in which users can arbitrarily set certain stages as parallel batch processing stages.
An Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm (AMOEA/D) is designed to simultaneously optimize both makespan and Total Energy Consumption.
The experimental results show that the AMOEA/D is superior to the comparison algorithms in solving the PBHFSP.
arXiv Detail & Related papers (2024-09-27T08:05:56Z)
- Faster Optimal Coalition Structure Generation via Offline Coalition Selection and Graph-Based Search [61.08720171136229]
We present a novel algorithm, SMART, for the problem based on a hybridization of three innovative techniques.
Two of these techniques are based on dynamic programming, where we show a powerful connection between the coalitions selected for evaluation and the performance of the algorithms.
Our techniques bring a new way of approaching the problem and a new level of precision to the field.
arXiv Detail & Related papers (2024-07-22T23:24:03Z)
- High Efficiency Inference Accelerating Algorithm for NOMA-based Mobile Edge Computing [23.88527790721402]
Splitting the inference model among the device, edge server, and cloud can greatly improve the performance of edge intelligence (EI).
NOMA, one of the key supporting technologies of B5G/6G, can achieve massive connections and high spectrum efficiency.
We propose an effective communication and computing resource allocation algorithm to accelerate model inference at the edge.
arXiv Detail & Related papers (2023-12-26T02:05:52Z)
- Deep Reinforcement Learning for Artificial Upwelling Energy Management [9.212936156042328]
We propose a novel energy management approach that utilizes a deep reinforcement learning (DRL) algorithm to develop efficient strategies for operating artificial upwelling (AU) systems.
Specifically, we formulate the problem of maximizing the energy efficiency of AUS as a Markov decision process and integrate the quantile network from distributional reinforcement learning (QR-DQN) with the deep dueling network to solve it.
Our findings suggest that a DRL-based approach offers a promising way to improve the energy efficiency of AUS and enhance the sustainability of seaweed cultivation and carbon sequestration in the ocean.
arXiv Detail & Related papers (2023-08-20T08:16:36Z)
- A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Active RIS-aided EH-NOMA Networks: A Deep Reinforcement Learning Approach [66.53364438507208]
An active reconfigurable intelligent surface (RIS)-aided multi-user downlink communication system is investigated.
Non-orthogonal multiple access (NOMA) is employed to improve spectral efficiency, and the active RIS is powered by energy harvesting (EH).
An advanced LSTM-based algorithm is developed to predict users' dynamic communication state.
A DDPG-based algorithm is proposed to jointly control the amplification matrix and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2023-04-11T13:16:28Z)
- Joint Energy Dispatch and Unit Commitment in Microgrids Based on Deep Reinforcement Learning [6.708717040312532]
In this paper, deep reinforcement learning (DRL) is applied to learn an optimal policy for making joint energy dispatch (ED) and unit commitment (UC) decisions in an isolated microgrid.
We propose a DRL algorithm, i.e., the hybrid action finite-horizon DDPG (HAFH-DDPG), that seamlessly integrates two classical DRL algorithms.
A diesel generator (DG) selection strategy is presented to support a simplified action space for reducing the computation complexity of this algorithm.
arXiv Detail & Related papers (2022-06-03T16:22:03Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.