MARLIM: Multi-Agent Reinforcement Learning for Inventory Management
- URL: http://arxiv.org/abs/2308.01649v1
- Date: Thu, 3 Aug 2023 09:31:45 GMT
- Title: MARLIM: Multi-Agent Reinforcement Learning for Inventory Management
- Authors: Rémi Leluc, Elie Kadoche, Antoine Bertoncello, Sébastien Gourvénec
- Abstract summary: This paper presents a novel reinforcement learning framework called MARLIM to address the inventory management problem.
Within this context, controllers are developed through single or multiple agents in a cooperative setting.
Numerical experiments on real data demonstrate the benefits of reinforcement learning methods over traditional baselines.
- Score: 1.1470070927586016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maintaining a balance between the supply and demand of products by optimizing
replenishment decisions is one of the most important challenges in the supply
chain industry. This paper presents a novel reinforcement learning framework
called MARLIM to address the inventory management problem for a single-echelon
multi-product supply chain with stochastic demands and lead times. Within this
context, controllers are developed through single or multiple agents in a
cooperative setting. Numerical experiments on real data demonstrate the
benefits of reinforcement learning methods over traditional baselines.
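As a rough illustration of the setting the abstract describes, a toy single-echelon, multi-product environment with stochastic demands and lead times can be sketched as below, paired with a classical base-stock baseline of the kind RL methods are compared against. All distributions, cost parameters, and the reward shape are assumptions for illustration, not the ones used in the MARLIM paper.

```python
import random

class InventoryEnv:
    """Toy single-echelon, multi-product inventory environment.

    Illustrative sketch only: demand/lead-time distributions and cost
    weights below are assumptions, not the paper's actual setup.
    """

    def __init__(self, n_products=3, capacity=100, seed=0):
        self.rng = random.Random(seed)
        self.n = n_products
        self.capacity = capacity
        self.reset()

    def reset(self):
        self.stock = [20] * self.n                   # on-hand inventory per product
        self.pipeline = [[] for _ in range(self.n)]  # (arrival_time, qty) orders in transit
        self.t = 0
        return list(self.stock)

    def step(self, order_qty):
        """order_qty: replenishment quantity per product (the agent's action)."""
        reward = 0.0
        for i in range(self.n):
            # Place a new order with a stochastic lead time of 1-3 periods.
            lead = self.rng.randint(1, 3)
            self.pipeline[i].append((self.t + lead, order_qty[i]))
            # Receive orders whose lead time has elapsed.
            arrived = sum(q for (due, q) in self.pipeline[i] if due <= self.t)
            self.pipeline[i] = [(due, q) for (due, q) in self.pipeline[i] if due > self.t]
            self.stock[i] = min(self.stock[i] + arrived, self.capacity)
            # Stochastic demand: sell what is on hand, penalise holding and stock-outs.
            demand = self.rng.randint(0, 10)
            sales = min(self.stock[i], demand)
            self.stock[i] -= sales
            reward += 1.0 * sales - 0.1 * self.stock[i] - 2.0 * (demand - sales)
        self.t += 1
        return list(self.stock), reward

def base_stock_policy(state, target=30):
    """Classical order-up-to baseline: replenish each product to a fixed target."""
    return [max(0, target - s) for s in state]

env = InventoryEnv()
state = env.reset()
total = 0.0
for _ in range(50):
    state, r = env.step(base_stock_policy(state))
    total += r
```

An RL controller would replace `base_stock_policy` with a learned mapping from the inventory state (and pipeline) to order quantities, either as one agent over all products or as cooperating per-product agents.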
Related papers
- Enhancing Supply Chain Visibility with Knowledge Graphs and Large Language Models [49.898152180805454]
This paper presents a novel framework leveraging Knowledge Graphs (KGs) and Large Language Models (LLMs) to enhance supply chain visibility.
Our zero-shot, LLM-driven approach automates the extraction of supply chain information from diverse public sources.
With high accuracy in NER and RE tasks, it provides an effective tool for understanding complex, multi-tiered supply networks.
arXiv Detail & Related papers (2024-08-05T17:11:29Z)
- InvAgent: A Large Language Model based Multi-Agent System for Inventory Management in Supply Chains [0.0]
This study introduces a novel approach using large language models (LLMs) to manage multi-agent inventory systems.
Our model, InvAgent, enhances resilience and improves efficiency across the supply chain network.
arXiv Detail & Related papers (2024-07-16T04:55:17Z)
- Identifying contributors to supply chain outcomes in a multi-echelon setting: a decentralised approach [47.00450933765504]
We propose the use of explainable artificial intelligence for decentralised computing of estimated contributions to a metric of interest.
This approach mitigates the need to convince supply chain actors to share data, as all computations occur in a decentralised manner.
Results demonstrate the effectiveness of our approach in detecting the source of quality variations compared to a centralised approach.
arXiv Detail & Related papers (2023-07-22T20:03:16Z)
- Multi-Agent Reinforcement Learning with Shared Resources for Inventory Management [62.23979094308932]
In our setting, the constraint on the shared resources (such as the inventory capacity) couples the otherwise independent control for each SKU.
We formulate the problem with this structure as a Shared-Resource Stochastic Game (SRSG) and propose an efficient algorithm called Context-aware Decentralized PPO (CD-PPO).
Through extensive experiments, we demonstrate that CD-PPO can accelerate the learning procedure compared with standard MARL algorithms.
arXiv Detail & Related papers (2022-12-15T09:35:54Z)
- Concepts and Algorithms for Agent-based Decentralized and Integrated Scheduling of Production and Auxiliary Processes [78.120734120667]
This paper describes an agent-based decentralized and integrated scheduling approach.
Part of the requirements is to develop a linearly scaling communication architecture.
The approach is explained using an example based on industrial requirements.
arXiv Detail & Related papers (2022-05-06T18:44:29Z)
- Comparing Deep Reinforcement Learning Algorithms in Two-Echelon Supply Chains [1.4685355149711299]
We analyze and compare the performance of state-of-the-art deep reinforcement learning algorithms for solving the supply chain inventory management problem.
This study provides detailed insight into the design and development of an open-source software library that provides a customizable environment for solving the supply chain inventory management problem.
arXiv Detail & Related papers (2022-04-20T16:33:01Z)
- Creating Training Sets via Weak Indirect Supervision [66.77795318313372]
Weak Supervision (WS) frameworks synthesize training labels from multiple potentially noisy supervision sources.
We formulate Weak Indirect Supervision (WIS), a new research problem for automatically synthesizing training labels.
We develop a probabilistic modeling approach, PLRM, which uses user-provided label relations to model and leverage indirect supervision sources.
arXiv Detail & Related papers (2021-10-07T14:09:35Z)
- UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit different tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, the Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent decision process more explainable.
arXiv Detail & Related papers (2021-01-20T07:24:24Z)
- Reinforcement Learning for Multi-Product Multi-Node Inventory Management in Supply Chains [17.260459603456745]
This paper describes the application of reinforcement learning (RL) to multi-product inventory management in supply chains.
Experiments show that the proposed approach is able to handle a multi-objective reward comprised of maximising product sales and minimising wastage of perishable products.
arXiv Detail & Related papers (2020-06-07T04:02:59Z)
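The shared-resource coupling described in the "Shared Resources for Inventory Management" entry above — a capacity constraint tying together otherwise independent per-SKU controllers — can be sketched as a rationing step applied to the agents' joint order. Proportional rationing is an illustrative choice here, not the CD-PPO algorithm itself.

```python
def allocate_shared_capacity(orders, capacity):
    """Couple independent per-SKU order decisions through a shared
    inventory-capacity constraint (in the spirit of the SRSG setting).

    Illustrative proportional rationing: if the joint order exceeds
    capacity, scale every SKU's order down by the same factor.
    """
    total = sum(orders)
    if total <= capacity:
        return list(orders)
    scale = capacity / total
    scaled = [int(q * scale) for q in orders]
    # Hand out units lost to integer truncation, largest orders first.
    remainder = capacity - sum(scaled)
    by_size = sorted(range(len(orders)), key=lambda i: orders[i], reverse=True)
    for i in by_size[:remainder]:
        scaled[i] += 1
    return scaled
```

For example, `allocate_shared_capacity([10, 20, 30], 30)` halves each order to `[5, 10, 15]`, so no single SKU's agent can monopolise the shared capacity.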
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.