Agentic LLMs in the Supply Chain: Towards Autonomous Multi-Agent Consensus-Seeking
- URL: http://arxiv.org/abs/2411.10184v1
- Date: Fri, 15 Nov 2024 13:33:10 GMT
- Title: Agentic LLMs in the Supply Chain: Towards Autonomous Multi-Agent Consensus-Seeking
- Authors: Valeria Jannelli, Stefan Schoepf, Matthias Bickel, Torbjørn Netland, Alexandra Brintrup
- Abstract summary: Large Language Models (LLMs) can automate consensus-seeking in supply chain management (SCM).
Traditional SCM relies on human consensus in decision-making to avoid emergent problems like the bullwhip effect.
Recent advances in Generative AI, particularly LLMs, show promise in overcoming these barriers.
- Abstract: This paper explores how Large Language Models (LLMs) can automate consensus-seeking in supply chain management (SCM), where frequent decisions on problems such as inventory levels and delivery times require coordination among companies. Traditional SCM relies on human consensus in decision-making to avoid emergent problems like the bullwhip effect. Some routine consensus processes, especially those that are time-intensive and costly, can be automated. Existing solutions for automated coordination have faced challenges due to high entry barriers locking out SMEs, limited capabilities, and limited adaptability in complex scenarios. However, recent advances in Generative AI, particularly LLMs, show promise in overcoming these barriers. LLMs, trained on vast datasets, can negotiate, reason, and plan, facilitating near-human-level consensus at scale with minimal entry barriers. In this work, we identify key limitations in existing approaches and propose autonomous LLM agents to address these gaps. We introduce a series of novel, supply chain-specific consensus-seeking frameworks tailored for LLM agents and validate the effectiveness of our approach through a case study in inventory management. To accelerate progress within the SCM community, we open-source our code, providing a foundation for further advancements in LLM-powered autonomous supply chain solutions.
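The bullwhip effect mentioned in the abstract can be illustrated with a minimal, self-contained simulation (not from the paper's open-sourced code): each upstream stage treats the orders it receives as demand and adds a reactive adjustment to the latest change, so a small demand spike at the retailer produces ever-larger order swings upstream. The model, the `reaction` coefficient, and the demand series are illustrative assumptions.

```python
# Minimal sketch of the bullwhip effect: order variability amplifies
# as each stage overreacts to changes in the orders it observes.

def propagate_orders(demand, stages=3, reaction=0.5):
    """Return the order stream seen at each supply chain stage, retailer first.

    Each stage orders what it observed plus `reaction` times the latest
    change, a crude stand-in for forecast-driven overordering.
    """
    streams = [list(demand)]
    for _ in range(stages):
        seen = streams[-1]
        orders = [seen[0]]  # first period: pass the order through unchanged
        for t in range(1, len(seen)):
            change = seen[t] - seen[t - 1]
            orders.append(max(0.0, seen[t] + reaction * change))
        streams.append(orders)
    return streams

def variance(xs):
    """Population variance, to measure order volatility at each stage."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

if __name__ == "__main__":
    # Constant demand of 10 units with a single one-period spike to 14.
    demand = [10.0] * 5 + [14.0] + [10.0] * 5
    for i, stream in enumerate(propagate_orders(demand)):
        print(f"stage {i}: order variance {variance(stream):.2f}")
```

Running this shows the variance of orders growing monotonically upstream even though end-customer demand barely changed, which is exactly the coordination failure that human (or, in this paper, LLM-agent) consensus-seeking is meant to dampen.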
Related papers
- MALMM: Multi-Agent Large Language Models for Zero-Shot Robotics Manipulation [52.739500459903724]
Large Language Models (LLMs) have demonstrated remarkable planning abilities across various domains, including robotics manipulation and navigation.
We propose a novel multi-agent LLM framework that distributes high-level planning and low-level control code generation across specialized LLM agents.
We evaluate our approach on nine RLBench tasks, including long-horizon tasks, and demonstrate its ability to solve robotics manipulation in a zero-shot setting.
arXiv Detail & Related papers (2024-11-26T17:53:44Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios remain unsatisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - InvAgent: A Large Language Model based Multi-Agent System for Inventory Management in Supply Chains [0.0]
This study introduces a novel approach using large language models (LLMs) to manage multi-agent inventory systems.
Our model, InvAgent, enhances resilience and improves efficiency across the supply chain network.
arXiv Detail & Related papers (2024-07-16T04:55:17Z) - Challenges Faced by Large Language Models in Solving Multi-Agent Flocking [17.081075782529098]
Flocking is a behavior where multiple agents in a system attempt to stay close to each other while avoiding collision and maintaining a desired formation.
Recently, large language models (LLMs) have displayed an impressive ability to solve various collaboration tasks as individual decision-makers.
This paper discusses the challenges LLMs face in multi-agent flocking and suggests areas for future improvement.
arXiv Detail & Related papers (2024-04-06T22:34:07Z) - Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach [28.477463632107558]
We develop a modular framework called LLaMAC to address hallucination in Large Language Models and coordination in Multi-Agent Systems.
LLaMAC implements a value distribution encoding similar to that found in the human brain, utilizing internal and external feedback mechanisms to facilitate collaboration and iterative reasoning among its modules.
arXiv Detail & Related papers (2023-11-23T10:14:58Z) - Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning [70.74928578278957]
In open-domain question-answering (ODQA), most existing questions require only single-hop reasoning over commonsense knowledge.
Large language models (LLMs) have found significant utility in facilitating ODQA without an external corpus.
We propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs.
arXiv Detail & Related papers (2023-10-20T14:51:10Z) - AgentBench: Evaluating LLMs as Agents [88.45506148281379]
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks.
We present AgentBench, a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.
arXiv Detail & Related papers (2023-08-07T16:08:11Z) - An Analysis of Multi-Agent Reinforcement Learning for Decentralized Inventory Control Systems [0.0]
Most solutions to the inventory management problem assume a centralization of information incompatible with organisational constraints in real supply chain networks.
This paper proposes a decentralized data-driven solution to inventory management problems using multi-agent reinforcement learning.
Results show that using multi-agent proximal policy optimization with a centralized critic leads to performance very close to that of a centralized data-driven solution.
arXiv Detail & Related papers (2023-07-21T08:52:08Z) - Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning addresses the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z) - Combining Propositional Logic Based Decision Diagrams with Decision Making in Urban Systems [10.781866671930851]
We tackle the problem of multiagent pathfinding under uncertainty and partial observability.
We use propositional logic based decision diagrams and integrate them with RL algorithms to enable fast simulation for RL.
arXiv Detail & Related papers (2020-11-09T13:13:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.