A semi-centralized multi-agent RL framework for efficient irrigation scheduling
- URL: http://arxiv.org/abs/2408.08442v1
- Date: Thu, 15 Aug 2024 22:27:32 GMT
- Title: A semi-centralized multi-agent RL framework for efficient irrigation scheduling
- Authors: Bernard T. Agyeman, Benjamin Decard-Nelson, Jinfeng Liu, Sirish L. Shah
- Abstract summary: This paper proposes a Semi-Centralized Multi-Agent Reinforcement Learning (SCMARL) approach for irrigation scheduling in spatially variable agricultural fields.
The SCMARL framework is hierarchical in nature, with a centralized coordinator agent at the top level and decentralized local agents at the second level.
An extensive evaluation on a large-scale field in Lethbridge, Canada, compares the SCMARL approach with a learning-based multi-agent model predictive control scheduling approach.
- Score: 0.8740747742808569
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a Semi-Centralized Multi-Agent Reinforcement Learning (SCMARL) approach for irrigation scheduling in spatially variable agricultural fields, where management zones address spatial variability. The SCMARL framework is hierarchical in nature, with a centralized coordinator agent at the top level and decentralized local agents at the second level. The coordinator agent makes daily binary irrigation decisions based on field-wide conditions, which are communicated to the local agents. Local agents determine appropriate irrigation amounts for specific management zones using local conditions. The framework employs a state augmentation approach to handle non-stationarity in the local agents' environments. An extensive evaluation on a large-scale field in Lethbridge, Canada, compares the SCMARL approach with a learning-based multi-agent model predictive control scheduling approach, highlighting its enhanced performance in water conservation and improved Irrigation Water Use Efficiency (IWUE). Notably, the proposed approach achieved 4.0% savings in irrigation water while enhancing the IWUE by 6.3%.
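The hierarchical decision flow described in the abstract can be sketched as follows. The policies, state variables, and thresholds below are hypothetical stand-ins chosen for illustration only, not the authors' implementation:

```python
import numpy as np

def coordinator_decide(field_state, policy):
    """Daily binary decision: irrigate (1) or not (0), from field-wide state."""
    return int(policy(field_state) > 0.5)

def local_decide(zone_state, irrigate_flag, policy):
    """Per-zone irrigation amount; the coordinator's flag augments the
    local state, mirroring the paper's state augmentation idea."""
    if not irrigate_flag:
        return 0.0
    augmented = np.concatenate([zone_state, [irrigate_flag]])
    return float(policy(augmented))

# Toy policies (hypothetical): threshold on mean soil-moisture deficit.
field_policy = lambda s: 1.0 - np.mean(s)               # high deficit -> irrigate
zone_policy = lambda s: max(0.0, 25.0 * (0.5 - s[0]))   # mm of water per zone

field_state = np.array([0.2, 0.3, 0.25])   # soil moisture fraction per zone
flag = coordinator_decide(field_state, field_policy)
amounts = [local_decide(np.array([m]), flag, zone_policy) for m in field_state]
```

In a trained system the two lambdas would be learned RL policies; the point of the sketch is the two-level structure: one field-wide binary decision, then per-zone amounts conditioned on it.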
Related papers
- Deep Reinforcement Multi-agent Learning framework for Information Gathering with Local Gaussian Processes for Water Monitoring [3.2266662249755025]
It is proposed to use Local Gaussian Processes and Deep Reinforcement Learning to jointly obtain effective monitoring policies.
A deep convolutional policy is proposed that bases its decisions on the mean and variance of this model.
Using a Double Deep Q-Learning algorithm, agents are trained to minimize the estimation error in a safe manner.
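For reference, the Double Deep Q-Learning target mentioned above decouples action selection from action evaluation to curb overestimation. A minimal sketch, with toy Q-value arrays standing in for the two networks (all numbers illustrative):

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: the online net picks the next action,
    the target net evaluates it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # action chosen by online net
    return reward + gamma * q_target_next[a_star]  # value from target net

y = double_dqn_target(1.0, np.array([0.2, 0.9]), np.array([0.5, 0.4]))
```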
arXiv Detail & Related papers (2024-01-09T15:58:15Z)
- 2-Level Reinforcement Learning for Ships on Inland Waterways: Path Planning and Following [0.0]
This paper proposes a realistic modularized framework for controlling autonomous surface vehicles (ASVs) on inland waterways (IWs) based on deep reinforcement learning (DRL)
The framework improves operational safety and comprises two levels: a high-level local path planning (LPP) unit and a low-level path following (PF) unit, each consisting of a DRL agent.
arXiv Detail & Related papers (2023-07-25T08:42:59Z)
- Integrating machine learning paradigms and mixed-integer model predictive control for irrigation scheduling [0.20999222360659603]
The agricultural sector faces significant challenges in water resource conservation and crop yield optimization.
Traditional irrigation scheduling methods often prove inadequate in meeting the needs of large-scale irrigation systems.
This paper proposes a predictive irrigation scheduler that leverages the three paradigms of machine learning to optimize irrigation schedules.
arXiv Detail & Related papers (2023-06-14T19:38:44Z)
- Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning [139.53668999720605]
We present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO.
We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate.
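The vanilla-PPO-style local update that each agent performs centers on the clipped surrogate objective. A minimal sketch with toy probability ratios and advantages (the numbers are illustrative only, not from the paper):

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO clipped objective: mean of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return float(np.mean(np.minimum(unclipped, clipped)))

ratios = np.array([1.5, 0.7])       # pi_new / pi_old per sample
advantages = np.array([1.0, -1.0])  # estimated advantages per sample
objective = clipped_surrogate(ratios, advantages)
```

In the multi-agent setting each agent maximizes this objective over its own local policy; the paper's contribution is proving that such local updates nonetheless reach the globally optimal joint policy.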
arXiv Detail & Related papers (2023-05-08T16:20:03Z)
- The challenge of redundancy on multi-agent value factorisation [12.63182277116319]
In the field of cooperative multi-agent reinforcement learning (MARL), the standard paradigm is the use of centralised training and decentralised execution.
We propose leveraging layerwise relevance propagation (LRP) to instead separate the learning of the joint value function and generation of local reward signals.
We find that although the performance of both baselines VDN and Qmix degrades with the number of redundant agents, RDN is unaffected.
arXiv Detail & Related papers (2023-03-28T20:41:12Z)
- Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution paradigm.
arXiv Detail & Related papers (2021-09-22T10:08:15Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experimental results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Adaptive Mutual Supervision for Weakly-Supervised Temporal Action Localization [92.96802448718388]
We introduce an adaptive mutual supervision framework (AMS) for temporal action localization.
The proposed AMS method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T08:31:10Z)
- Cooperative Heterogeneous Deep Reinforcement Learning [47.97582814287474]
We present a Cooperative Heterogeneous Deep Reinforcement Learning framework that can learn a policy by integrating the advantages of heterogeneous agents.
Global agents are off-policy agents that can utilize experiences from the other agents.
Local agents are either on-policy agents or population-based evolutionary algorithm (EA) agents that can explore the local area effectively.
arXiv Detail & Related papers (2020-11-02T07:39:09Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.