Learning-based agricultural management in partially observable
environments subject to climate variability
- URL: http://arxiv.org/abs/2401.01273v1
- Date: Tue, 2 Jan 2024 16:18:53 GMT
- Title: Learning-based agricultural management in partially observable
environments subject to climate variability
- Authors: Zhaoan Wang, Shaoping Xiao, Junchao Li, Jun Wang
- Abstract summary: Agricultural management holds a central role in shaping crop yield, economic profitability, and environmental sustainability.
We introduce an innovative framework that integrates Deep Reinforcement Learning (DRL) with Recurrent Neural Networks (RNNs).
Our study illuminates the need for agent retraining to acquire new optimal policies under extreme weather events.
- Score: 5.5062239803516615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agricultural management, with a particular focus on fertilization strategies,
holds a central role in shaping crop yield, economic profitability, and
environmental sustainability. While conventional guidelines offer valuable
insights, their efficacy diminishes when confronted with extreme weather
conditions, such as heatwaves and droughts. In this study, we introduce an
innovative framework that integrates Deep Reinforcement Learning (DRL) with
Recurrent Neural Networks (RNNs). Leveraging the Gym-DSSAT simulator, we train
an intelligent agent to master optimal nitrogen fertilization management.
Through a series of simulation experiments conducted on corn crops in Iowa, we
compare Partially Observable Markov Decision Process (POMDP) models with Markov
Decision Process (MDP) models. Our research underscores the advantages of
utilizing sequential observations in developing more efficient nitrogen input
policies. Additionally, we explore the impact of climate variability,
particularly during extreme weather events, on agricultural outcomes and
management. Our findings demonstrate the adaptability of fertilization policies
to varying climate conditions. Notably, a fixed policy exhibits resilience in
the face of minor climate fluctuations, leading to commendable corn yields,
cost-effectiveness, and environmental conservation. However, our study
illuminates the need for agent retraining to acquire new optimal policies under
extreme weather events. This research charts a promising course toward
adaptable fertilization strategies that can seamlessly align with dynamic
climate scenarios, ultimately contributing to the optimization of crop
management practices.
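
A minimal illustration of the core idea (not the authors' code): in the POMDP setting the agent conditions its fertilization decision on the history of partial observations, which a recurrent network can summarize. Everything below is an assumption made for the sketch: the observation size, the discrete nitrogen rates, the toy environment standing in for Gym-DSSAT, and the reward; only the use of a recurrent Q-network over sequential observations mirrors the framework described in the abstract.

import random
import torch
import torch.nn as nn

N_OBS = 8        # a few daily weather/soil variables (assumed size)
N_ACTIONS = 5    # discrete nitrogen rates, e.g. 0/20/40/60/80 kg/ha (assumed)

class RecurrentQNet(nn.Module):
    """GRU over the observation history -> Q-values for each fertilization action."""
    def __init__(self, n_obs=N_OBS, n_actions=N_ACTIONS, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_obs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, h=None):
        # obs_seq: (batch, time, n_obs); h carries the recurrent state between calls
        out, h = self.gru(obs_seq, h)
        return self.head(out[:, -1]), h

class ToyCropEnv:
    """Stand-in for the Gym-DSSAT simulator: random partial observations and a
    placeholder reward that favors a moderate nitrogen rate."""
    def __init__(self, season_length=30):
        self.season_length = season_length
    def reset(self):
        self.t = 0
        return torch.randn(N_OBS)
    def step(self, action):
        self.t += 1
        reward = -0.1 * abs(action - 2)          # placeholder agronomic/economic signal
        done = self.t >= self.season_length
        return torch.randn(N_OBS), reward, done

def run_episode(env, qnet, epsilon=0.1):
    """One growing season with an epsilon-greedy recurrent agent (POMDP setting)."""
    obs, h, total, done = env.reset(), None, 0.0, False
    while not done:
        with torch.no_grad():
            q, h = qnet(obs.view(1, 1, -1), h)   # condition on the running history
        action = random.randrange(N_ACTIONS) if random.random() < epsilon \
                 else int(q.argmax())
        obs, reward, done = env.step(action)
        total += reward
    return total

if __name__ == "__main__":
    print("seasonal return:", run_episode(ToyCropEnv(), RecurrentQNet()))

An MDP-style baseline would instead feed only the current observation into a feed-forward network; comparing seasonal returns of the two variants under perturbed weather inputs is one way to reproduce the kind of POMDP-versus-MDP and climate-variability comparisons the abstract describes.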
Related papers
- A Comparative Study of Deep Reinforcement Learning for Crop Production Management [13.123171643387668]
Reinforcement learning (RL) has emerged as a promising tool for developing adaptive crop management policies.
In the gym-DSSAT crop model environment, one of the most widely used simulators for crop management, proximal policy optimization (PPO) and deep Q-networks (DQN) have shown promising results.
In this study, we evaluated PPO and DQN against static baseline policies across three RL tasks provided by the gym-DSSAT environment: fertilization, irrigation, and mixed management.
arXiv Detail & Related papers (2024-11-06T18:35:51Z) - Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region [62.09891513612252]
We focus on limited-area modeling and train our model specifically for localized region-level downstream tasks.
We consider the MENA region due to its unique climatic challenges, where accurate localized weather forecasting is crucial for managing water resources, agriculture and mitigating the impacts of extreme weather events.
Our study aims to validate the effectiveness of integrating parameter-efficient fine-tuning (PEFT) methodologies, specifically Low-Rank Adaptation (LoRA) and its variants, to enhance forecast accuracy, as well as training speed, computational resource utilization, and memory efficiency in weather and climate modeling for specific regions.
arXiv Detail & Related papers (2024-09-11T19:31:56Z) - Intelligent Agricultural Management Considering N$_2$O Emission and
Climate Variability with Uncertainties [5.04035338843957]
This study examines how artificial intelligence (AI) can be used in farming to boost crop yields, fine-tune nitrogen use and watering, and reduce nitrate runoff and greenhouse gases.
Facing climate change and limited agricultural knowledge, we use Partially Observable Markov Decision Processes (POMDPs) with a crop simulator to model AI agents' interactions with farming environments.
Also, we develop Machine Learning (ML) models to predict N$_2$O emissions, integrating these predictions into the simulator.
arXiv Detail & Related papers (2024-02-13T22:29:40Z) - Comparing Data-Driven and Mechanistic Models for Predicting Phenology in
Deciduous Broadleaf Forests [47.285748922842444]
We train a deep neural network to predict a phenological index from meteorological time series.
We find that this approach outperforms traditional process-based models.
arXiv Detail & Related papers (2024-01-08T15:29:23Z) - A Comparative Study of Machine Learning Algorithms for Anomaly Detection
in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demand for high-performance machine learning models against environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z) - A SWAT-based Reinforcement Learning Framework for Crop Management [0.0]
We introduce a reinforcement learning (RL) environment that leverages the dynamics in the Soil and Water Assessment Tool (SWAT).
This drastically saves the time and resources that would otherwise be expended over a full growing season.
We demonstrate the utility of our framework by developing and benchmarking various decision-making agents following management strategies informed by standard farming practices and state-of-the-art RL algorithms.
arXiv Detail & Related papers (2023-02-10T00:24:22Z) - DeepG2P: Fusing Multi-Modal Data to Improve Crop Production [1.7406327893433846]
We present a Natural Language Processing-based neural network architecture to process the genotype (G), environment (E) and management (M) inputs and their interactions.
We show that by modeling DNA as natural language, our approach performs better than previous approaches when tested for new environments.
arXiv Detail & Related papers (2022-11-11T03:32:44Z) - Risk-averse Stochastic Optimization for Farm Management Practices and
Cultivar Selection Under Uncertainty [8.427937898153779]
We develop optimization frameworks under uncertainty using conditional value-at-risk in the objective function of the programming model.
As a case study, we set up the crop model for 25 locations across the US Corn Belt.
Results indicated that the proposed model produced meaningful connections between weather and optimal decisions.
arXiv Detail & Related papers (2022-07-17T01:14:43Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic
Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z) - Ecological Reinforcement Learning [76.9893572776141]
We study the kinds of environment properties that can make learning under such conditions easier.
Understanding how properties of the environment impact the performance of reinforcement learning agents can help us structure our tasks in ways that make learning tractable.
arXiv Detail & Related papers (2020-06-22T17:55:03Z) - Data-driven control of micro-climate in buildings: an event-triggered
reinforcement learning approach [56.22460188003505]
We formulate the micro-climate control problem based on semi-Markov decision processes.
We propose two learning algorithms for event-triggered control of micro-climate in buildings.
We show the efficacy of our proposed approach by designing a smart learning thermostat.
arXiv Detail & Related papers (2020-01-28T18:20:43Z)