WOFOSTGym: A Crop Simulator for Learning Annual and Perennial Crop Management Strategies
- URL: http://arxiv.org/abs/2502.19308v2
- Date: Thu, 27 Feb 2025 03:35:09 GMT
- Title: WOFOSTGym: A Crop Simulator for Learning Annual and Perennial Crop Management Strategies
- Authors: William Solow, Sandhya Saisubramanian, Alan Fern
- Abstract summary: WOFOSTGym is a crop simulation environment designed to train reinforcement learning (RL) agents to optimize agromanagement decisions. Our simulator supports 23 annual crops and two perennial crops, enabling RL agents to learn diverse agromanagement strategies in multi-year, multi-crop, and multi-farm settings.
- Score: 17.270273911931216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce WOFOSTGym, a novel crop simulation environment designed to train reinforcement learning (RL) agents to optimize agromanagement decisions for annual and perennial crops in single and multi-farm settings. Effective crop management requires optimizing yield and economic returns while minimizing environmental impact, a complex sequential decision-making problem well suited for RL. However, the lack of simulators for perennial crops in multi-farm contexts has hindered RL applications in this domain. Existing crop simulators also do not support multiple annual crops. WOFOSTGym addresses these gaps by supporting 23 annual crops and two perennial crops, enabling RL agents to learn diverse agromanagement strategies in multi-year, multi-crop, and multi-farm settings. Our simulator offers a suite of challenging tasks for learning under partial observability, non-Markovian dynamics, and delayed feedback. WOFOSTGym's standard RL interface allows researchers without agricultural expertise to explore a wide range of agromanagement problems. Our experiments demonstrate the learned behaviors across various crop varieties and soil types, highlighting WOFOSTGym's potential for advancing RL-driven decision support in agriculture.
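Since the abstract advertises a standard RL interface, a Gymnasium-style interaction loop is a natural way to picture an experiment. The sketch below is illustrative only: the environment id, its registration with Gymnasium, and the action semantics are assumptions rather than WOFOSTGym's documented API.

```python
# Minimal interaction sketch, assuming a Gymnasium-style registration for
# WOFOSTGym. The environment id "wofost-annual-v0" and the action semantics
# are hypothetical placeholders, not the simulator's documented API.
import gymnasium as gym

env = gym.make("wofost-annual-v0")  # hypothetical single-farm, annual-crop task

obs, info = env.reset(seed=0)
done, total_reward = False, 0.0
while not done:
    # Placeholder policy: sample a random agromanagement action
    # (e.g., a daily fertilization or irrigation decision).
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward:.2f}")
env.close()
```

In practice, the random policy would be replaced by an RL agent selecting agromanagement actions over one or more growing seasons, possibly across several farms.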
Related papers
- RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning [125.65034908728828]
Training large language models (LLMs) as interactive agents presents unique challenges.
While reinforcement learning has enabled progress in static tasks, multi-turn agent RL training remains underexplored.
We propose StarPO, a general framework for trajectory-level agent RL, and introduce RAGEN, a modular system for training and evaluating LLM agents.
arXiv Detail & Related papers (2025-04-24T17:57:08Z) - Agri-LLaVA: Knowledge-Infused Large Multimodal Assistant on Agricultural Pests and Diseases [49.782064512495495]
We construct the first multimodal instruction-following dataset in the agricultural domain. This dataset covers over 221 types of pests and diseases with approximately 400,000 data entries. We propose a knowledge-infused training method to develop Agri-LLaVA, an agricultural multimodal conversation system.
arXiv Detail & Related papers (2024-12-03T04:34:23Z) - MALT: Improving Reasoning with Multi-Agent LLM Training [66.9481561915524]
MALT (Multi-Agent LLM Training) is a novel post-training strategy that divides the reasoning process into generation, verification, and refinement steps.
On MATH, GSM8K, and CSQA, MALT surpasses the same baseline LLM with a relative improvement of 15.66%, 7.42%, and 9.40% respectively.
arXiv Detail & Related papers (2024-12-02T19:30:36Z) - A Comparative Study of Deep Reinforcement Learning for Crop Production Management [13.123171643387668]
Reinforcement learning (RL) has emerged as a promising tool for developing adaptive crop management policies.
In the gym-DSSAT crop model environment, one of the most widely used simulators for crop management, proximal policy optimization (PPO) and deep Q-networks (DQN) have shown promising results.
In this study, we evaluated PPO and DQN against static baseline policies on three RL tasks provided by the gym-DSSAT environment: fertilization, irrigation, and mixed management.
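The evaluation protocol above, pitting PPO and DQN against static baselines, can be pictured with a small rollout helper. The sketch below is a hypothetical illustration: the environment id, the action format, and the fixed fertilizer amount are assumptions, not the actual gym-DSSAT API.

```python
# Illustrative sketch of comparing a policy against a static baseline in a
# gym-DSSAT-style task. Environment id, action format, and fertilizer amount
# are assumptions for illustration only.
import gymnasium as gym
import numpy as np

def rollout(env, policy_fn, episodes=10, seed=0):
    """Average episodic return of a policy given as a callable obs -> action."""
    returns = []
    for ep in range(episodes):
        obs, _ = env.reset(seed=seed + ep)
        done, ep_return = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy_fn(obs))
            ep_return += reward
            done = terminated or truncated
        returns.append(ep_return)
    return float(np.mean(returns))

env = gym.make("gym-dssat-fertilization-v0")  # hypothetical environment id

# Static baseline: apply a fixed fertilizer amount regardless of crop state.
static_policy = lambda obs: np.array([40.0], dtype=np.float32)

print("Static baseline return:", rollout(env, static_policy))
```

A PPO or DQN agent trained on the same task would be scored with the same rollout helper, e.g. `policy_fn = lambda obs: model.predict(obs, deterministic=True)[0]` for a stable-baselines3-style model, making the comparison with the static baseline direct.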
arXiv Detail & Related papers (2024-11-06T18:35:51Z) - AgGym: An agricultural biotic stress simulation environment for ultra-precision management planning [8.205412609306713]
We present AgGym, a modular, crop and stress simulation framework to model the spread of biotic stresses in a field.
We show that AgGym can be customized with limited data to simulate yield outcomes under various biotic stress conditions.
Our proposed framework enables personalized decision support that can transform biotic stress management from schedule-based to opportunistic and prescriptive.
arXiv Detail & Related papers (2024-09-01T14:55:45Z) - Generating Diverse Agricultural Data for Vision-Based Farming Applications [74.79409721178489]
The proposed model simulates distinct growth stages of plants, diverse soil conditions, and randomized field arrangements under varying lighting conditions.
Our dataset includes 12,000 images with semantic labels, offering a comprehensive resource for computer vision tasks in precision agriculture.
arXiv Detail & Related papers (2024-03-27T08:42:47Z) - Domain Generalization for Crop Segmentation with Standardized Ensemble Knowledge Distillation [42.39035033967183]
Service robots need a real-time perception system that understands their surroundings and identifies their targets in the wild.
Existing methods, however, often fall short in generalizing to new crops and environmental conditions.
We propose a novel approach to enhance domain generalization using knowledge distillation.
arXiv Detail & Related papers (2023-04-03T14:28:29Z) - A SWAT-based Reinforcement Learning Framework for Crop Management [0.0]
We introduce a reinforcement learning (RL) environment that leverages the dynamics of the Soil and Water Assessment Tool (SWAT).
This drastically saves the time and resources that would otherwise be spent over a full growing season.
We demonstrate the utility of our framework by developing and benchmarking various decision-making agents following management strategies informed by standard farming practices and state-of-the-art RL algorithms.
arXiv Detail & Related papers (2023-02-10T00:24:22Z) - Optimizing Crop Management with Reinforcement Learning and Imitation Learning [9.69704937572711]
We present an intelligent crop management system that optimizes nitrogen (N) fertilization and irrigation simultaneously via reinforcement learning (RL), imitation learning (IL), and crop simulations.
We conduct experiments on a case study using maize in Florida and compare trained policies with a maize management guideline in simulations.
Our trained policies under both full and partial observations achieve better outcomes, resulting in a higher profit or a similar profit with a smaller environmental impact.
arXiv Detail & Related papers (2022-09-20T20:48:52Z) - Optimizing Nitrogen Management with Deep Reinforcement Learning and Crop Simulations [11.576438685465797]
Nitrogen (N) management is critical to sustain soil fertility and crop production while minimizing the negative environmental impact, but is challenging to optimize.
This paper proposes an intelligent N management system using deep reinforcement learning (RL) and crop simulations with the Decision Support System for Agrotechnology Transfer (DSSAT).
We then train management policies with deep Q-network and soft actor-critic algorithms, using the Gym-DSSAT interface, which allows for daily interactions between the simulated crop environment and RL agents.
arXiv Detail & Related papers (2022-04-21T20:26:41Z) - Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z) - Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments [89.04823188871906]
Generation of diverse realistic scenarios is challenging for real-time strategy (RTS) environments.
Most existing simulators rely on randomly generated environments.
We show the benefits of adopting an existing formal scenario specification language, SCENIC, to assist researchers.
arXiv Detail & Related papers (2021-06-18T21:49:46Z)