Breeding Programs Optimization with Reinforcement Learning
- URL: http://arxiv.org/abs/2406.03932v1
- Date: Thu, 6 Jun 2024 10:17:51 GMT
- Title: Breeding Programs Optimization with Reinforcement Learning
- Authors: Omar G. Younis, Luca Corinzia, Ioannis N. Athanasiadis, Andreas Krause, Joachim M. Buhmann, Matteo Turchetta
- Abstract summary: This paper introduces the use of Reinforcement Learning (RL) to optimize simulated crop breeding programs.
To benchmark RL-based breeding algorithms, we introduce a suite of Gym environments.
The study demonstrates the superiority of RL techniques over standard practices in terms of genetic gain when simulated in silico.
- Score: 41.75807702905971
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Crop breeding is crucial in improving agricultural productivity while potentially decreasing land usage, greenhouse gas emissions, and water consumption. However, breeding programs are challenging due to long turnover times, high-dimensional decision spaces, long-term objectives, and the need to adapt to rapid climate change. This paper introduces the use of Reinforcement Learning (RL) to optimize simulated crop breeding programs. RL agents are trained to make optimal crop selection and cross-breeding decisions based on genetic information. To benchmark RL-based breeding algorithms, we introduce a suite of Gym environments. The study demonstrates the superiority of RL techniques over standard practices in terms of genetic gain when simulated in silico using real-world genomic maize data.
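The paper's Gym environment suite is not reproduced here, but the interaction pattern it implies can be sketched with a self-contained toy stand-in: genotypes as 0/1 marker vectors, actions as parent pairs to cross, and reward as per-generation genetic gain. The class name, dynamics, and reward below are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of a Gym-style crop-breeding environment (hypothetical;
# the paper's actual environment suite is not specified here).
import random

class SimpleBreedingEnv:
    """Toy environment: genotypes are 0/1 marker vectors; the trait value
    is the number of favourable (1) markers. An action picks two parents
    to cross; the reward is the resulting genetic gain of the population."""

    def __init__(self, pop_size=8, n_markers=10, seed=0):
        self.pop_size = pop_size
        self.n_markers = n_markers
        self.rng = random.Random(seed)

    def reset(self):
        self.pop = [[self.rng.randint(0, 1) for _ in range(self.n_markers)]
                    for _ in range(self.pop_size)]
        return self.pop

    def _mean_value(self):
        return sum(sum(g) for g in self.pop) / self.pop_size

    def step(self, action):
        i, j = action  # indices of the two parents to cross
        before = self._mean_value()
        # Uniform crossover: each marker inherited from a random parent.
        child = [self.pop[i][m] if self.rng.random() < 0.5 else self.pop[j][m]
                 for m in range(self.n_markers)]
        # Replace the worst individual with the new child.
        worst = min(range(self.pop_size), key=lambda k: sum(self.pop[k]))
        self.pop[worst] = child
        reward = self._mean_value() - before  # genetic gain this generation
        return self.pop, reward, False, {}

env = SimpleBreedingEnv()
obs = env.reset()
# Greedy baseline: always cross the two best genotypes; an RL agent would
# instead learn which pairs to select from the genetic information.
ranked = sorted(range(env.pop_size), key=lambda k: -sum(obs[k]))
obs, gain, done, info = env.step((ranked[0], ranked[1]))
```

The greedy pairing shown last is the kind of standard practice the paper's RL agents are benchmarked against.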
Related papers
- A Comparative Study of Deep Reinforcement Learning for Crop Production Management [13.123171643387668]
Reinforcement learning (RL) has emerged as a promising tool for developing adaptive crop management policies.
In the gym-DSSAT crop model environment, one of the most widely used simulators for crop management, proximal policy optimization (PPO) and deep Q-networks (DQN) have shown promising results.
In this study, we evaluated PPO and DQN against static baseline policies across three RL tasks provided by the gym-DSSAT environment: fertilization, irrigation, and mixed management.
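As a hedged illustration of this evaluation setup (gym-DSSAT itself is not reproduced; the toy fertilization dynamics and both policies below are invented for the sketch), a static daily dose can be compared against a simple reactive policy on episode return:

```python
# Sketch: comparing a reactive policy against a static baseline on episode
# return, in the spirit of the PPO/DQN-vs-baseline evaluation described
# above. The environment is a toy stand-in, not gym-DSSAT itself.
import random

def run_episode(policy, horizon=30, seed=0):
    """Toy fertilization task: soil N decays each day; reward penalises
    both the deviation from a target N level and the fertiliser applied."""
    rng = random.Random(seed)
    soil_n, total = 50.0, 0.0
    for _ in range(horizon):
        dose = policy(soil_n)                       # kg N/ha applied today
        soil_n = 0.9 * soil_n + dose + rng.uniform(-2, 2)
        total += -abs(80.0 - soil_n) - 0.5 * dose   # deficit + input cost
    return total

static_policy = lambda obs: 10.0                         # fixed daily dose
reactive_policy = lambda obs: max(0.0, 80.0 - 0.9 * obs)  # close the gap

baseline_return = run_episode(static_policy)
reactive_return = run_episode(reactive_policy)
```

Under these toy dynamics the reactive policy tracks the target level and earns a higher return, which is the shape of result the baseline comparison above is probing.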
arXiv Detail & Related papers (2024-11-06T18:35:51Z)
- LSTM Autoencoder-based Deep Neural Networks for Barley Genotype-to-Phenotype Prediction [16.99449054451577]
We propose a new LSTM autoencoder-based model for barley genotype-to-phenotype prediction, specifically for flowering time and grain yield estimation.
Our model outperformed the other baseline methods, demonstrating its potential in handling complex high-dimensional agricultural datasets.
arXiv Detail & Related papers (2024-07-21T16:07:43Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Machine Learning-based Nutrient Application's Timeline Recommendation for Smart Agriculture: A Large-Scale Data Mining Approach [0.0]
Inaccurate fertiliser application decisions can lead to costly consequences, hinder food production, and cause environmental harm.
We propose a solution to predict nutrient application by determining required fertiliser quantities for an entire season.
The proposed solution recommends adjusting fertiliser amounts based on weather conditions and soil characteristics to promote cost-effective and environmentally friendly agriculture.
arXiv Detail & Related papers (2023-10-18T15:37:19Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Optimizing Nitrogen Management with Deep Reinforcement Learning and Crop Simulations [11.576438685465797]
Nitrogen (N) management is critical to sustain soil fertility and crop production while minimizing the negative environmental impact, but is challenging to optimize.
This paper proposes an intelligent N management system using deep reinforcement learning (RL) and crop simulations with the Decision Support System for Agrotechnology Transfer (DSSAT).
We then train management policies with deep Q-network and soft actor-critic algorithms, using the Gym-DSSAT interface, which allows for daily interactions between the simulated crop environment and RL agents.
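The daily-interaction training loop described above can be sketched in its simplest form with tabular Q-learning, the tabular ancestor of the deep Q-network. The coarse soil-N states, action doses, and reward shaping below are invented for the sketch and do not reproduce DSSAT.

```python
# Sketch: tabular Q-learning (the tabular ancestor of deep Q-networks) on
# a toy daily N-application task. gym-DSSAT itself is not reproduced here;
# states, actions, and rewards are illustrative assumptions.
import random
from collections import defaultdict

random.seed(1)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2
ACTIONS = [0, 20, 40]            # kg N/ha applied on a given day
Q = defaultdict(float)           # Q[(state, action)] value table

def step(state, action):
    """Toy dynamics: state is a coarse soil-N level in {0..4};
    the reward targets level 2 and charges for fertiliser input."""
    nxt = min(4, max(0, state - 1 + action // 20))
    reward = (2.0 if nxt == 2 else -1.0) - 0.02 * action
    return nxt, reward

state = 0
for day in range(2000):                       # daily interactions
    if random.random() < EPS:                 # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # One-step temporal-difference update toward the bootstrapped target.
    target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = nxt
```

A deep Q-network replaces the table `Q` with a neural network over the (much larger) observation space; the update rule keeps the same bootstrapped-target structure.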
arXiv Detail & Related papers (2022-04-21T20:26:41Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Machine Learning aided Crop Yield Optimization [0.0]
We present a crop simulation environment with an OpenAI Gym interface, and apply modern deep reinforcement learning (DRL) algorithms to optimize yield.
We empirically show that DRL algorithms may be useful in discovering new policies and approaches to help optimize crop yield, while simultaneously minimizing constraining factors such as water and fertilizer usage.
arXiv Detail & Related papers (2021-11-01T14:14:11Z)
- Towards Standardizing Reinforcement Learning Approaches for Stochastic Production Scheduling [77.34726150561087]
Reinforcement learning can be used to solve scheduling problems.
Existing studies rely on (sometimes) complex simulations for which the code is unavailable.
There is a vast array of RL designs to choose from.
Standardization of model descriptions (both the production setup and the RL design) and of validation schemes is a prerequisite.
arXiv Detail & Related papers (2021-04-16T16:07:10Z)
- Learning from Data to Optimize Control in Precision Farming [77.34726150561087]
This special issue presents the latest developments in statistical inference, machine learning, and optimal control for precision farming.
Satellite positioning and navigation, together with the Internet of Things, generate vast amounts of information that can be used to optimize farming processes in real time.
arXiv Detail & Related papers (2020-07-07T12:44:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.