Machine Learning aided Crop Yield Optimization
- URL: http://arxiv.org/abs/2111.00963v1
- Date: Mon, 1 Nov 2021 14:14:11 GMT
- Title: Machine Learning aided Crop Yield Optimization
- Authors: Chace Ashcraft, Kiran Karra
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a crop simulation environment with an OpenAI Gym interface, and
apply modern deep reinforcement learning (DRL) algorithms to optimize yield. We
empirically show that DRL algorithms may be useful in discovering new policies
and approaches to help optimize crop yield, while simultaneously minimizing
constraining factors such as water and fertilizer usage. We propose that this
hybrid plant modeling and data-driven approach for discovering new strategies
to optimize crop yield may help address upcoming global food demands due to
population expansion and climate change.
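The interaction loop implied by an OpenAI Gym interface can be sketched as follows. This is a hypothetical toy sketch, not the paper's actual simulator: the environment name `ToyCropEnv`, its state variables (soil moisture, nutrients, biomass), and its reward shaping (growth minus a penalty on water and fertilizer usage) are illustrative assumptions that mirror the objectives the abstract describes.

```python
class ToyCropEnv:
    """Minimal Gym-style crop environment (hypothetical sketch).

    State: (soil_moisture, nutrients, biomass); action: (water, fertilizer),
    each in [0, 1]. Reward trades off biomass growth against input usage.
    """

    def __init__(self, season_length=30):
        self.season_length = season_length
        self.reset()

    def reset(self):
        self.day = 0
        self.moisture, self.nutrients, self.biomass = 0.5, 0.5, 0.0
        return (self.moisture, self.nutrients, self.biomass)

    def step(self, action):
        water, fertilizer = action
        # Resources deplete each day; the agent's actions replenish them.
        self.moisture = min(1.0, max(0.0, self.moisture - 0.1 + water))
        self.nutrients = min(1.0, max(0.0, self.nutrients - 0.05 + fertilizer))
        # Growth needs both water and nutrients; inputs are penalized,
        # reflecting the goal of maximizing yield while minimizing usage.
        growth = self.moisture * self.nutrients
        self.biomass += growth
        reward = growth - 0.2 * (water + fertilizer)
        self.day += 1
        done = self.day >= self.season_length
        return (self.moisture, self.nutrients, self.biomass), reward, done, {}

# Roll out one growing season with a fixed "light irrigation" policy,
# standing in for the learned DRL policy.
env = ToyCropEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done, info = env.step((0.1, 0.05))
    total_reward += reward
```

A DRL algorithm would replace the fixed action with a policy mapping the observed state to water and fertilizer amounts, trained to maximize the cumulative reward over the season.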
Related papers
- LSTM Autoencoder-based Deep Neural Networks for Barley Genotype-to-Phenotype Prediction [16.99449054451577]
We propose a new LSTM autoencoder-based model for barley genotype-to-phenotype prediction, specifically for flowering time and grain yield estimation.
Our model outperformed the other baseline methods, demonstrating its potential in handling complex high-dimensional agricultural datasets.
arXiv Detail & Related papers (2024-07-21T16:07:43Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Breeding Programs Optimization with Reinforcement Learning [41.75807702905971]
This paper introduces the use of Reinforcement Learning (RL) to optimize simulated crop breeding programs.
To benchmark RL-based breeding algorithms, we introduce a suite of Gym environments.
The study demonstrates the superiority of RL techniques over standard practices in terms of genetic gain in in silico simulations.
arXiv Detail & Related papers (2024-06-06T10:17:51Z)
- Data-Efficient Task Generalization via Probabilistic Model-based Meta Reinforcement Learning [58.575939354953526]
PACOH-RL is a novel model-based Meta-Reinforcement Learning (Meta-RL) algorithm designed to efficiently adapt control policies to changing dynamics.
Existing Meta-RL methods require abundant meta-learning data, limiting their applicability in settings such as robotics.
Our experiment results demonstrate that PACOH-RL outperforms model-based RL and model-based Meta-RL baselines in adapting to new dynamic conditions.
arXiv Detail & Related papers (2023-11-13T18:51:57Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- A reinforcement learning strategy for p-adaptation in high order solvers [0.0]
Reinforcement learning (RL) has emerged as a promising approach to automating decision processes.
This paper explores the application of RL techniques to optimise the order in the computational mesh when using high-order solvers.
arXiv Detail & Related papers (2023-06-14T07:01:31Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- Risk-averse Stochastic Optimization for Farm Management Practices and Cultivar Selection Under Uncertainty [8.427937898153779]
We develop optimization frameworks under uncertainty using conditional value-at-risk in the objective function.
As a case study, we set up the crop model for 25 locations across the US Corn Belt.
Results indicated that the proposed model produced meaningful connections between weather and optimal decisions.
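The conditional value-at-risk (CVaR) objective mentioned above can be illustrated with a short sketch: for a set of simulated yield outcomes, CVaR at level alpha is the mean of the worst (1 - alpha) fraction of scenarios. The function name and the yield data below are hypothetical, chosen only to show the computation.

```python
def cvar(outcomes, alpha=0.8):
    """Mean of the worst (1 - alpha) fraction of outcomes (lower = worse)."""
    ordered = sorted(outcomes)                       # worst outcomes first
    k = max(1, int(round((1 - alpha) * len(ordered))))
    tail = ordered[:k]                               # the worst k scenarios
    return sum(tail) / len(tail)

# Hypothetical simulated yields (e.g. t/ha) across weather scenarios.
yields = [8.1, 9.4, 3.2, 7.7, 5.0, 9.9, 4.1, 8.8, 6.5, 9.0]
risk = cvar(yields, alpha=0.8)
```

For these ten scenarios at alpha = 0.8, the function averages the two worst yields (3.2 and 4.1), giving 3.65; a risk-averse planner maximizes this tail mean rather than the overall average.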
arXiv Detail & Related papers (2022-07-17T01:14:43Z)
- Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.