Towards an efficient and risk aware strategy for guiding farmers in
identifying best crop management
- URL: http://arxiv.org/abs/2210.04537v1
- Date: Mon, 10 Oct 2022 10:11:10 GMT
- Title: Towards an efficient and risk aware strategy for guiding farmers in
identifying best crop management
- Authors: Romain Gautron (Cirad, CIAT), Dorian Baudry (CNRS), Myriam Adam (UMR
AGAP, Cirad), Gatien N Falconnier (Cirad, CIMMYT), Marc Corbeels (Cirad,
IITA)
- Abstract summary: An "intuitive strategy" would be to set up multi-year field trials with an equal proportion of each practice under test.
Our objective was to provide an identification strategy using a bandit algorithm that was better at minimizing farmers' losses occurring during the identification.
This study is a methodological step which opens up new horizons for risk-aware ensemble identification of the performance of contrasting crop management practices in real conditions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identification of best performing fertilizer practices among a set of
contrasting practices with field trials is challenging as crop losses are
costly for farmers. To identify best management practices, an "intuitive
strategy" would be to set up multi-year field trials with an equal proportion of
each practice under test. Our objective was to provide an identification
strategy using a bandit algorithm that was better than the "intuitive strategy"
at minimizing the losses farmers incur during the identification.
We used a modification of the Decision Support Systems for Agro-Technological
Transfer (DSSAT) crop model to mimic field trial responses, with a case-study
in Southern Mali. We compared fertilizer practices using a risk-aware measure,
the Conditional Value-at-Risk (CVaR), and a novel agronomic metric, the Yield
Excess (YE). YE accounts for both grain yield and agronomic nitrogen use
efficiency. The bandit algorithm performed better than the intuitive strategy:
in most cases, it increased farmers' protection against the worst outcomes. This
study is a methodological step which opens up new horizons for risk-aware
ensemble identification of the performance of contrasting crop management
practices in real conditions.
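The abstract does not detail the specific bandit algorithm, the exact form of the Yield Excess metric, or the trial design, so the following is only a minimal sketch of the underlying idea: a risk-aware epsilon-greedy rule that allocates trial plots to the practice with the best empirical lower-tail CVaR (mean of the worst alpha-fraction of outcomes), compared against the uniform "intuitive" allocation. All practice names, distributions and parameters are hypothetical stand-ins for simulated DSSAT responses.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yield-excess distributions (kg/ha) for three fertilizer
# practices; in the paper these responses come from a modified DSSAT crop model.
practices = {
    "control":    lambda n: rng.normal(200.0, 150.0, n),
    "moderate_N": lambda n: rng.normal(350.0, 250.0, n),
    "high_N":     lambda n: rng.normal(400.0, 400.0, n),
}

def empirical_cvar(samples, alpha=0.2):
    """Mean of the worst alpha-fraction of outcomes (lower-tail CVaR)."""
    samples = np.sort(np.asarray(samples))
    k = max(1, int(np.ceil(alpha * len(samples))))
    return samples[:k].mean()

def run_trial(select, n_years=10, plots_per_year=6):
    """Simulate a multi-year trial; `select` chooses a practice for each plot."""
    history = {name: [] for name in practices}
    total = 0.0
    for _ in range(n_years * plots_per_year):
        name = select(history)
        outcome = float(practices[name](1)[0])
        history[name].append(outcome)
        total += outcome
    return total

# "Intuitive strategy": allocate plots to every practice in equal proportion.
_cycle = itertools.cycle(practices)
def uniform_select(history):
    return next(_cycle)

# Risk-aware epsilon-greedy bandit: mostly pick the practice with the best
# empirical CVaR so far, explore a random practice with small probability.
def cvar_greedy_select(history, eps=0.1):
    untried = [name for name, obs in history.items() if not obs]
    if untried or rng.random() < eps:
        return rng.choice(untried or list(practices))
    return max(history, key=lambda name: empirical_cvar(history[name]))

for label, policy in [("uniform", uniform_select), ("cvar-greedy", cvar_greedy_select)]:
    print(label, "cumulative yield excess:", round(run_trial(policy), 1))
```

The cumulative yield excess accumulated during the trial stands in for the farmers' losses avoided during identification; the risk-aware rule concentrates plots on practices whose worst-case outcomes look best so far, whereas the uniform rule keeps testing poor practices for the full trial length.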
Related papers
- A Comparative Study of Deep Reinforcement Learning for Crop Production Management [13.123171643387668]
Reinforcement learning (RL) has emerged as a promising tool for developing adaptive crop management policies.
In the gym-DSSAT crop model environment, one of the most widely used simulators for crop management, proximal policy optimization (PPO) and deep Q-networks (DQN) have shown promising results.
In this study, we evaluated PPO and DQN against static baseline policies on three RL tasks provided by the gym-DSSAT environment: fertilization, irrigation, and mixed management.
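For context, a minimal sketch of the kind of setup that study describes, using Stable-Baselines3's PPO as a stand-in implementation; the environment id, the installation of gym-DSSAT, and the use of the older Gym step/reset API are assumptions, not taken from the paper.

```python
# Minimal sketch only: assumes gym-DSSAT is installed and registers a
# fertilization environment under this (hypothetical) id, and that the
# environment follows the older Gym reset()/step() API.
import gym
from stable_baselines3 import PPO

env = gym.make("GymDssatPdi-v0")          # hypothetical environment id
model = PPO("MlpPolicy", env, verbose=1)  # a static baseline policy would skip learning
model.learn(total_timesteps=100_000)

# Roll out the learned policy for one growing season (one episode).
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```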
arXiv Detail & Related papers (2024-11-06T18:35:51Z) - AgGym: An agricultural biotic stress simulation environment for ultra-precision management planning [8.205412609306713]
We present AgGym, a modular, crop and stress simulation framework to model the spread of biotic stresses in a field.
We show that AgGym can be customized with limited data to simulate yield outcomes under various biotic stress conditions.
Our proposed framework enables personalized decision support that can transform biotic stress management from being schedule based to opportunistic and prescriptive.
arXiv Detail & Related papers (2024-09-01T14:55:45Z) - Learning-based agricultural management in partially observable
environments subject to climate variability [5.5062239803516615]
Agricultural management holds a central role in shaping crop yield, economic profitability, and environmental sustainability.
We introduce an innovative framework that integrates Deep Reinforcement Learning (DRL) with Recurrent Neural Networks (RNNs).
Our study illuminates the need for agent retraining to acquire new optimal policies under extreme weather events.
arXiv Detail & Related papers (2024-01-02T16:18:53Z) - Mimicking Better by Matching the Approximate Action Distribution [48.95048003354255]
We introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations.
We show that it requires considerably fewer interactions to achieve expert performance, outperforming current state-of-the-art on-policy methods.
arXiv Detail & Related papers (2023-06-16T12:43:47Z) - Improved Policy Evaluation for Randomized Trials of Algorithmic Resource
Allocation [54.72195809248172]
We present a new estimator that leverages a novel concept: retrospectively reshuffling participants across experimental arms at the end of an RCT.
We prove theoretically that such an estimator is more accurate than common estimators based on sample means.
arXiv Detail & Related papers (2023-02-06T05:17:22Z) - Evaluating COVID-19 vaccine allocation policies using Bayesian $m$-top
exploration [53.122045119395594]
We present a novel technique for evaluating vaccine allocation strategies using a multi-armed bandit framework.
$m$-top exploration allows the algorithm to learn $m$ policies for which it expects the highest utility.
We consider the Belgian COVID-19 epidemic using the individual-based model STRIDE, where we learn a set of vaccination policies.
arXiv Detail & Related papers (2023-01-30T12:22:30Z) - Evaluating Digital Agriculture Recommendations with Causal Inference [0.9213852038999553]
We propose an observational causal inference framework for the empirical evaluation of the impact of digital tools on target farm performance indicators.
As a case study, we designed and implemented a recommendation system for the optimal sowing time of cotton based on numerical weather predictions.
Using the back-door criterion, we identify the impact of sowing recommendations on the yield and subsequently estimate it using linear regression, matching, inverse propensity score weighting and meta-learners.
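As a generic illustration of the inverse propensity score weighting step mentioned above (not that paper's implementation; the data, covariates and treatment variable are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical observational data: X = farm covariates, t = 1 if the farmer
# followed the sowing recommendation, y = cotton yield.
rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = 2.0 * t + X[:, 0] + rng.normal(size=n)

# Estimate propensity scores e(X) = P(t = 1 | X), then reweight outcomes by
# 1/e(X) for treated and 1/(1 - e(X)) for untreated units (Horvitz-Thompson IPW).
e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print("IPW estimate of the average treatment effect:", round(ate, 2))
```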
arXiv Detail & Related papers (2022-11-30T12:20:08Z) - Risk-averse Stochastic Optimization for Farm Management Practices and
Cultivar Selection Under Uncertainty [8.427937898153779]
We develop optimization frameworks under uncertainty, using conditional value-at-risk in the objective function.
As a case study, we set up the crop model for 25 locations across the US Corn Belt.
Results indicated that the proposed model produced meaningful connections between weather and optimal decisions.
arXiv Detail & Related papers (2022-07-17T01:14:43Z) - Principal-Agent Hypothesis Testing [54.154244569974864]
We consider the relationship between a regulator (the principal) and an experimenter (the agent) such as a pharmaceutical company.
The efficacy of the drug is not known to the regulator, so the pharmaceutical company must run a costly trial to prove efficacy to the regulator.
We show how to design protocols that are robust to an agent's strategic actions, and derive the optimal protocol in the presence of strategic entrants.
arXiv Detail & Related papers (2022-05-13T17:59:23Z) - Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
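A generic sketch of an importance-sampling-weighted ERM objective on adaptively collected data (an assumption-laden illustration, not that paper's estimator: the logging propensities, model class and optimizer are hypothetical):

```python
import numpy as np

# Each logged example carries the propensity with which it was collected by the
# adaptive policy; the loss is reweighted by the inverse of that propensity
# before averaging, so the objective is unbiased for the target distribution.
rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(size=n)
propensities = rng.uniform(0.2, 1.0, size=n)   # hypothetical logging probabilities

def weighted_erm(X, y, propensities, lr=0.05, steps=500):
    """Minimize the inverse-propensity-weighted mean squared error by gradient descent."""
    w = np.zeros(X.shape[1])
    weights = 1.0 / propensities
    for _ in range(steps):
        residual = X @ w - y
        grad = X.T @ (weights * residual) / len(y)
        w -= lr * grad
    return w

w_hat = weighted_erm(X, y, propensities)
print("parameter error:", np.linalg.norm(w_hat - w_true))
```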
arXiv Detail & Related papers (2021-06-03T09:50:13Z)