Multi-Task Learning for Budbreak Prediction
- URL: http://arxiv.org/abs/2301.01815v1
- Date: Wed, 4 Jan 2023 20:28:17 GMT
- Title: Multi-Task Learning for Budbreak Prediction
- Authors: Aseem Saxena, Paola Pesantez-Cabrera, Rohan Ballapragada, Markus
Keller, Alan Fern
- Abstract summary: This work investigates deep learning for budbreak prediction using data collected for multiple grape cultivars.
To address this issue, we investigate multi-task learning, which combines data across all cultivars to make predictions for individual cultivars.
Our main result shows that several variants of multi-task learning are all able to significantly improve prediction accuracy compared to learning for each cultivar independently.
- Score: 18.329763523260624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grapevine budbreak is a key phenological stage of seasonal development, which
serves as a signal for the onset of active growth. This is also when grape
plants are most vulnerable to damage from freezing temperatures. Hence, it is
important for winegrowers to anticipate the day of budbreak occurrence to
protect their vineyards from late spring frost events. This work investigates
deep learning for budbreak prediction using data collected for multiple grape
cultivars. While some cultivars have over 30 seasons of data, others have as
few as 4 seasons, which can adversely impact prediction accuracy. To address
this issue, we investigate multi-task learning, which combines data across all
cultivars to make predictions for individual cultivars. Our main result shows
that several variants of multi-task learning are all able to significantly
improve prediction accuracy compared to learning for each cultivar
independently.
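The abstract describes the multi-task idea only at a high level. The sketch below illustrates one common way to realize it, assuming a shared recurrent encoder over daily weather sequences with a separate prediction head per cultivar; the layer choices, feature counts, cultivar count, and training loop are illustrative assumptions, not the paper's exact architecture.
```python
# Minimal multi-task sketch (assumptions, not the paper's exact model):
# a shared LSTM encoder over daily weather features with one linear head
# per cultivar, so all cultivars' data trains the shared encoder.
import torch
import torch.nn as nn

class MultiTaskBudbreakModel(nn.Module):
    def __init__(self, num_cultivars: int, in_features: int = 10, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(in_features, hidden, batch_first=True)  # shared across cultivars
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(num_cultivars)]  # one head per cultivar
        )

    def forward(self, weather: torch.Tensor, cultivar_id: int) -> torch.Tensor:
        # weather: (batch, days, in_features) daily weather sequence for one cultivar
        hidden_states, _ = self.encoder(weather)
        return self.heads[cultivar_id](hidden_states).squeeze(-1)  # per-day budbreak logits

model = MultiTaskBudbreakModel(num_cultivars=3)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training step on synthetic data: every cultivar's seasons update the
# shared encoder, while only the matching head is updated for its own batch.
for cultivar_id in range(3):
    weather = torch.randn(8, 250, 10)              # 8 seasons x 250 days x 10 features
    budbreak = (torch.rand(8, 250) > 0.5).float()  # 1 once budbreak has occurred
    loss = loss_fn(model(weather, cultivar_id), budbreak)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
The point of this shape is that the encoder parameters receive gradients from every cultivar's seasons, which is how cultivars with only a few seasons of data can borrow statistical strength from the data-rich ones.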
Related papers
- Transfer Learning via Auxiliary Labels with Application to Cold-Hardiness Prediction [13.08917874547845]
Cold temperatures can cause significant frost damage to fruit crops depending on their resilience, or cold hardiness, which changes throughout the season.
This has led to the development of predictive cold-hardiness models, which help farmers decide when to deploy expensive frost-mitigation measures.
Unfortunately, cold-hardiness data for model training is only available for some fruit cultivars due to the need for specialized equipment and expertise.
In this work, we introduce a new transfer-learning framework, Transfer via Auxiliary Labels (TAL), that allows farmers to leverage the phenological data to produce more accurate cold-hardiness predictions.
arXiv Detail & Related papers (2025-04-17T17:51:38Z)
- Cherry Yield Forecast: Harvest Prediction for Individual Sweet Cherry Trees [0.0]
This paper is part of a publication series from the For5G project that has the goal of creating digital twins of sweet cherry trees.
It is concluded that accurate yield prediction for sweet cherry trees is possible when objects are counted manually, and that automated feature extraction with similar accuracy remains an open problem.
arXiv Detail & Related papers (2025-03-26T10:50:02Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Winter Wheat Crop Yield Prediction on Multiple Heterogeneous Datasets using Machine Learning [0.2580765958706853]
Winter wheat is one of the most important crops in the United Kingdom, and crop yield prediction is essential for the nation's food security.
Several studies have employed machine learning (ML) techniques to predict crop yield on a county or farm-based level.
The main objective of this study is to predict winter wheat crop yield using ML models on multiple heterogeneous datasets.
arXiv Detail & Related papers (2023-06-20T23:52:39Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Fruit Ripeness Classification: a Survey [59.11160990637616]
Many automatic methods have been proposed that employ a variety of feature descriptors for the food item to be graded.
Machine learning and deep learning techniques dominate the top-performing methods.
Deep learning can operate on raw data, relieving users of the need to compute complex engineered features.
arXiv Detail & Related papers (2022-12-29T19:32:20Z)
- Grape Cold Hardiness Prediction via Multi-Task Learning [18.979780350924635]
Cold temperatures during fall and spring have the potential to cause frost damage to grapevines and other fruit plants.
Farmers deploy expensive frost mitigation measures, such as sprinklers, heaters, and wind machines, when they judge that damage may occur.
Scientists have developed cold hardiness prediction models that can be tuned to different grape cultivars based on laborious field measurement data.
arXiv Detail & Related papers (2022-09-21T18:18:52Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions tend to treat both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss that separates the features of correct predictions from those of incorrect predictions using two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- Comparison of Machine Learning Methods for Predicting Winter Wheat Yield in Germany [0.0]
This study analyzed the performance of different machine learning methods for winter wheat yield prediction.
To address the seasonality, weekly features were used that explicitly take soil moisture conditions and meteorological events into account.
arXiv Detail & Related papers (2021-05-04T04:40:53Z)
- Seed Stocking Via Multi-Task Learning [4.198742468051408]
Sellers of crop seeds need to plan for the variety and quantity of seeds to stock at least a year in advance.
Given the unpredictability of weather, farmers need to make decisions that balance high yield and low risk.
A seed vendor needs to anticipate farmers' needs and have the corresponding seeds ready.
arXiv Detail & Related papers (2021-01-12T07:26:38Z)
- Abiotic Stress Prediction from RGB-T Images of Banana Plantlets [15.073709640728241]
We present several methods and strategies for abiotic stress prediction in banana plantlets.
The dataset consists of RGB and thermal images, taken once daily of each plant.
arXiv Detail & Related papers (2020-11-23T18:15:33Z)
- Predicting MOOCs Dropout Using Only Two Easily Obtainable Features from the First Week's Activities [56.1344233010643]
Several features are considered to contribute towards learner attrition or lack of interest, which may lead to disengagement or total dropout.
This study aims to predict dropout early on, from the first week, by comparing several machine-learning approaches.
arXiv Detail & Related papers (2020-08-12T10:44:49Z)
- Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping [62.78338049381917]
Fine-tuning pretrained contextual word embedding models to supervised downstream tasks has become commonplace in natural language processing.
We experiment with four datasets from the GLUE benchmark, fine-tuning BERT hundreds of times on each while varying only the random seeds.
We find substantial performance increases compared to previously reported results, and we quantify how the performance of the best-found model varies as a function of the number of fine-tuning trials.
arXiv Detail & Related papers (2020-02-15T02:40:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.