Modeling Time-Lapse Trajectories to Characterize Cranberry Growth
- URL: http://arxiv.org/abs/2510.08901v1
- Date: Fri, 10 Oct 2025 01:33:19 GMT
- Title: Modeling Time-Lapse Trajectories to Characterize Cranberry Growth
- Authors: Ronan John, Anis Chihoub, Ryan Meegan, Gina Sidelli, Jeffery Neyhart, Peter Oudemans, Kristin Dana
- Abstract summary: We introduce a method for modeling crop growth based on fine-tuning vision transformers (ViTs) using a self-supervised approach that avoids tedious image annotations. We use a two-fold pretext task (time regression and class prediction) to learn a latent space for the time-lapse evolution of plant and fruit appearance. The resulting 2D temporal tracks provide an interpretable time-series model of crop growth that can be used to: 1) predict growth over time and 2) distinguish temporal differences of cranberry varieties.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Change monitoring is an essential task for cranberry farming as it provides both breeders and growers with the ability to analyze growth, predict yield, and make treatment decisions. However, this task is often done manually, requiring significant time on the part of a cranberry grower or breeder. Deep learning based change monitoring holds promise, despite the caveat of hard-to-interpret high-dimensional features and hand-annotations for fine-tuning. To address this gap, we introduce a method for modeling crop growth based on fine-tuning vision transformers (ViTs) using a self-supervised approach that avoids tedious image annotations. We use a two-fold pretext task (time regression and class prediction) to learn a latent space for the time-lapse evolution of plant and fruit appearance. The resulting 2D temporal tracks provide an interpretable time-series model of crop growth that can be used to: 1) predict growth over time and 2) distinguish temporal differences of cranberry varieties. We also provide a novel time-lapse dataset of cranberry fruit featuring eight distinct varieties, observed 52 times over the growing season (span of around four months), annotated with information about fungicide application, yield, and rot. Our approach is general and can be applied to other crops and applications (code and dataset can be found at https://github.com/ronan-39/tlt/).
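The two-fold pretext task described in the abstract pairs a time-regression loss with a class-prediction loss computed from shared backbone features. The following is a minimal sketch of such a combined objective; the function names, linear heads, and equal loss weighting are illustrative assumptions, not details taken from the paper (which fine-tunes a ViT backbone rather than the linear heads used here).

```python
import numpy as np

def two_fold_pretext_loss(features, time_head_w, class_head_w,
                          true_times, true_classes):
    """Hypothetical sketch of a two-fold pretext objective: a
    time-regression loss plus a class-prediction loss computed from
    shared features. Head shapes and equal weighting are assumptions."""
    # Time-regression head: predict normalized capture time, MSE loss.
    pred_times = features @ time_head_w                     # shape (N,)
    reg_loss = np.mean((pred_times - true_times) ** 2)

    # Class-prediction head: softmax cross-entropy over labels.
    logits = features @ class_head_w                        # shape (N, C)
    logits = logits - logits.max(axis=1, keepdims=True)     # stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    cls_loss = -np.mean(
        np.log(probs[np.arange(len(true_classes)), true_classes]))

    return reg_loss + cls_loss  # equal weighting assumed

# Toy usage: 4 samples, 8-dim features, 3 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
loss = two_fold_pretext_loss(feats,
                             rng.normal(size=8),
                             rng.normal(size=(8, 3)),
                             np.array([0.1, 0.4, 0.7, 1.0]),
                             np.array([0, 1, 2, 1]))
```

In the paper's setting, `features` would come from the ViT backbone being fine-tuned, and both heads would be trained jointly so that the learned latent space encodes both when an image was taken and which variety it shows.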
Related papers
- ViewSparsifier: Killing Redundancy in Multi-View Plant Phenotyping [8.348234911002821]
Plant phenotyping involves analyzing observable characteristics of plants to better understand their growth, health, and development. In the context of deep learning, this analysis is often approached through single-view classification or regression models. To address this, the Growth Modelling (GroMo) Grand Challenge at ACM Multimedia 2025 provides a multi-view dataset featuring multiple plants.
arXiv Detail & Related papers (2025-09-10T12:53:38Z) - Agtech Framework for Cranberry-Ripening Analysis Using Vision Foundation Models [1.5728609542259502]
We develop a framework for characterizing the ripening process of cranberry crops using aerial and ground imaging. This work is the first of its kind and has future impact for cranberries and for other crops including wine grapes, olives, blueberries, and maize.
arXiv Detail & Related papers (2024-12-12T22:03:33Z) - Soybean Maturity Prediction using 2D Contour Plots from Drone based Time Series Imagery [5.604868960444558]
Plant breeding programs require assessments of days to maturity for accurate selection and placement of entries in appropriate tests. Traditionally, the estimation of maturity value for breeding varieties has involved breeders manually inspecting fields and assessing maturity value visually. This study developed a machine-learning model for evaluating soybean maturity using UAV-based time-series imagery.
arXiv Detail & Related papers (2024-12-12T19:23:50Z) - Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a causal Transformer for unified time series forecasting. Based on large-scale pre-training, Timer-XL achieves state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2024-10-07T07:27:39Z) - Optimizing Resource Consumption in Diffusion Models through Hallucination Early Detection [87.22082662250999]
We introduce HEaD (Hallucination Early Detection), a new paradigm designed to swiftly detect incorrect generations at the beginning of the diffusion process.
We demonstrate that using HEaD saves computational resources and accelerates the generation process to get a complete image.
Our findings reveal that HEaD can save up to 12% of the generation time in a two-object scenario.
arXiv Detail & Related papers (2024-09-16T18:00:00Z) - ReAugment: Model Zoo-Guided RL for Few-Shot Time Series Augmentation and Forecasting [74.00765474305288]
We present a pilot study on using reinforcement learning (RL) for time series data augmentation. Our method, ReAugment, tackles three critical questions: which parts of the training set should be augmented, how the augmentation should be performed, and what advantages RL brings to the process.
arXiv Detail & Related papers (2024-09-10T07:34:19Z) - Performative Time-Series Forecasting [64.03865043422597]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective. We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts. We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z) - Vision-Based Cranberry Crop Ripening Assessment [1.8434042562191815]
This work is the first of its kind in quantitative evaluation of ripening using computer vision methods.
It has impact beyond cranberry crops including wine grapes, olives, blueberries, and maize.
arXiv Detail & Related papers (2023-08-31T14:58:11Z) - Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z) - Temporal Prediction and Evaluation of Brassica Growth in the Field using Conditional Generative Adversarial Networks [1.2926587870771542]
The prediction of plant growth is a major challenge, as it is affected by numerous and highly variable environmental factors.
This paper proposes a novel monitoring approach that comprises high-throughput imaging sensor measurements and their automatic analysis. The core of our approach is a novel machine-learning growth model built on conditional generative adversarial networks.
arXiv Detail & Related papers (2021-05-17T13:00:01Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.