ViewSparsifier: Killing Redundancy in Multi-View Plant Phenotyping
- URL: http://arxiv.org/abs/2509.08550v1
- Date: Wed, 10 Sep 2025 12:53:38 GMT
- Title: ViewSparsifier: Killing Redundancy in Multi-View Plant Phenotyping
- Authors: Robin-Nico Kampa, Fabian Deuser, Konrad Habel, Norbert Oswald
- Abstract summary: Plant phenotyping involves analyzing observable characteristics of plants to better understand their growth, health, and development. In the context of deep learning, this analysis is often approached through single-view classification or regression models. To address this, the Growth Modelling (GroMo) Grand Challenge at ACM Multimedia 2025 provides a multi-view dataset featuring multiple plants.
- Score: 8.348234911002821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plant phenotyping involves analyzing observable characteristics of plants to better understand their growth, health, and development. In the context of deep learning, this analysis is often approached through single-view classification or regression models. However, these methods often fail to capture all information required for accurate estimation of target phenotypic traits, which can adversely affect plant health assessment and harvest readiness prediction. To address this, the Growth Modelling (GroMo) Grand Challenge at ACM Multimedia 2025 provides a multi-view dataset featuring multiple plants and two tasks: Plant Age Prediction and Leaf Count Estimation. Each plant is photographed from multiple heights and angles, leading to significant overlap and redundancy in the captured information. To learn view-invariant embeddings, we randomly select among the 24 views at a given height level, encoding the choice as a selection vector. Our ViewSparsifier approach won both tasks. For further improvement, and as a direction for future research, we also experimented with randomized view selection across all five height levels (120 views in total), referred to as selection matrices.
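The abstract does not state how many views are drawn per training sample, so the sketch below only illustrates the selection-vector and selection-matrix encodings it describes: a binary vector marking which of the 24 angle views at one height level are kept, and a 5 x 24 binary matrix extending that choice to all height levels. The sample sizes `k` and `k_per_height` are hypothetical, not values from the paper.

```python
import random

NUM_ANGLES = 24   # angle views per height level in the GroMo dataset
NUM_HEIGHTS = 5   # height levels; 5 * 24 = 120 views in total

def selection_vector(k, num_angles=NUM_ANGLES):
    """Randomly keep k of the angle views at one height level,
    encoded as a binary selection vector of length num_angles."""
    chosen = random.sample(range(num_angles), k)
    vec = [0] * num_angles
    for idx in chosen:
        vec[idx] = 1
    return vec

def selection_matrix(k_per_height, num_heights=NUM_HEIGHTS,
                     num_angles=NUM_ANGLES):
    """Extend the idea across height levels: one selection vector per
    height, stacked into a num_heights x num_angles selection matrix."""
    return [selection_vector(k_per_height, num_angles)
            for _ in range(num_heights)]

vec = selection_vector(k=8)          # 8 of 24 views at one height
mat = selection_matrix(k_per_height=4)  # 4 views per height, 20 of 120 total
print(sum(vec), sum(sum(row) for row in mat))
```

In practice such a vector or matrix would index into the stack of view images (or their embeddings) before they are fused by the model; resampling it each epoch is what exposes the network to varying view subsets.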
Related papers
- Modeling Time-Lapse Trajectories to Characterize Cranberry Growth [0.14658400971135646]
We introduce a method for modeling crop growth based on fine-tuning vision transformers (ViTs) using a self-supervised approach that avoids tedious image annotations. We use a two-fold pretext task (time regression and class prediction) to learn a latent space for the time-lapse evolution of plant and fruit appearance. The resulting 2D temporal tracks provide an interpretable time-series model of crop growth that can be used to: 1) predict growth over time and 2) distinguish temporal differences of cranberry varieties.
arXiv Detail & Related papers (2025-10-10T01:33:19Z)
- Overview of PlantCLEF 2024: multi-species plant identification in vegetation plot images [2.7110107174608173]
The PlantCLEF 2024 challenge leverages a new test set of thousands of multi-label images annotated by experts and covering over 800 species. It provides a large training set of 1.7 million individual plant images as well as state-of-the-art vision transformer models pre-trained on this data. The aim is to predict all the plant species present on a high-resolution plot image.
arXiv Detail & Related papers (2025-09-19T08:51:41Z)
- GroMo: Plant Growth Modeling with Multiview Images [3.7287379829068805]
We present the Growth Modelling (GroMo) challenge, which is designed for two primary tasks: plant age prediction and leaf count estimation. The GroMo Challenge aims to advance plant phenotyping research by encouraging innovative solutions for tracking and predicting plant growth.
arXiv Detail & Related papers (2025-03-09T13:23:16Z)
- Agtech Framework for Cranberry-Ripening Analysis Using Vision Foundation Models [1.5728609542259502]
We develop a framework for characterizing the ripening process of cranberry crops using aerial and ground imaging. This work is the first of its kind and has future impact for cranberries and for other crops including wine grapes, olives, blueberries, and maize.
arXiv Detail & Related papers (2024-12-12T22:03:33Z)
- What Matters When Repurposing Diffusion Models for General Dense Perception Tasks? [49.84679952948808]
Recent works show promising results by simply fine-tuning T2I diffusion models for dense perception tasks. We conduct a thorough investigation into critical factors that affect transfer efficiency and performance when using diffusion priors. Our work culminates in the development of GenPercept, an effective deterministic one-step fine-tuning paradigm tailored for dense visual perception tasks.
arXiv Detail & Related papers (2024-03-10T04:23:24Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaves images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Adaptive Transfer Learning for Plant Phenotyping [33.28898554551106]
We study the knowledge transferability of modern machine learning models in plant phenotyping.
How is the performance of conventional machine learning models affected by the number of annotated samples for plant phenotyping?
Could the neural network based transfer learning models improve the performance of plant phenotyping?
arXiv Detail & Related papers (2022-01-14T00:40:40Z)
- Temporal Prediction and Evaluation of Brassica Growth in the Field using Conditional Generative Adversarial Networks [1.2926587870771542]
The prediction of plant growth is a major challenge, as it is affected by numerous and highly variable environmental factors.
This paper proposes a novel monitoring approach that comprises high-throughput imaging sensor measurements and their automatic analysis.
Our approach's core is a novel machine learning-based growth model based on conditional generative adversarial networks.
arXiv Detail & Related papers (2021-05-17T13:00:01Z)
- Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection [54.38512834521367]
We study contrast candidate generation and selection as a model-agnostic post-processing technique.
We learn a discriminative correction model by generating alternative candidate summaries.
This model is then used to select the best candidate as the final output summary.
arXiv Detail & Related papers (2021-04-19T05:39:24Z)
- MGD-GAN: Text-to-Pedestrian generation through Multi-Grained Discrimination [96.91091607251526]
We propose the Multi-Grained Discrimination enhanced Generative Adversarial Network, which capitalizes on a human-part-based Discriminator (HPD) and a self-cross-attended Discriminator.
A fine-grained word-level attention mechanism is employed in the HPD module to enforce diversified appearance and vivid details.
The substantial improvement over the various metrics demonstrates the efficacy of MGD-GAN on the text-to-pedestrian synthesis scenario.
arXiv Detail & Related papers (2020-10-02T12:24:48Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.