Accessing the Effect of Phyllotaxy and Planting Density on Light Use Efficiency in Field-Grown Maize using 3D Reconstructions
- URL: http://arxiv.org/abs/2503.06887v1
- Date: Mon, 10 Mar 2025 03:32:44 GMT
- Title: Accessing the Effect of Phyllotaxy and Planting Density on Light Use Efficiency in Field-Grown Maize using 3D Reconstructions
- Authors: Nasla Saleem, Talukder Zaki Jubery, Aditya Balu, Yan Zhou, Yawei Li, Patrick S. Schnable, Adarsh Krishnamurthy, Baskar Ganapathysubramanian
- Abstract summary: This study integrates realistic 3D reconstructions of field-grown maize with photosynthetically active radiation (PAR) modeling. Using this framework, we present detailed analyses of the impact of canopy orientations, plant and row spacings, and planting row directions on PAR interception.
- Score: 16.27322651520103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-density planting is a widely adopted strategy to enhance maize productivity, yet it introduces challenges such as increased interplant competition and shading, which can limit light capture and overall yield potential. In response, some maize plants naturally reorient their canopies to optimize light capture, a process known as canopy reorientation. Understanding this adaptive response and its impact on light capture is crucial for maximizing agricultural yield potential. This study introduces an end-to-end framework that integrates realistic 3D reconstructions of field-grown maize with photosynthetically active radiation (PAR) modeling to assess the effects of phyllotaxy and planting density on light interception. In particular, using 3D point clouds derived from field data, virtual fields for a diverse set of maize genotypes were constructed and validated against field PAR measurements. Using this framework, we present detailed analyses of the impact of canopy orientations, plant and row spacings, and planting row directions on PAR interception throughout a typical growing season. Our findings highlight significant variations in light interception efficiency across different planting densities and canopy orientations. By elucidating the relationship between canopy architecture and light capture, this study offers valuable guidance for optimizing maize breeding and cultivation strategies across diverse agricultural settings.
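For intuition about how planting density and canopy geometry translate into intercepted PAR, the sketch below uses a Beer-Lambert canopy extinction model as a simplified stand-in for the paper's ray-traced PAR modeling over reconstructed 3D canopies. All function names, densities, leaf areas, and extinction coefficients are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative sketch (not the paper's code): how planting density and a
# canopy extinction coefficient translate into an intercepted-PAR fraction
# under a Beer-Lambert canopy model. The study itself ray-traces PAR
# through 3D plant reconstructions; this analytic model is only a stand-in.

def leaf_area_index(plants_per_m2, leaf_area_per_plant_m2):
    """One-sided leaf area per unit ground area (LAI)."""
    return plants_per_m2 * leaf_area_per_plant_m2

def intercepted_par_fraction(lai, k=0.6):
    """Beer-Lambert interception: f = 1 - exp(-k * LAI).
    The extinction coefficient k depends on leaf angle distribution and
    canopy orientation; the values used below are assumed."""
    return 1.0 - np.exp(-k * lai)

if __name__ == "__main__":
    densities = np.array([6.0, 8.0, 10.0, 12.0])  # plants per m^2 (hypothetical)
    leaf_area = 0.55                               # m^2 per plant (assumed)
    # Hypothetical extinction coefficients for two canopy orientations.
    for k, label in [(0.7, "planophile-like canopy"),
                     (0.5, "reoriented / more erect canopy")]:
        f = intercepted_par_fraction(leaf_area_index(densities, leaf_area), k=k)
        print(f"{label}: intercepted PAR fraction {np.round(f, 3)}")
```

The framework described in the abstract replaces this one-dimensional approximation with PAR computed directly on reconstructed 3D plants placed in virtual fields, which is what allows it to resolve effects of phyllotaxy, row direction, and canopy reorientation that a simple extinction model cannot.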
Related papers
- Machine Learning Models for Soil Parameter Prediction Based on Satellite, Weather, Clay and Yield Data [1.546169961420396]
The AgroLens project endeavors to develop Machine Learning-based methodologies to predict soil nutrient levels without reliance on laboratory tests.
The approach begins with the development of a robust European model using the LUCAS Soil dataset and Sentinel-2 satellite imagery.
Advanced algorithms, including Random Forests, Extreme Gradient Boosting (XGBoost), and Fully Connected Neural Networks (FCNN), were implemented and finetuned for precise nutrient prediction.
arXiv Detail & Related papers (2025-03-28T09:44:32Z) - AgriField3D: A Curated 3D Point Cloud and Procedural Model Dataset of Field-Grown Maize from a Diversity Panel [12.89812013060155]
AgriField3D is a curated dataset of 3D point clouds of field-grown maize plants from a diverse genetic panel.
Our dataset comprises over 1,000 high-quality point clouds collected using a Terrestrial Laser Scanner.
arXiv Detail & Related papers (2025-03-10T19:53:20Z) - Procedural Generation of 3D Maize Plant Architecture from LIDAR Data [16.458252508124794]
This study introduces a robust framework for generating procedural 3D models of maize (Zea mays) plants from LiDAR point cloud data. Our framework leverages Non-Uniform Rational B-Spline (NURBS) surfaces to model the leaves of maize plants; a minimal surface-fitting sketch illustrating this idea follows the list below.
arXiv Detail & Related papers (2025-01-21T22:53:09Z) - Generating Diverse Agricultural Data for Vision-Based Farming Applications [74.79409721178489]
This model is capable of simulating distinct growth stages of plants, diverse soil conditions, and randomized field arrangements under varying lighting conditions.
Our dataset includes 12,000 images with semantic labels, offering a comprehensive resource for computer vision tasks in precision agriculture.
arXiv Detail & Related papers (2024-03-27T08:42:47Z) - AlignMiF: Geometry-Aligned Multimodal Implicit Field for LiDAR-Camera Joint Synthesis [98.3959800235485]
Recent methods explore multiple modalities within a single implicit field, aiming to share implicit features across modalities to enhance reconstruction performance.
In this work, we conduct comprehensive analyses of the multimodal implicit field for LiDAR-camera joint synthesis, revealing that the underlying issue lies in the misalignment of the different sensors.
We introduce AlignMiF, a geometrically aligned multimodal implicit field with two proposed modules: Geometry-Aware Alignment (GAA) and Shared Geometry Initialization (SGI).
arXiv Detail & Related papers (2024-02-27T13:08:47Z) - BonnBeetClouds3D: A Dataset Towards Point Cloud-based Organ-level Phenotyping of Sugar Beet Plants under Field Conditions [30.27773980916216]
Agricultural production faces severe challenges in the coming decades, induced by climate change and the need for sustainability.
Advancements in field management through non-chemical weeding by robots in combination with monitoring of crops by autonomous unmanned aerial vehicles (UAVs) are helpful to address these challenges.
The analysis of plant traits, called phenotyping, is an essential activity in plant breeding; however, it involves a great amount of manual labor.
arXiv Detail & Related papers (2023-12-22T14:06:44Z) - Precision Agriculture: Crop Mapping using Machine Learning and Sentinel-2 Satellite Imagery [5.914742040076052]
This study employs deep learning and pixel-based machine learning methods to accurately segment lavender fields for precision agriculture.
Our fine-tuned final model, a U-Net architecture, can achieve a Dice coefficient of 0.8324.
arXiv Detail & Related papers (2023-11-25T20:26:11Z) - High-fidelity 3D Reconstruction of Plants using Neural Radiance Field [10.245620447865456]
We present a novel plant dataset comprising real plant images from production environments.
This dataset is a first-of-its-kind initiative aimed at comprehensively exploring the advantages and limitations of NeRF in agricultural contexts.
arXiv Detail & Related papers (2023-11-07T17:31:27Z) - Generating high-quality 3DMPCs by adaptive data acquisition and NeREF-based radiometric calibration with UGV plant phenotyping system [3.7387019397567793]
This study proposed a novel approach for adaptive data acquisition and radiometric calibration to generate high-quality 3DMPCs of plants.
The integrity of the whole-plant data was improved by an average of 23.6% compared to the fixed viewpoints alone.
The 3D-calibrated plant 3DMPCs improved the predictive accuracy of PLSR for chlorophyll content, with an average increase of 0.07 in R2 and an average decrease of 21.25% in RMSE.
arXiv Detail & Related papers (2023-05-11T12:59:21Z) - Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z) - Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use deep learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of all parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
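One of the related entries above models maize leaves with NURBS surfaces. As a rough, self-contained illustration of that idea (not that paper's pipeline), the sketch below fits a smoothing bivariate B-spline, a NURBS surface with unit weights, to a synthetic leaf-like point cloud using SciPy. The synthetic data, smoothing factor, and grid sizes are all assumptions made for demonstration.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Synthetic "leaf-like" point cloud: a gently arching, tapering blade.
# (Purely illustrative; a real pipeline would start from LiDAR points.)
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 400)          # position along the leaf length
v = rng.uniform(-1.0, 1.0, 400)         # position across the leaf width
x = u
y = 0.15 * (1.0 - u) * v                # blade narrows toward the tip
z = 0.3 * np.sin(np.pi * u) + 0.02 * rng.normal(size=u.size)  # arch + noise

# Fit a smoothing bivariate B-spline surface z = S(x, y). B-splines are
# NURBS with all weights equal to one, so this stands in for the NURBS
# leaf surfaces described above.
tck = bisplrep(x, y, z, kx=3, ky=3, s=0.5)

# Evaluate the fitted surface on a regular grid for downstream use,
# e.g. meshing the leaf for light-interception simulations.
xg = np.linspace(x.min(), x.max(), 50)
yg = np.linspace(y.min(), y.max(), 20)
zg = bisplev(xg, yg, tck)
print(zg.shape)  # (50, 20)
```

In a full procedural-modeling pipeline, the fitted surface would typically be promoted to a true rational (non-unit-weight) NURBS representation or meshed before being used in field-scale simulations.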
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.