Demeter: A Parametric Model of Crop Plant Morphology from the Real World
- URL: http://arxiv.org/abs/2510.16377v1
- Date: Sat, 18 Oct 2025 07:14:40 GMT
- Title: Demeter: A Parametric Model of Crop Plant Morphology from the Real World
- Authors: Tianhang Cheng, Albert J. Zhai, Evan Z. Chen, Rui Zhou, Yawen Deng, Zitong Li, Kejie Zhao, Janice Shiu, Qianyu Zhao, Yide Xu, Xinlei Wang, Yuan Shen, Sheng Wang, Lisa Ainsworth, Kaiyu Guan, Shenlong Wang
- Abstract summary: We present Demeter, a data-driven parametric model that encodes key factors of plant morphology. Experiments show that Demeter effectively synthesizes shapes, reconstructs structures, and simulates biophysical processes.
- Score: 34.57672800976057
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning 3D parametric shape models of objects has gained popularity in vision and graphics and has shown broad utility in 3D reconstruction, generation, understanding, and simulation. While powerful models exist for humans and animals, equally expressive approaches for modeling plants are lacking. In this work, we present Demeter, a data-driven parametric model that encodes key factors of plant morphology, including topology, shape, articulation, and deformation, into a compact learned representation. Unlike previous parametric models, Demeter handles varying shape topology across species and models three sources of shape variation: articulation, subcomponent shape variation, and non-rigid deformation. To advance crop plant modeling, we collected a large-scale, ground-truthed dataset from a soybean farm as a testbed. Experiments show that Demeter effectively synthesizes shapes, reconstructs structures, and simulates biophysical processes. Code and data are available at https://tianhang-cheng.github.io/Demeter/.
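As a hypothetical illustration of the idea described in the abstract (not the authors' implementation; all names and dimensions here are invented), a compact parametric shape model can compose separate parameter vectors for subcomponent shape variation, non-rigid deformation, and articulation into a single posed mesh:

```python
# Toy sketch of a parametric plant model with three sources of
# shape variation, in the spirit of the abstract. Illustrative only.
import numpy as np

class ParametricPlant:
    """Template mesh deformed by linear shape blendshapes, a linear
    non-rigid deformation basis, and a single articulation joint."""

    def __init__(self, template_vertices, shape_basis, deform_basis):
        self.template = template_vertices   # (V, 3) rest geometry
        self.shape_basis = shape_basis      # (S, V, 3) shape modes
        self.deform_basis = deform_basis    # (D, V, 3) deformation modes

    def decode(self, shape_coeffs, deform_coeffs, joint_angle=0.0):
        """Compose the three variation sources into a posed mesh."""
        v = self.template.copy()
        # Subcomponent shape variation: linear blendshapes.
        v += np.einsum("s,svc->vc", shape_coeffs, self.shape_basis)
        # Non-rigid deformation: a second linear basis.
        v += np.einsum("d,dvc->vc", deform_coeffs, self.deform_basis)
        # Articulation: rigid rotation about the z-axis (one "joint").
        c, s = np.cos(joint_angle), np.sin(joint_angle)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return v @ R.T

# Tiny usage example with random bases.
rng = np.random.default_rng(0)
plant = ParametricPlant(
    template_vertices=rng.normal(size=(8, 3)),
    shape_basis=rng.normal(size=(4, 8, 3)),
    deform_basis=rng.normal(size=(2, 8, 3)),
)
mesh = plant.decode(np.zeros(4), np.zeros(2), joint_angle=np.pi / 2)
```

In a learned model such as Demeter, the bases and template would come from data and the decoder would be a neural network; the sketch only shows how the three factors combine into one output mesh.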
Related papers
- Symmetrization of 3D Generative Models [5.431496585727342]
We propose a novel data-centric approach to promote symmetry in 3D generative models by modifying the training data rather than the model architecture. Our method begins with an analysis of reflectional symmetry in both real-world 3D shapes and samples generated by state-of-the-art models.
arXiv Detail & Related papers (2025-12-22T02:05:02Z) - FloraForge: LLM-Assisted Procedural Generation of Editable and Analysis-Ready 3D Plant Geometric Models For Agricultural Applications [13.923496304391044]
We present FloraForge, an LLM-assisted framework that enables domain experts to generate biologically accurate, fully parametric 3D plant models. Our framework leverages LLM-enabled co-design to refine Python scripts that generate parameterized plants as hierarchical B-spline surface representations. We demonstrate the framework on maize, soybean, and mung bean, fitting procedural models to empirical point cloud data.
arXiv Detail & Related papers (2025-12-11T23:28:25Z) - NeuraLeaf: Neural Parametric Leaf Models with Shape and Deformation Disentanglement [27.664230325256067]
We develop a neural parametric model for 3D leaves called NeuraLeaf. NeuraLeaf disentangles the leaves' geometry into their 2D base shapes and 3D deformations. We show that NeuraLeaf successfully generates a wide range of leaf shapes with deformation, resulting in accurate model fitting to 3D observations.
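The disentanglement described in the NeuraLeaf summary can be pictured with a hypothetical minimal example (names and the quadratic bend are invented for illustration, not taken from the paper): a flat 2D base shape lives in the z = 0 plane, and a separate 3D deformation lifts it without altering the base outline.

```python
# Illustrative split of a leaf into a 2D base shape and a 3D deformation.
import numpy as np

def make_flat_leaf(n=50):
    """2D base shape: an elliptical leaf outline embedded at z = 0."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xy = np.stack([np.cos(t), 0.5 * np.sin(t)], axis=1)    # (n, 2)
    return np.concatenate([xy, np.zeros((n, 1))], axis=1)  # (n, 3)

def apply_deformation(flat_pts, curl=0.5):
    """3D deformation on top of the base shape: bend the leaf along
    its length by lifting z with a quadratic, leaving x, y unchanged."""
    out = flat_pts.copy()
    out[:, 2] = curl * flat_pts[:, 0] ** 2
    return out

base = make_flat_leaf()
curled = apply_deformation(base, curl=0.8)
```

Because the two factors are separated, the same base outline can be paired with many deformations, which is the property that makes fitting such a model to 3D observations tractable.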
arXiv Detail & Related papers (2025-07-17T01:46:24Z) - Hierarchical Abstraction Enables Human-Like 3D Object Recognition in Deep Learning Models [1.7341654854802664]
Both humans and deep learning models can recognize objects from 3D shapes depicted with sparse visual information. It remains unclear whether these models develop 3D shape representations similar to those used by human vision for object recognition.
arXiv Detail & Related papers (2025-07-13T23:54:45Z) - AWOL: Analysis WithOut synthesis using Language [57.31874938870305]
We leverage language to control existing 3D shape models to produce novel shapes.
We show that we can use text to generate new animals not present during training.
This work also constitutes the first language-driven method for generating 3D trees.
arXiv Detail & Related papers (2024-04-03T20:04:44Z) - GEM3D: GEnerative Medial Abstractions for 3D Shape Synthesis [25.594334301684903]
We introduce GEM3D -- a new deep, topology-aware generative model of 3D shapes.
Key ingredient of our method is a neural skeleton-based representation encoding information on both shape topology and geometry.
We demonstrate significantly more faithful surface reconstruction and diverse shape generation results compared to the state-of-the-art.
arXiv Detail & Related papers (2024-02-26T20:00:57Z) - Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z) - Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z) - φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z) - Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z) - Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works succeed in regression-based methods which estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.