Gaussian map predictions for 3D surface feature localisation and
counting
- URL: http://arxiv.org/abs/2112.03736v1
- Date: Tue, 7 Dec 2021 14:43:14 GMT
- Title: Gaussian map predictions for 3D surface feature localisation and
counting
- Authors: Justin Le Louëdec and Grzegorz Cielniak
- Abstract summary: We propose to employ a Gaussian map representation to estimate the precise location and count of 3D surface features.
We apply this method to the 3D spheroidal class of objects, which can be projected into a 2D shape representation.
We demonstrate a practical use of this technique for counting strawberry achenes, which is used as a fruit quality measure in phenotyping applications.
- Score: 5.634825161148484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose to employ a Gaussian map representation to
estimate the precise location and count of 3D surface features, addressing the
limitations of state-of-the-art methods based on density estimation, which
struggle in the presence of local disturbances. Gaussian maps indicate probable
object locations and can be generated directly from keypoint annotations,
avoiding laborious and costly per-pixel annotations. We apply this method to
the 3D spheroidal class of objects, which can be projected into a 2D shape
representation, enabling efficient processing by GNet, an improved UNet-based
neural network that generates the likely locations of surface features and
their precise count. We demonstrate a practical use of this technique for
counting strawberry achenes, which is used as a fruit quality measure in
phenotyping applications. Training the proposed system on several hundred 3D
scans of strawberries from a publicly available dataset demonstrates the
accuracy and precision of the system, which outperforms state-of-the-art
density-based methods for this application.
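To make the pipeline described in the abstract concrete, the sketch below illustrates the Gaussian-map idea in plain NumPy/SciPy: a 2D Gaussian is rendered at every keypoint annotation to build the target map, and feature locations and counts are read back from a (predicted) map by local-maximum detection. This is a minimal illustration, not the authors' GNet implementation; the spherical 2D projection, the max-composition of Gaussians, and parameters such as `sigma`, `threshold` and `window` are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import maximum_filter


def project_to_2d(points, height, width):
    # Map 3D points on a roughly spheroidal surface to a 2D grid via a spherical
    # parameterisation (polar angle -> rows, azimuth -> columns). The paper's exact
    # 3D-to-2D projection is not specified here, so this mapping is an assumption.
    centred = points - points.mean(axis=0)
    radius = np.linalg.norm(centred, axis=1) + 1e-9
    theta = np.arccos(np.clip(centred[:, 2] / radius, -1.0, 1.0))  # [0, pi]
    phi = np.arctan2(centred[:, 1], centred[:, 0])                 # [-pi, pi]
    rows = np.clip(theta / np.pi * (height - 1), 0, height - 1).astype(int)
    cols = np.clip((phi + np.pi) / (2 * np.pi) * (width - 1), 0, width - 1).astype(int)
    return np.stack([rows, cols], axis=1)


def make_gaussian_map(keypoints, height, width, sigma=3.0):
    # Render one 2D Gaussian per annotated keypoint and compose them with a maximum,
    # so each surface feature keeps a distinct peak of height ~1.
    ys, xs = np.mgrid[0:height, 0:width]
    gmap = np.zeros((height, width), dtype=np.float32)
    for r, c in keypoints:
        g = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        gmap = np.maximum(gmap, g.astype(np.float32))
    return gmap


def locate_and_count(gmap, threshold=0.5, window=5):
    # Read locations and count back from a (predicted) Gaussian map: a pixel counts
    # as a detection if it is the local maximum of its window and above the threshold.
    peaks = (gmap == maximum_filter(gmap, size=window)) & (gmap > threshold)
    coords = np.argwhere(peaks)
    return coords, len(coords)


# Toy usage: three synthetic keypoints on a 64x64 projection.
keypoints = [(10, 12), (30, 40), (50, 20)]
target = make_gaussian_map(keypoints, 64, 64)
locations, count = locate_and_count(target)
print(count)  # 3 for well-separated peaks
```

Composing the per-keypoint Gaussians with a maximum rather than a sum keeps every feature's peak at a height of roughly one, so a fixed detection threshold remains meaningful regardless of how densely the surface features are packed.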
Related papers
- GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction [70.65250036489128]
3D semantic occupancy prediction aims to obtain 3D fine-grained geometry and semantics of the surrounding scene.
We propose an object-centric representation to describe 3D scenes with sparse 3D semantic Gaussians.
GaussianFormer achieves performance comparable to state-of-the-art methods with only 17.8% - 24.8% of their memory consumption.
arXiv Detail & Related papers (2024-05-27T17:59:51Z) - 3DGS-ReLoc: 3D Gaussian Splatting for Map Representation and Visual ReLocalization [13.868258945395326]
This paper presents a novel system designed for 3D mapping and visual relocalization using 3D Gaussian Splatting.
Our proposed method uses LiDAR and camera data to create accurate and visually plausible representations of the environment.
arXiv Detail & Related papers (2024-03-17T23:06:12Z) - Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z) - Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach generates semantic surface-to-surface maps without requiring manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z) - Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud
Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth
Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details and are thus not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z) - Improved Counting and Localization from Density Maps for Object
Detection in 2D and 3D Microscopy Imaging [4.746727774540763]
We propose an alternative method to count and localize objects from the density map.
Our results show improved performance in counting and localization of objects in 2D and 3D microscopy data.
arXiv Detail & Related papers (2022-03-29T15:54:19Z) - Soft Expectation and Deep Maximization for Image Feature Detection [68.8204255655161]
We propose SEDM, an iterative semi-supervised learning process that flips the question and first looks for repeatable 3D points, then trains a detector to localize them in image space.
Our results show that this new model trained using SEDM is able to better localize the underlying 3D points in a scene.
arXiv Detail & Related papers (2021-04-21T00:35:32Z) - Gaussian Process Gradient Maps for Loop-Closure Detection in
Unstructured Planetary Environments [17.276441789710574]
The ability to recognize previously mapped locations is an essential feature for autonomous systems.
Unstructured planetary-like environments pose a major challenge to these systems due to the similarity of the terrain.
This paper presents a method to solve the loop closure problem using only spatial information.
arXiv Detail & Related papers (2020-09-01T04:41:40Z) - Leveraging Planar Regularities for Point Line Visual-Inertial Odometry [13.51108336267342]
With a monocular Visual-Inertial Odometry (VIO) system, the 3D point cloud and camera motion can be estimated simultaneously.
We propose PLP-VIO, which exploits point features and line features as well as plane regularities.
The effectiveness of the proposed method is verified on both synthetic data and public datasets.
arXiv Detail & Related papers (2020-04-16T18:20:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.