High-Throughput and Accurate 3D Scanning of Cattle Using Time-of-Flight
Sensors and Deep Learning
- URL: http://arxiv.org/abs/2308.03861v1
- Date: Mon, 7 Aug 2023 18:15:03 GMT
- Title: High-Throughput and Accurate 3D Scanning of Cattle Using Time-of-Flight
Sensors and Deep Learning
- Authors: Gbenga Omotara, Seyed Mohamad Ali Tousi, Jared Decker, Derek Brake,
Guilherme N. DeSouza
- Abstract summary: We introduce a high throughput 3D scanning solution specifically designed to measure cattle phenotypes.
This scanner leverages an array of depth sensors, i.e., time-of-flight (ToF) sensors, each governed by a dedicated embedded device.
The system excels at generating high-fidelity 3D point clouds, thus facilitating an accurate mesh that faithfully reconstructs the cattle geometry on the fly.
- Score: 1.2599533416395765
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a high throughput 3D scanning solution specifically designed to
precisely measure cattle phenotypes. This scanner leverages an array of depth
sensors, i.e., time-of-flight (ToF) sensors, each governed by a dedicated embedded
device. The system excels at generating high-fidelity 3D point clouds, thus
facilitating an accurate mesh that faithfully reconstructs the cattle geometry
on the fly. In order to evaluate the performance of our system, we have
implemented a two-fold validation process. First, we test the scanner's
competency in determining volume and surface area measurements within a
controlled environment featuring known objects. Second, we explore the impact
and necessity of multi-device synchronization when operating a series of
time-of-flight sensors. Based on the experimental results, the proposed system
is capable of producing high-quality meshes of untamed cattle for livestock
studies.
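The paper's first validation step checks volume and surface area measurements against known objects. The authors' implementation is not published here; the sketch below (a hypothetical helper, `mesh_volume_and_area`, assuming NumPy and a watertight, consistently outward-wound triangle mesh) illustrates one standard way such quantities can be read off a reconstructed mesh, using the divergence theorem for volume.

```python
import numpy as np

def mesh_volume_and_area(vertices, faces):
    """Signed volume (divergence theorem) and surface area of a closed
    triangle mesh with consistent outward-facing winding."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # Each triangle forms a signed tetrahedron with the origin; for a
    # watertight mesh the signed volumes sum to the enclosed volume.
    volume = np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0
    # A triangle's area is half the magnitude of its edge cross product.
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
    return volume, area

# Sanity check on a known object, mirroring the validation idea:
# a unit cube has volume 1.0 and surface area 6.0.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = np.array([[0, 3, 2], [0, 2, 1], [4, 5, 6], [4, 6, 7],
                  [0, 1, 5], [0, 5, 4], [2, 3, 7], [2, 7, 6],
                  [1, 2, 6], [1, 6, 5], [0, 4, 7], [0, 7, 3]])
vol, area = mesh_volume_and_area(verts, faces)  # vol = 1.0, area = 6.0
```

The divergence-theorem volume is independent of the chosen reference point only when the mesh is closed, so in practice a hole-filling or Poisson-style watertight reconstruction step would precede this computation.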
Related papers
- PointHPS: Cascaded 3D Human Pose and Shape Estimation from Point Clouds [99.60575439926963]
We propose a principled framework, PointHPS, for accurate 3D HPS from point clouds captured in real-world settings.
PointHPS iteratively refines point features through a cascaded architecture.
Extensive experiments demonstrate that PointHPS, with its powerful point feature extraction and processing scheme, outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-08-28T11:10:14Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- 3D Harmonic Loss: Towards Task-consistent and Time-friendly 3D Object Detection on Edge for Intelligent Transportation System [28.55894241049706]
We propose a 3D harmonic loss function to mitigate inconsistent point-cloud-based predictions.
Our proposed method considerably improves performance over benchmark models.
Our code is open-source and publicly available.
arXiv Detail & Related papers (2022-11-07T10:11:48Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then, object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- Deep Continuous Fusion for Multi-Sensor 3D Object Detection [103.5060007382646]
We propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization.
We design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution.
arXiv Detail & Related papers (2020-12-20T18:43:41Z)
- It's All Around You: Range-Guided Cylindrical Network for 3D Object Detection [4.518012967046983]
This work presents a novel approach for analyzing 3D data produced by 360-degree depth scanners.
We introduce a novel notion of range-guided convolutions, adapting the receptive field by distance from the ego vehicle and the object's scale.
Our network demonstrates powerful results on the nuScenes challenge, comparable to current state-of-the-art architectures.
arXiv Detail & Related papers (2020-12-05T21:02:18Z)
- Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object Detection from Point Clouds [32.916690488130506]
We propose a universal module that helps 3D detectors focus on the densest region of the point clouds in a boundary-aware manner.
Experiments on KITTI dataset show that DENFI improves the performance of the baseline single-stage detector remarkably.
arXiv Detail & Related papers (2020-04-01T01:21:23Z)
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
- siaNMS: Non-Maximum Suppression with Siamese Networks for Multi-Camera 3D Object Detection [65.03384167873564]
A siamese network is integrated into the pipeline of a well-known 3D object detector approach.
The resulting associations are exploited to enhance the 3D box regression of the object.
The experimental evaluation on the nuScenes dataset shows that the proposed method outperforms traditional NMS approaches.
arXiv Detail & Related papers (2020-02-19T15:32:38Z)
- Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structureless Approach [32.15405927679048]
We propose a targetless and structureless camera-LiDAR calibration method.
Our method combines a closed-form solution with a structureless bundle adjustment, in which the coarse-to-fine approach does not require an initial estimate of the temporal parameters.
We demonstrate the accuracy and robustness of the proposed method through both simulation and real data experiments.
arXiv Detail & Related papers (2020-01-17T07:25:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.