Moby: Empowering 2D Models for Efficient Point Cloud Analytics on the Edge
- URL: http://arxiv.org/abs/2302.09221v3
- Date: Tue, 5 Sep 2023 02:17:19 GMT
- Title: Moby: Empowering 2D Models for Efficient Point Cloud Analytics on the Edge
- Authors: Jingzong Li, Yik Hong Cai, Libin Liu, Yu Mao, Chun Jason Xue, Hong Xu
- Abstract summary: 3D object detection plays a pivotal role in many applications, most notably autonomous driving and robotics.
With limited computation power, it is challenging to execute 3D detection on the edge using highly complex neural networks.
Common approaches such as offloading to the cloud incur significant latency overheads due to the large volume of point cloud data that must be transmitted.
We present Moby, a novel system that demonstrates the feasibility and potential of extrapolating 3D bounding boxes from fast 2D detection.
- Score: 11.588467580653608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D object detection plays a pivotal role in many applications, most notably
autonomous driving and robotics. These applications are commonly deployed on
edge devices to promptly interact with the environment, and often require near
real-time response. With limited computation power, it is challenging to
execute 3D detection on the edge using highly complex neural networks. Common
approaches such as offloading to the cloud incur significant latency overheads
due to the large volume of point cloud data that must be transmitted. To resolve the
tension between wimpy edge devices and compute-intensive inference workloads,
we explore the possibility of empowering fast 2D detection to extrapolate 3D
bounding boxes. To this end, we present Moby, a novel system that demonstrates
the feasibility and potential of our approach. We design a transformation
pipeline for Moby that generates 3D bounding boxes efficiently and accurately
based on 2D detection results without running 3D detectors. Further, we devise
a frame offloading scheduler that decides when to launch the 3D detector
judiciously in the cloud to avoid the errors from accumulating. Extensive
evaluations on NVIDIA Jetson TX2 with real-world autonomous driving datasets
demonstrate that Moby offers up to 91.9% latency improvement with modest
accuracy loss over the state of the art.
Related papers
- DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds [55.755450273390004]
Existing offboard 3D detectors always follow a modular pipeline design to take advantage of unlimited sequential point clouds.
We find that the full potential of offboard 3D detectors remains unexplored mainly for two reasons: (1) the onboard multi-object tracker cannot generate sufficiently complete object trajectories, and (2) the motion state of objects poses an inevitable challenge for the object-centric refining stage.
To tackle these problems, we propose a novel paradigm of offboard 3D object detection, named DetZero.
arXiv Detail & Related papers (2023-06-09T16:42:00Z)
- Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection [85.08249413137558]
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors.
Small, distant, and incomplete objects with sparse or few points are often hard to detect.
We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space.
arXiv Detail & Related papers (2022-11-23T16:01:06Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- PiFeNet: Pillar-Feature Network for Real-Time 3D Pedestrian Detection from Point Cloud [64.12626752721766]
We present PiFeNet, an efficient real-time 3D detector for pedestrian detection from point clouds.
We address two challenges that 3D object detection frameworks encounter when detecting pedestrians: the low expressiveness of pillar features and the small occupation areas of pedestrians in point clouds (pillar features build on the BEV rasterization sketched after this list).
Our approach ranks 1st on the KITTI pedestrian BEV and 3D leaderboards while running at 26 frames per second (FPS), and achieves state-of-the-art performance on the nuScenes detection benchmark.
arXiv Detail & Related papers (2021-12-31T13:41:37Z)
- Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
To keep this efficient, we convert the voxel-based sparse 3D feature volumes into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
arXiv Detail & Related papers (2021-08-08T13:42:13Z)
- Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net [93.51773847125014]
We propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor.
Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world.
arXiv Detail & Related papers (2020-12-22T22:43:35Z)
- Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object Detection from Point Clouds [32.916690488130506]
We propose a universal module that helps 3D detectors focus on the densest region of the point clouds in a boundary-aware manner.
Experiments on the KITTI dataset show that the proposed module, DENFI, remarkably improves the performance of the baseline single-stage detector.
arXiv Detail & Related papers (2020-04-01T01:21:23Z)