VoxDet: Voxel Learning for Novel Instance Detection
- URL: http://arxiv.org/abs/2305.17220v4
- Date: Sun, 15 Oct 2023 16:10:30 GMT
- Title: VoxDet: Voxel Learning for Novel Instance Detection
- Authors: Bowen Li, Jiashun Wang, Yaoyu Hu, Chen Wang, Sebastian Scherer
- Abstract summary: VoxDet is a 3D geometry-aware framework for detecting unseen instances.
Our framework fully utilizes the strong 3D voxel representation and reliable voxel matching mechanism.
To the best of our knowledge, VoxDet is the first to incorporate implicit 3D knowledge for 2D novel instance detection tasks.
- Score: 15.870525460969553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting unseen instances based on multi-view templates is a challenging
problem due to its open-world nature. Traditional methodologies, which
primarily rely on 2D representations and matching techniques, are often
inadequate in handling pose variations and occlusions. To solve this, we
introduce VoxDet, a pioneering 3D geometry-aware framework that fully utilizes a
strong 3D voxel representation and a reliable voxel matching mechanism. VoxDet
first proposes a template voxel aggregation (TVA) module, which effectively
transforms multi-view 2D images into 3D voxel features. By leveraging
associated camera poses, these features are aggregated into a compact 3D
template voxel. In novel instance detection, this voxel representation
demonstrates heightened resilience to occlusion and pose variations. We also
discover that a 3D reconstruction objective helps to pre-train the 2D-3D
mapping in TVA. Second, to quickly align with the template voxel, VoxDet
incorporates a Query Voxel Matching (QVM) module. The 2D queries are first
converted into their voxel representation with the learned 2D-3D mapping. We
find that since the 3D voxel representations encode the geometry, we can first
estimate the relative rotation and then compare the aligned voxels, leading to
improved accuracy and efficiency. In addition to the method, we also introduce
the first instance detection benchmark, RoboTools, where 20 unique instances are
video-recorded with camera extrinsics. RoboTools also provides 24 challenging
cluttered scenarios with more than 9k box annotations. Exhaustive experiments
are conducted on the demanding LineMod-Occlusion, YCB-video, and RoboTools
benchmarks, where VoxDet markedly outperforms various 2D baselines while
running faster. To the best of our knowledge, VoxDet is the first to incorporate
implicit 3D knowledge for 2D novel instance detection tasks.
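As a reading aid, the following is a minimal PyTorch-style sketch of how a TVA/QVM-style pipeline could be wired. It is not the authors' implementation: the toy backbone, the feature and voxel dimensions, the 6-D rotation head, and the scoring head are all hypothetical placeholders, chosen only to make the two stages described in the abstract concrete (pose-aware template aggregation, then rotation-aligned voxel matching).

```python
# Minimal sketch of a TVA/QVM-style pipeline; NOT the authors' implementation.
# All dimensions, the toy backbone, the 6-D rotation head, and the scoring head
# are hypothetical placeholders used only to make the data flow concrete.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotate_voxel(vox: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """Resample a (B, C, V, V, V) voxel grid under per-batch rotations R (B, 3, 3)."""
    B = vox.shape[0]
    theta = torch.cat([R, torch.zeros(B, 3, 1, device=vox.device)], dim=2)  # (B, 3, 4)
    grid = F.affine_grid(theta, vox.shape, align_corners=False)
    return F.grid_sample(vox, grid, align_corners=False)


def rotation_6d_to_matrix(r6: torch.Tensor) -> torch.Tensor:
    """Gram-Schmidt a 6-D vector into a rotation matrix (a common continuous parameterization)."""
    a1, a2 = r6[:, :3], r6[:, 3:]
    b1 = F.normalize(a1, dim=1)
    b2 = F.normalize(a2 - (b1 * a2).sum(1, keepdim=True) * b1, dim=1)
    return torch.stack([b1, b2, torch.cross(b1, b2, dim=1)], dim=2)


class TemplateVoxelAggregation(nn.Module):
    """Lift multi-view 2D features to voxels and fuse them into one template voxel."""

    def __init__(self, feat_dim: int = 64, voxel_size: int = 8):
        super().__init__()
        self.feat_dim, self.voxel_size = feat_dim, voxel_size
        self.backbone = nn.Conv2d(3, feat_dim * voxel_size, 3, padding=1)  # toy 2D encoder
        self.fuse = nn.Conv3d(feat_dim, feat_dim, 3, padding=1)

    def forward(self, views: torch.Tensor, poses: torch.Tensor) -> torch.Tensor:
        # views: (B, N, 3, H, W) template crops; poses: (B, N, 3, 3) camera rotations.
        B, N = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))                 # (B*N, C*V, H, W)
        feats = F.adaptive_avg_pool2d(feats, self.voxel_size)      # (B*N, C*V, V, V)
        vox = feats.view(B * N, self.feat_dim, self.voxel_size,
                         self.voxel_size, self.voxel_size)
        # Rotate every per-view voxel into a shared canonical frame, then average.
        vox = rotate_voxel(vox, poses.flatten(0, 1).transpose(1, 2))
        vox = vox.view(B, N, self.feat_dim, *vox.shape[-3:]).mean(dim=1)
        return self.fuse(vox)                                      # (B, C, V, V, V)


class QueryVoxelMatching(nn.Module):
    """Lift a query proposal to voxels, estimate the relative rotation, then score."""

    def __init__(self, feat_dim: int = 64, voxel_size: int = 8):
        super().__init__()
        self.lift = TemplateVoxelAggregation(feat_dim, voxel_size)  # reuse the 2D-3D mapping
        self.rot_head = nn.Linear(2 * feat_dim * voxel_size ** 3, 6)
        self.score_head = nn.Linear(feat_dim * voxel_size ** 3, 1)

    def forward(self, query_crop: torch.Tensor, template_vox: torch.Tensor) -> torch.Tensor:
        # query_crop: (B, 3, H, W); treated as a single view with identity pose.
        B = query_crop.shape[0]
        eye = torch.eye(3, device=query_crop.device).expand(B, 1, 3, 3)
        q_vox = self.lift(query_crop.unsqueeze(1), eye)
        # Estimate the query-to-template rotation from the concatenated voxel pair.
        r6 = self.rot_head(torch.cat([q_vox, template_vox], dim=1).flatten(1))
        aligned = rotate_voxel(q_vox, rotation_6d_to_matrix(r6))
        # Compare the rotation-aligned query voxel against the template voxel.
        return self.score_head((aligned * template_vox).flatten(1))  # (B, 1) match score
```

A real detector would pair this matching head with a proposal stage and pre-train the 2D-to-3D lifting with the reconstruction objective mentioned in the abstract; the sketch only fixes the order of operations: aggregate templates with known poses, lift the query, estimate the rotation, compare the aligned voxels.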
Related papers
- 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z)
- VoxelNeXt: Fully Sparse VoxelNet for 3D Object Detection and Tracking [78.25819070166351]
We propose VoxelNeXt for fully sparse 3D object detection.
Our core insight is to predict objects directly based on sparse voxel features, without relying on hand-crafted proxies.
Our strong sparse convolutional network VoxelNeXt detects and tracks 3D objects through voxel features entirely.
arXiv Detail & Related papers (2023-03-20T17:40:44Z)
- VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion [129.5975573092919]
VoxFormer is a Transformer-based semantic scene completion framework.
It can output complete 3D semantics from only 2D images.
Our framework outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics.
arXiv Detail & Related papers (2023-02-23T18:59:36Z)
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates some high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
- Voxelized 3D Feature Aggregation for Multiview Detection [15.465855460519446]
We propose VFA, voxelized 3D feature aggregation, for feature transformation and aggregation in multi-view detection.
Specifically, we voxelize the 3D space, project the voxels onto each camera view, and associate 2D features with these projected voxels.
This allows us to identify and then aggregate 2D features along the same vertical line, alleviating projection distortions to a large extent.
arXiv Detail & Related papers (2021-12-07T03:38:50Z)
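The VFA entry above rests on a generic projection-and-gather step: place voxels in world space, project each voxel center through a camera's projection matrix, and pool the 2D features found at that pixel across views. A minimal NumPy sketch of that step is given below; the nearest-neighbour sampling, mean pooling, and 3x4 projection matrices are simplifying assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of projecting voxel centers into camera views and gathering
# 2D features (illustrative only; sampling, pooling, and shapes are assumed).
import numpy as np


def project_voxels(centers: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project (M, 3) world-space voxel centers with a 3x4 camera matrix P to pixels (u, v)."""
    homo = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)  # (M, 4)
    uvw = homo @ P.T                                                      # (M, 3)
    return uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)


def aggregate_multiview(feature_maps, cameras, centers):
    """Average 2D features sampled at each voxel's projection across all views.

    feature_maps: list of (H, W, C) arrays, one per camera.
    cameras:      list of 3x4 projection matrices.
    centers:      (M, 3) voxel centers in world coordinates.
    """
    M, C = len(centers), feature_maps[0].shape[-1]
    acc, cnt = np.zeros((M, C)), np.zeros((M, 1))
    for fmap, P in zip(feature_maps, cameras):
        H, W, _ = fmap.shape
        uv = np.round(project_voxels(centers, P)).astype(int)
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        acc[valid] += fmap[uv[valid, 1], uv[valid, 0]]   # nearest-neighbour sample
        cnt[valid] += 1
    return acc / np.maximum(cnt, 1)                       # (M, C) per-voxel features
```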
- Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image [22.037472446683765]
We learn a regular grid of 3D voxel features from the input image which is aligned with 3D scene space via a 3D feature lifting operator.
Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space.
We devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation.
arXiv Detail & Related papers (2021-11-04T18:30:37Z)
- HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection [39.64891219500416]
3D object detection methods exploit either voxel-based or point-based features to represent 3D objects in a scene.
We introduce in this paper a novel single-stage 3D detection method having the merit of both voxel-based and point-based features.
arXiv Detail & Related papers (2021-04-02T06:34:49Z)
- Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection [99.16162624992424]
We devise a simple but effective voxel-based framework, named Voxel R-CNN.
By taking full advantage of voxel features in a two stage approach, our method achieves comparable detection accuracy with state-of-the-art point-based models.
Our results show that Voxel R-CNN delivers a higher detection accuracy while maintaining a real-time frame processing rate, i.e., a speed of 25 FPS on an NVIDIA 2080 Ti GPU.
arXiv Detail & Related papers (2020-12-31T17:02:46Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.