Shape-biased Texture Agnostic Representations for Improved Textureless and Metallic Object Detection and 6D Pose Estimation
- URL: http://arxiv.org/abs/2402.04878v2
- Date: Tue, 23 Jul 2024 14:18:48 GMT
- Title: Shape-biased Texture Agnostic Representations for Improved Textureless and Metallic Object Detection and 6D Pose Estimation
- Authors: Peter Hönig, Stefan Thalhammer, Jean-Baptiste Weibel, Matthias Hirschmanner, Markus Vincze
- Abstract summary: Textureless and metallic objects still pose a significant challenge due to few visual cues and the texture bias of CNNs.
We propose a strategy for inducing a shape bias to CNN training.
This methodology integrates seamlessly into existing data rendering engines and yields training data without consistent texture cues.
- Score: 9.227450931458907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in machine learning have greatly benefited object detection and 6D pose estimation. However, textureless and metallic objects still pose a significant challenge due to few visual cues and the texture bias of CNNs. To address this issue, we propose a strategy for inducing a shape bias to CNN training. In particular, by randomizing textures applied to object surfaces during data rendering, we create training data without consistent textural cues. This methodology allows for seamless integration into existing data rendering engines, and results in negligible computational overhead for data rendering and network training. Our findings demonstrate that the shape bias we induce via randomized texturing improves over existing approaches using style transfer. We evaluate with three detectors and two pose estimators. For the most recent object detector and for pose estimation in general, estimation accuracy improves for textureless and metallic objects. Additionally, we show that our approach increases the pose estimation accuracy in the presence of image noise and strong illumination changes. Code and datasets are publicly available at github.com/hoenigpeter/randomized_texturing.
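The core idea, drawing a fresh random texture for each object surface at render time so that no consistent texture cue survives into the training set, can be illustrated with a short sketch. This is a minimal illustration and not the released code at github.com/hoenigpeter/randomized_texturing: the texture-pool directory, the procedural-noise fallback, and the hand-off to a rendering engine are assumptions made for the example.

```python
# Minimal sketch (not the authors' pipeline): draw a fresh, random texture per object
# and per rendered frame so the synthetic training data carries no consistent texture
# cues. The rendering-engine hook that maps the texture onto the CAD model is assumed
# to exist elsewhere and is only represented by a print statement here.
import random
from pathlib import Path

import numpy as np


def procedural_texture(size=512, blocks=8, rng=None):
    """Fallback texture: `blocks` x `blocks` random RGB patches upsampled to size x size."""
    rng = rng or np.random.default_rng()
    coarse = rng.uniform(0.0, 1.0, size=(blocks, blocks, 3))
    tex = coarse.repeat(size // blocks, axis=0).repeat(size // blocks, axis=1)
    return (tex * 255).astype(np.uint8)


def random_texture_source(texture_dir=None, rng=None):
    """Pick a random texture image from an external pool if one is available,
    otherwise fall back to a procedural noise texture (illustrative assumption)."""
    if texture_dir is not None:
        pool = sorted(Path(texture_dir).glob("*.jpg"))
        if pool:
            return str(random.choice(pool))  # path handed to the render engine
    return procedural_texture(rng=rng)       # ndarray handed to the render engine


if __name__ == "__main__":
    # One randomized texture per object and per frame removes stable texture
    # statistics, so the downstream detector has to rely on shape cues instead.
    for frame in range(3):
        for obj_id in range(2):
            tex = random_texture_source()
            desc = tex if isinstance(tex, str) else f"noise texture of shape {tex.shape}"
            print(f"frame {frame}, object {obj_id} -> {desc}")
```

In an actual rendering setup, the randomly chosen texture would replace the object's original material before each render, which is what makes the approach engine-agnostic and keeps the computational overhead negligible.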
Related papers
- Real-time Free-view Human Rendering from Sparse-view RGB Videos using Double Unprojected Textures [87.80984588545589]
Real-time free-view human rendering from sparse-view RGB inputs is a challenging task due to the sensor scarcity and the tight time budget.
We present Double Unprojected Textures, which at the core disentangles coarse geometric deformation estimation from appearance synthesis.
arXiv Detail & Related papers (2024-12-17T18:57:38Z)
- Inverse Neural Rendering for Explainable Multi-Object Tracking [35.072142773300655]
We recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem.
We optimize an image loss over generative latent spaces that inherently disentangle shape and appearance properties.
We validate the generalization and scaling capabilities of our method by learning the generative prior exclusively from synthetic data.
arXiv Detail & Related papers (2024-04-18T17:37:53Z)
- SplatPose & Detect: Pose-Agnostic 3D Anomaly Detection [18.796625355398252]
State-of-the-art algorithms are able to detect defects in increasingly difficult settings and data modalities.
We propose the novel 3D Gaussian splatting-based framework SplatPose which accurately estimates the pose of unseen views in a differentiable manner.
We achieve state-of-the-art results in training and inference speed as well as detection performance, even when using less training data than competing methods.
arXiv Detail & Related papers (2024-04-10T08:48:09Z)
- What You See Is What You Detect: Towards better Object Densification in 3D detection [2.3436632098950456]
The widely used full-shape completion approach actually leads to a higher error upper bound, especially for faraway objects and small objects such as pedestrians.
We introduce a visible part completion method that requires only 11.3% of the prediction points that previous methods generate.
To recover the dense representation, we propose a mesh-deformation-based method to augment the point set associated with visible foreground objects.
arXiv Detail & Related papers (2023-10-27T01:46:37Z)
- CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z)
- BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects [89.2314092102403]
We present a near real-time method for 6-DoF tracking of an unknown object from a monocular RGBD video sequence.
Our method works for arbitrary rigid objects, even when visual texture is largely absent.
arXiv Detail & Related papers (2023-03-24T17:13:49Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic texture of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- NeRF-Pose: A First-Reconstruct-Then-Regress Approach for Weakly-supervised 6D Object Pose Estimation [44.42449011619408]
We present a weakly-supervised reconstruction-based pipeline, named NeRF-Pose, which needs only 2D object segmentation and known relative camera poses during training.
A NeRF-enabled PnP+RANSAC algorithm is used to estimate a stable and accurate pose from the predicted correspondences (see the sketch after this list).
Experiments on LineMod-Occlusion show that the proposed method has state-of-the-art accuracy in comparison to the best 6D pose estimation methods.
arXiv Detail & Related papers (2022-03-09T15:28:02Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
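Several of the entries above (CheckerPose, NeRF-Pose) recover the 6-DoF pose from predicted 2D-3D correspondences with a RANSAC-based PnP solver. The sketch below shows that step in isolation using OpenCV's generic solvePnPRansac; the synthetic correspondences, camera intrinsics, and noise levels are illustrative assumptions rather than values from any of the papers, and this is not the NeRF-enabled variant used by NeRF-Pose.

```python
# Minimal sketch: 6-DoF pose from noisy 2D-3D correspondences via RANSAC-based PnP.
# The correspondences are synthesized here so the example is self-contained; in the
# papers above they would come from a correspondence-prediction network.
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical object-space 3D points (metres) and camera intrinsics.
object_points = rng.uniform(-0.05, 0.05, size=(50, 3)).astype(np.float64)
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Ground-truth pose, used only to synthesize the 2D points for this demo.
rvec_gt = np.array([[0.1], [-0.2], [0.3]])
tvec_gt = np.array([[0.0], [0.0], [0.5]])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)
image_points = image_points.reshape(-1, 2)
image_points[:5] += rng.normal(0.0, 25.0, size=(5, 2))  # simulate outlier predictions

# RANSAC rejects the outliers; PnP recovers rotation (axis-angle) and translation.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist, reprojectionError=3.0
)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
print("success:", ok, "| inliers:", 0 if inliers is None else len(inliers))
print("estimated translation (m):", tvec.ravel())
```

The NeRF-enabled refinement described in NeRF-Pose builds on top of this standard correspondence-to-pose component; only the generic RANSAC-based pose recovery is sketched here.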