Active Learning of Neural Collision Handler for Complex 3D Mesh
Deformations
- URL: http://arxiv.org/abs/2110.07727v1
- Date: Fri, 8 Oct 2021 04:08:31 GMT
- Title: Active Learning of Neural Collision Handler for Complex 3D Mesh
Deformations
- Authors: Qingyang Tan, Zherong Pan, Breannan Smith, Takaaki Shiratori, Dinesh
Manocha
- Abstract summary: We present a robust learning algorithm to detect and handle collisions in 3D deforming meshes.
Our approach outperforms supervised learning methods and achieves $93.8-98.1\%$ accuracy.
- Score: 68.0524382279567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a robust learning algorithm to detect and handle collisions in 3D
deforming meshes. Our collision detector is represented as a bilevel deep
autoencoder with an attention mechanism that identifies colliding mesh
sub-parts. We use a numerical optimization algorithm to resolve penetrations
guided by the network. Our learned collision handler can resolve collisions for
unseen, high-dimensional meshes with thousands of vertices. To obtain stable
network performance in such large and unseen spaces, we progressively insert
new collision data based on the errors in network inferences. We automatically
label these data using an analytical collision detector and progressively
fine-tune our detection networks. We evaluate our method for collision handling
of complex, 3D meshes coming from several datasets with different shapes and
topologies, including datasets corresponding to dressed and undressed human
poses, cloth simulations, and human hand poses acquired using multiview capture
systems. Our approach outperforms supervised learning methods and achieves
$93.8-98.1\%$ accuracy compared to the ground truth computed by analytical methods.
Compared to prior learning methods, our approach results in a $5.16\%-25.50\%$
lower false negative rate in terms of collision checking and a $9.65\%-58.91\%$
higher success rate in collision handling.
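The active-learning loop described in the abstract — run the network, find inputs where its inference disagrees with an exact analytical detector, insert those samples with oracle labels, and fine-tune — can be sketched as follows. This is a minimal toy illustration, not the paper's method: the bilevel autoencoder is replaced by a tiny logistic-regression model on 2D points, and `analytical_collision_label`, `TinyDetector`, and `active_learning_loop` are all hypothetical names invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def analytical_collision_label(X):
    # Stand-in for the exact-but-slow analytical collision detector used
    # as the labeling oracle; here a toy rule on 2D points (hypothetical).
    return (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(float)

def features(X):
    # Quadratic features so a linear model can represent the toy boundary.
    return np.hstack([X, X ** 2])

class TinyDetector:
    """Logistic-regression stand-in for the paper's bilevel autoencoder."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
        self.b = 0.0

    def predict_proba(self, X):
        z = np.clip(features(X) @ self.w + self.b, -30.0, 30.0)
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X, y, lr=0.5, epochs=300):
        # Warm-started gradient descent on logistic loss: calling fit()
        # again continues from the current weights, i.e. fine-tuning.
        Phi = features(X)
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-np.clip(Phi @ self.w + self.b, -30.0, 30.0)))
            self.w -= lr * Phi.T @ (p - y) / len(y)
            self.b -= lr * np.mean(p - y)

def active_learning_loop(detector, pool, X, y, rounds=5, batch=64):
    for _ in range(rounds):
        detector.fit(X, y)                      # train on current labeled set
        probs = detector.predict_proba(pool)    # network inference on the pool
        oracle = analytical_collision_label(pool)
        worst = np.argsort(np.abs(probs - oracle))[-batch:]  # largest errors
        X = np.vstack([X, pool[worst]])         # insert newly labeled data
        y = np.concatenate([y, oracle[worst]])
        pool = np.delete(pool, worst, axis=0)
    detector.fit(X, y)                          # final fine-tune
    return detector

pool = rng.uniform(-2.0, 2.0, size=(2000, 2))
seed = rng.uniform(-2.0, 2.0, size=(100, 2))
det = active_learning_loop(TinyDetector(4), pool, seed,
                           analytical_collision_label(seed))
```

The key design point mirrored here is that samples are selected by disagreement between the learned detector and the analytical oracle, so labeling effort concentrates where the network is currently wrong rather than uniformly over the input space.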
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z) - 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share the open-source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Pattern-Aware Data Augmentation for LiDAR 3D Object Detection [7.394029879643516]
We propose pattern-aware ground truth sampling, a data augmentation technique that downsamples an object's point cloud based on the LiDAR's characteristics.
We improve the performance of PV-RCNN on the car class by more than 0.7 percent on the KITTI validation split at distances greater than 25 m.
arXiv Detail & Related papers (2021-11-30T19:14:47Z) - Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D
Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z) - Angle Based Feature Learning in GNN for 3D Object Detection using Point
Cloud [4.3012765978447565]
We present new feature encoding methods for detecting 3D objects in point clouds.
We use a graph neural network (GNN) to detect 3D objects, namely cars, pedestrians, and cyclists.
arXiv Detail & Related papers (2021-08-02T10:56:02Z) - PerMO: Perceiving More at Once from a Single Image for Autonomous
Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z) - Exploring the Capabilities and Limits of 3D Monocular Object Detection
-- A Study on Simulation and Real World Data [0.0]
3D object detection based on monocular camera data is a key enabler for autonomous driving.
Recent deep learning methods show promising results to recover depth information from single images.
In this paper, we evaluate the performance of a 3D object detection pipeline which is parameterizable with different depth estimation configurations.
arXiv Detail & Related papers (2020-05-15T09:05:17Z) - Leveraging Uncertainties for Deep Multi-modal Object Detection in
Autonomous Driving [12.310862288230075]
This work presents a probabilistic deep neural network that combines LiDAR point clouds and RGB camera images for robust, accurate 3D object detection.
We explicitly model uncertainties in the classification and regression tasks, and leverage uncertainties to train the fusion network via a sampling mechanism.
arXiv Detail & Related papers (2020-02-01T14:24:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.