Robustifying Generalizable Implicit Shape Networks with a Tunable
Non-Parametric Model
- URL: http://arxiv.org/abs/2311.12967v1
- Date: Tue, 21 Nov 2023 20:12:29 GMT
- Title: Robustifying Generalizable Implicit Shape Networks with a Tunable
Non-Parametric Model
- Authors: Amine Ouasfi and Adnane Boukhayma
- Abstract summary: Generalizable models for implicit shape reconstruction from unoriented point clouds suffer from generalization issues.
We propose here an efficient mechanism to remedy some of these limitations at test time.
We demonstrate the improvement obtained through our method with respect to baselines and the state-of-the-art using synthetic and real data.
- Score: 10.316008740970037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feedforward generalizable models for implicit shape reconstruction from
unoriented point clouds present multiple advantages, including high performance
and inference speed. However, they still suffer from generalization issues,
ranging from underfitting the input point cloud, to misrepresenting samples
outside of the training data distribution, or with topologies unseen at
training. We propose here an efficient mechanism to remedy some of these
limitations at test time. We combine the inter-shape data prior of the network
with an intra-shape regularization prior of a Nyström Kernel Ridge
Regression, which we further adapt by fitting its hyperparameters to the current
shape. The resulting shape function defined in a shape specific Reproducing
Kernel Hilbert Space benefits from desirable stability and efficiency
properties and grants a shape adaptive expressiveness-robustness trade-off. We
demonstrate the improvement obtained through our method with respect to
baselines and the state-of-the-art using synthetic and real data.
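The non-parametric component described above can be illustrated with a minimal sketch of Nyström Kernel Ridge Regression fitting a signed distance function from point samples. This is an illustrative stand-in, not the authors' implementation: the Gaussian kernel, the landmark count, and the toy circle SDF target are all assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.3):
    # Pairwise Gaussian (RBF) kernel between point sets A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_krr_fit(X, y, n_landmarks=128, lam=1e-3, sigma=0.3, seed=None):
    """Nystrom KRR: restrict the representer expansion to m landmark points
    and solve the reduced m x m system instead of the full n x n one."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    Z = X[idx]                              # landmark (inducing) points
    K_nm = gaussian_kernel(X, Z, sigma)     # (n, m) cross-kernel
    K_mm = gaussian_kernel(Z, Z, sigma)     # (m, m) landmark kernel
    # Regularized normal equations of the Nystrom approximation;
    # the small identity jitter is only for numerical stability.
    A = K_nm.T @ K_nm + lam * K_mm + 1e-8 * np.eye(n_landmarks)
    alpha = np.linalg.solve(A, K_nm.T @ y)
    return Z, alpha

def nystrom_krr_predict(Xq, Z, alpha, sigma=0.3):
    return gaussian_kernel(Xq, Z, sigma) @ alpha

# Toy example: recover the signed distance to the unit circle in 2D.
rng = np.random.default_rng(0)
X = rng.uniform(-1.5, 1.5, size=(2000, 2))
y = np.linalg.norm(X, axis=1) - 1.0        # SDF of the unit circle
Z, alpha = nystrom_krr_fit(X, y, seed=0)
pred = nystrom_krr_predict(np.array([[0.0, 0.0], [1.0, 0.0]]), Z, alpha)
# pred approximates the true SDF values at the two query points (-1 and 0).
```

In the paper's setting, the regression targets would come from the feedforward network's predictions on the current shape, so the kernel fit acts as a stability-inducing intra-shape regularizer on top of the learned inter-shape prior; here a known analytic SDF stands in for those targets.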
Related papers
- Self-Supervised Implicit Attention Priors for Point Cloud Reconstruction [7.652381699040464]
We introduce an implicit self-prior approach that distills a shape-specific prior directly from the input point cloud itself.
We show this hybrid strategy preserves fine geometric details in the input data, while leveraging the learned prior to regularize sparse regions.
arXiv Detail & Related papers (2025-11-06T23:01:22Z) - Elastic ViTs from Pretrained Models without Retraining [74.5386166956142]
Vision foundation models achieve remarkable performance but are only available in a limited set of pre-determined sizes.
We introduce SnapViT: Single-shot network approximation for pruned Vision Transformers.
Our approach efficiently combines gradient information with cross-network structure correlations, approximated via an evolutionary algorithm.
arXiv Detail & Related papers (2025-10-20T16:15:03Z) - Nonparametric Data Attribution for Diffusion Models [57.820618036556084]
Data attribution for generative models seeks to quantify the influence of individual training examples on model outputs.
We propose a nonparametric attribution method that operates entirely on data, measuring influence via patch-level similarity between generated and training images.
arXiv Detail & Related papers (2025-10-16T03:37:16Z) - Preconditioned Deformation Grids [41.79220966392968]
We introduce Preconditioned Deformation Grids, a novel technique for estimating coherent deformation fields directly from unstructured point cloud sequences.
Our method achieves superior results, particularly for long sequences, compared to state-of-the-art techniques.
arXiv Detail & Related papers (2025-09-22T17:59:55Z) - Adaptive Point-Prompt Tuning: Fine-Tuning Heterogeneous Foundation Models for 3D Point Cloud Analysis [51.37795317716487]
We propose the Adaptive Point-Prompt Tuning (APPT) method, which fine-tunes pre-trained models with a modest number of parameters.
We convert raw point clouds into point embeddings by aggregating local geometry to capture spatial features, followed by linear layers.
To calibrate self-attention across source domains of any modality to 3D, we introduce a prompt generator that shares weights with the point embedding module.
arXiv Detail & Related papers (2025-08-30T06:02:21Z) - Occlusion-aware Non-Rigid Point Cloud Registration via Unsupervised Neural Deformation Correntropy [25.660967523504855]
Occlusion-Aware Registration (OAR) is an unsupervised method for non-rigidly aligning point clouds.
We present a theoretical analysis and establish the relationship between the maximum correntropy criterion and the commonly used Chamfer distance.
Our method achieves superior or competitive performance compared to existing approaches.
arXiv Detail & Related papers (2025-02-15T07:27:15Z) - DispFormer: A Pretrained Transformer Incorporating Physical Constraints for Dispersion Curve Inversion [56.64622091009756]
This study introduces DispFormer, a transformer-based neural network for $v_s$ profile inversion from Rayleigh-wave phase and group dispersion curves.
DispFormer processes dispersion data independently at each period, allowing it to handle varying lengths without requiring network modifications or strict alignment between training and testing datasets.
arXiv Detail & Related papers (2025-01-08T09:08:24Z) - Physics-Informed Geometric Operators to Support Surrogate, Dimension Reduction and Generative Models for Engineering Design [38.00713966087315]
We propose a set of physics-informed geometric operators (GOs) to enrich the geometric data provided for training surrogate/discriminative models.
GOs exploit the differential and integral properties of shapes to infuse high-level intrinsic geometric information and physics into the feature vector used for training.
arXiv Detail & Related papers (2024-07-10T12:50:43Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training objectives are non-convex.
In this paper we examine the use of convex recovery models for neural networks.
We show that the stationary points of the non-convex objective can be characterized as global optima of a subsampled convex (Lasso) program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Learning-Based Biharmonic Augmentation for Point Cloud Classification [79.13962913099378]
Biharmonic Augmentation (BA) is a novel and efficient data augmentation technique.
BA diversifies point cloud data by imposing smooth non-rigid deformations on existing 3D structures.
We present AdvTune, an advanced online augmentation system that integrates adversarial training.
arXiv Detail & Related papers (2023-11-10T14:04:49Z) - Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image
Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information in the content-conditioned range to help transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z) - Reduced Representation of Deformation Fields for Effective Non-rigid
Shape Matching [26.77241999731105]
We present a novel approach for computing correspondences between non-rigid objects by exploiting a reduced representation of deformation fields.
By letting the network learn deformation parameters at a sparse set of positions in space (nodes), we reconstruct the continuous deformation field in a closed-form with guaranteed smoothness.
Our model has high expressive power and is able to capture complex deformations.
arXiv Detail & Related papers (2022-11-26T16:11:17Z) - Leveraging Equivariant Features for Absolute Pose Regression [9.30597356471664]
We show that a translation and rotation equivariant Convolutional Neural Network directly induces representations of camera motions into the feature space.
We then show that this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations.
arXiv Detail & Related papers (2022-04-05T12:44:20Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Hamiltonian Dynamics for Real-World Shape Interpolation [66.47407593823208]
We revisit the classical problem of 3D shape interpolation and propose a novel, physically plausible approach based on Hamiltonian dynamics.
Our method yields exactly volume preserving intermediate shapes, avoids self-intersections and is scalable to high resolution scans.
arXiv Detail & Related papers (2020-04-10T18:38:52Z) - Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z) - Implicit Geometric Regularization for Learning Shapes [34.052738965233445]
We offer a new paradigm for computing high fidelity implicit neural representations directly from raw data.
We show that our method leads to state of the art implicit neural representations with higher level-of-details and fidelity compared to previous methods.
arXiv Detail & Related papers (2020-02-24T07:36:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.