Maximum Covariance Unfolding Regression: A Novel Covariate-based
Manifold Learning Approach for Point Cloud Data
- URL: http://arxiv.org/abs/2303.17852v1
- Date: Fri, 31 Mar 2023 07:29:36 GMT
- Title: Maximum Covariance Unfolding Regression: A Novel Covariate-based
Manifold Learning Approach for Point Cloud Data
- Authors: Qian Wang, Kamran Paynabar
- Abstract summary: Point cloud data are widely used in manufacturing applications for process inspection, modeling, monitoring and optimization.
State-of-the-art tensor regression techniques have been used effectively for the analysis of structured point cloud data.
However, these techniques are not capable of handling unstructured point cloud data.
- Score: 11.34706571302446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud data are widely used in manufacturing applications for process
inspection, modeling, monitoring and optimization. State-of-the-art tensor
regression techniques have been used effectively for the analysis of structured
point cloud data, where the measurements on a uniform grid can be formed into a
tensor. However, these techniques are not capable of handling unstructured
point cloud data that are often in the form of manifolds. In this paper, we
propose a nonlinear dimension reduction approach named Maximum Covariance
Unfolding Regression that is able to learn the low-dimensional (LD) manifold of
point clouds with the highest correlation with explanatory covariates. This LD
manifold is then used for regression modeling and process optimization based on
process variables. The performance of the proposed method is subsequently
evaluated and compared with benchmark methods through simulations and a case
study of steel bracket manufacturing.
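As an illustration of the pipeline the abstract describes, below is a minimal sketch that embeds unstructured point clouds into a low-dimensional space and relates that embedding to process covariates. It is not the authors' MCU-R formulation: a generic manifold learner (Isomap) and an ordinary linear regression stand in for the maximum covariance unfolding and regression steps, and the toy clouds, the centroid-distance descriptor, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: embed unstructured point clouds into a low-dimensional (LD)
# space, then relate the LD coordinates to explanatory process covariates.
# Isomap is only a stand-in for the paper's maximum covariance unfolding step.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def cloud_descriptor(cloud, n_bins=32):
    """Summarize an unstructured point cloud (n_points x 3) as a fixed-length
    histogram of distances from each point to the cloud centroid."""
    d = np.linalg.norm(cloud - cloud.mean(axis=0), axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 2.0), density=True)
    return hist

# Toy data: 60 point clouds whose overall scale is driven by the first of two
# process covariates (purely synthetic, for illustration only).
covariates = rng.uniform(0.5, 1.5, size=(60, 2))
clouds = [0.3 * s * rng.normal(size=(200, 3)) for s in covariates[:, 0]]
X = np.vstack([cloud_descriptor(c) for c in clouds])

# Step 1: learn a low-dimensional embedding of the point-cloud descriptors.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Step 2: regress the LD coordinates on the covariates so that process
# variables can be related to (and optimized over) the learned manifold.
model = LinearRegression().fit(covariates, embedding)
print("R^2 of LD coordinates vs. covariates:", model.score(covariates, embedding))
```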
Related papers
- Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling [22.256068524699472]
In this work, we propose an Annealed Importance Sampling (AIS) approach for variational learning of Gaussian process latent variable models.
We combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution.
Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
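For orientation, here is a minimal sketch of generic annealed importance sampling on a one-dimensional toy target. It illustrates only the idea of moving particles from a tractable initial distribution toward a target through intermediate densities, not the paper's GPLVM or variational setting, and every constant in it is an illustrative assumption.

```python
# Generic AIS sketch: anneal from a standard normal p0 toward an unnormalized
# bimodal target, accumulating importance weights along the way.
import numpy as np

rng = np.random.default_rng(0)

def log_p0(x):                       # tractable initial density (standard normal)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_target(x):                   # unnormalized target: two Gaussian bumps
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

n_particles, n_steps = 2000, 50
betas = np.linspace(0.0, 1.0, n_steps + 1)

x = rng.normal(size=n_particles)     # particles start as draws from p0
log_w = np.zeros(n_particles)        # log importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    # weight update between consecutive annealed densities p0^(1-b) * target^b
    log_w += (b - b_prev) * (log_target(x) - log_p0(x))
    # one Metropolis move leaving the current annealed density invariant
    prop = x + 0.5 * rng.normal(size=n_particles)
    log_gamma_x = (1 - b) * log_p0(x) + b * log_target(x)
    log_gamma_p = (1 - b) * log_p0(prop) + b * log_target(prop)
    accept = np.log(rng.uniform(size=n_particles)) < log_gamma_p - log_gamma_x
    x = np.where(accept, prop, x)

# AIS estimate of log(Z_target / Z_0); the true value here is log(2*sqrt(2*pi))
log_Z_ratio = np.logaddexp.reduce(log_w) - np.log(n_particles)
print("estimated log normalizing-constant ratio:", log_Z_ratio)
```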
arXiv Detail & Related papers (2024-08-13T08:09:05Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
However, it struggles with high-dimensional data, which in practice often lie on an implicit low-dimensional manifold.
In this paper we propose a technique capable of inferring implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
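For context, the snippet below is a minimal sketch of plain Gaussian process regression with a squared-exponential kernel, showing the predictive mean and the calibrated predictive variance mentioned above. It does not implement the paper's implicit-manifold construction, and the kernel hyperparameters and toy data are illustrative assumptions.

```python
# Plain GP regression sketch (not the paper's method): predictive mean and
# variance for noisy 1-D observations under an RBF kernel.
import numpy as np

def rbf(a, b, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=20)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=20)    # noisy observations
x_test = np.linspace(-4, 4, 200)

noise_var = 0.1**2
K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
K_s = rbf(x_test, x_train)                                # test-train covariances
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

mean = K_s @ alpha                                        # predictive mean
v = np.linalg.solve(L, K_s.T)
var = 1.0 - np.sum(v**2, axis=0)                          # predictive variance (k(x,x) = 1)
print("predictive std ranges from", np.sqrt(var.min()), "to", np.sqrt(var.max()))
```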
arXiv Detail & Related papers (2023-10-30T09:52:48Z)
- Multi-Linear Kernel Regression and Imputation in Data Manifolds [12.15802365851407]
This paper introduces an efficient multi-linear nonparametric approximation framework for data regression and imputation, and its application to dynamic magnetic-resonance imaging (dMRI).
Data features are assumed to reside in or close to a smooth manifold embedded in a reproducing kernel Hilbert space. Landmark points are identified to describe the point cloud of features by linear approximating patches, which mimic the concept of tangent spaces to smooth manifolds.
The multi-linear model effects dimensionality reduction, enables efficient computations, and extracts data patterns and their geometry without any training data or additional information.
arXiv Detail & Related papers (2023-04-06T12:58:52Z)
- A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning [50.910152564914405]
Existing posterior sampling methods for reinforcement learning are limited by being model-based or by lacking worst-case theoretical guarantees beyond linear MDPs.
This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees.
arXiv Detail & Related papers (2022-08-23T12:21:01Z)
- RMFGP: Rotated Multi-fidelity Gaussian process with Dimension Reduction for High-dimensional Uncertainty Quantification [12.826754199680474]
Multi-fidelity modelling enables accurate inference even when only a small set of accurate data is available.
By combining the realizations of the high-fidelity model with one or more low-fidelity models, the multi-fidelity method can make accurate predictions of quantities of interest.
This paper proposes a new dimension reduction framework based on rotated multi-fidelity Gaussian process regression and a Bayesian active learning scheme.
arXiv Detail & Related papers (2022-04-11T01:20:35Z)
- Generalised Latent Assimilation in Heterogeneous Reduced Spaces with Machine Learning Surrogate Models [10.410970649045943]
We develop a system which combines reduced-order surrogate models with a novel data assimilation technique.
Generalised Latent Assimilation can benefit from both the efficiency provided by reduced-order modelling and the accuracy of data assimilation.
arXiv Detail & Related papers (2022-04-07T15:13:12Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
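A minimal, non-learned sketch of the interpolation idea is shown below: each new point is a weighted combination of a seed point and its nearest neighbors. The paper learns these weights (plus high-order refinements) with a lightweight network; the random convex weights used here are only an illustrative stand-in.

```python
# Nearest-neighbor interpolation sketch for point cloud upsampling; the
# learned, sorted weights of the paper are replaced by random convex weights.
import numpy as np

def upsample(points, ratio=4, k=4, seed=0):
    """Return roughly `ratio` times as many points by interpolating inside
    local neighborhoods of the input cloud (an n x 3 array)."""
    rng = np.random.default_rng(seed)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn_idx = np.argsort(dists, axis=1)[:, 1:k + 1]         # k nearest neighbors
    new_points = []
    for i, neigh in enumerate(nn_idx):
        group = np.vstack([points[i], points[neigh]])      # seed point + neighbors
        for _ in range(ratio - 1):
            w = rng.dirichlet(np.ones(k + 1))               # random convex weights
            new_points.append(w @ group)
    return np.vstack([points, np.array(new_points)])

cloud = np.random.default_rng(1).normal(size=(256, 3))      # toy sparse cloud
dense = upsample(cloud, ratio=4)
print(cloud.shape, "->", dense.shape)                        # (256, 3) -> (1024, 3)
```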
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
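As a rough illustration of this setting, the sketch below runs a random search on a high-dimensional objective that varies only inside a hidden low-dimensional subspace, and gradually biases its perturbations toward the principal directions of previously accepted steps. It is a simplified stand-in for the paper's method; the objective, buffer size, and mixing weights are illustrative assumptions.

```python
# Guided random search sketch: estimate a low-dimensional search subspace
# online from accepted steps and draw most of each new perturbation from it.
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 3                                          # ambient / latent dimensions
A = np.linalg.qr(rng.normal(size=(D, d)))[0]           # hidden latent subspace

def f(x):                                              # depends only on A^T x
    return np.sum((A.T @ x - 1.0) ** 2)

x = np.zeros(D)
best = f(x)
good_steps = []                                        # buffer of accepted steps

for _ in range(3000):
    if len(good_steps) >= 10:
        # guiding directions: top principal directions of recent accepted steps
        _, _, Vt = np.linalg.svd(np.array(good_steps[-50:]), full_matrices=False)
        guide = Vt[:d].T @ rng.normal(size=d)
        step = 0.2 * (0.9 * guide + 0.1 * rng.normal(size=D) / np.sqrt(D))
    else:
        step = 0.2 * rng.normal(size=D) / np.sqrt(D)   # unguided exploration
    cand = x + step
    val = f(cand)
    if val < best:                                     # keep only improving moves
        x, best = cand, val
        good_steps.append(step)

print("final objective value:", round(best, 4))
```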
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.