Partial Wasserstein Adversarial Network for Non-rigid Point Set
Registration
- URL: http://arxiv.org/abs/2203.02227v1
- Date: Fri, 4 Mar 2022 10:23:48 GMT
- Title: Partial Wasserstein Adversarial Network for Non-rigid Point Set
Registration
- Authors: Zi-Ming Wang, Nan Xue, Ling Lei, Gui-Song Xia
- Abstract summary: Given two point sets, the problem of registration is to recover a transformation that matches one set to the other.
We formulate the registration problem as a partial distribution matching (PDM) problem, where the goal is to partially match the distributions represented by point sets in a metric space.
We propose a partial Wasserstein adversarial network (PWAN), which is able to approximate the PW discrepancy by a neural network, and minimize it by gradient descent.
- Score: 33.70389309762202
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given two point sets, the problem of registration is to recover a
transformation that matches one set to the other. This task is challenging due
to the presence of a large number of outliers, unknown non-rigid deformations,
and the large size of the point sets. To obtain strong robustness
against outliers, we formulate the registration problem as a partial
distribution matching (PDM) problem, where the goal is to partially match the
distributions represented by point sets in a metric space. To handle large
point sets, we propose a scalable PDM algorithm by utilizing the efficient
partial Wasserstein-1 (PW) discrepancy. Specifically, we derive the
Kantorovich-Rubinstein duality for the PW discrepancy, and show its gradient
can be explicitly computed. Based on these results, we propose a partial
Wasserstein adversarial network (PWAN), which is able to approximate the PW
discrepancy by a neural network, and minimize it by gradient descent. In
addition, it also incorporates an efficient coherence regularizer for non-rigid
transformations to avoid unrealistic deformations. We evaluate PWAN on
practical point set registration tasks, and show that the proposed PWAN is
robust, scalable and performs more favorably than the state-of-the-art methods.
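The partial matching idea in the abstract can be illustrated on a small discrete example. The sketch below solves partial Wasserstein-1 between two point sets directly as a linear program; this is a toy stand-in for the paper's neural Kantorovich-Rubinstein-dual approximation, and the function name `partial_w1` and the outlier setup are purely illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def partial_w1(x, y, m):
    """Partial Wasserstein-1 between uniform point sets x, y in R^d,
    transporting only a fraction m of the total mass (toy LP sketch,
    not the paper's neural approximation)."""
    n, k = len(x), len(y)
    a = np.full(n, 1.0 / n)          # uniform source masses
    b = np.full(k, 1.0 / k)          # uniform target masses
    M = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # cost matrix
    c = M.ravel()                    # flattened transport plan, row-major
    # Row sums <= a, column sums <= b, total transported mass == m.
    A_ub = np.zeros((n + k, n * k))
    for i in range(n):
        A_ub[i, i * k:(i + 1) * k] = 1.0     # mass leaving source i
    for j in range(k):
        A_ub[n + j, j::k] = 1.0              # mass arriving at target j
    b_ub = np.concatenate([a, b])
    A_eq = np.ones((1, n * k))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[m],
                  bounds=(0, None), method="highs")
    return res.fun

# 16 of 20 source points coincide with targets; 4 are far-away outliers.
rng = np.random.default_rng(0)
y = rng.normal(size=(20, 2))
x = np.vstack([y[:16], y[:4] + 10.0])
full = partial_w1(x, y, m=1.0)   # forced to transport the outliers too
part = partial_w1(x, y, m=0.8)   # free to leave the outliers unmatched
```

With `m=0.8` the LP matches only the 16 overlapping points at zero cost, while `m=1.0` must pay for moving the 4 outliers, which is exactly the robustness-to-outliers argument the abstract makes for partial distribution matching.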
Related papers
- Partial Distribution Matching via Partial Wasserstein Adversarial Networks [35.48994933353969]
This paper studies the problem of distribution matching (DM), which is a fundamental machine learning problem seeking to robustly align two probability distributions.
Our approach is established on a relaxed formulation, called partial distribution matching (PDM), which seeks to match a fraction of the distributions instead of matching them completely.
Experiment results confirm that the proposed PWAN effectively produces highly robust matching results, performing better or on par with the state-of-the-art methods.
arXiv Detail & Related papers (2024-09-16T17:41:45Z)
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration problems and maintains relatively high solution efficiency.
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
- Edge Wasserstein Distance Loss for Oriented Object Detection [30.63435516524413]
We propose a novel oriented regression loss, the Edge Wasserstein Distance (EWD) loss, to alleviate the square-like problem.
Specifically, for the oriented box (OBox) representation, we choose a specially designed distribution whose probability density function is nonzero only over the edges.
arXiv Detail & Related papers (2023-12-12T08:00:40Z)
- Markovian Sliced Wasserstein Distances: Beyond Independent Projections [51.80527230603978]
We introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distance, which imposes a first-order Markov structure on projecting directions.
We compare MSW with previous SW variants in various applications such as flows, color transfer, and deep generative modeling to demonstrate its favorable performance.
arXiv Detail & Related papers (2023-01-10T01:58:15Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Inference for Change Points in High Dimensional Mean Shift Models [10.307668909650449]
We consider the problem of constructing confidence intervals for the locations of change points in a high-dimensional mean shift model.
We develop a locally refitted least squares estimator and obtain component-wise and simultaneous rates of estimation of the underlying change points.
The results are established under a high-dimensional scaling that allows for a diverging number of change points and subexponential errors.
arXiv Detail & Related papers (2021-07-19T20:56:15Z)
- Continuous Regularized Wasserstein Barycenters [51.620781112674024]
We introduce a new dual formulation for the regularized Wasserstein barycenter problem.
We establish strong duality and use the corresponding primal-dual relationship to parametrize the barycenter implicitly using the dual potentials of regularized transport problems.
arXiv Detail & Related papers (2020-08-28T08:28:06Z)
- Schrödinger PCA: On the Duality between Principal Component Analysis and Schrödinger Equation [4.230413425773648]
Principal component analysis (PCA) has achieved great success in unsupervised learning.
In particular, PCA fails for the spatial Gaussian process (GP) model in the undersampling regime.
Conversely, by drawing a connection between PCA and the Schrödinger equation, we can not only address the undersampling challenge but also compute in an efficient and decoupled way.
Our algorithm only requires the variances of features and an estimated correlation length as input; it constructs the corresponding Schrödinger equation and solves it to obtain the energy eigenstates.
arXiv Detail & Related papers (2020-06-08T06:55:29Z)
- Deep Semantic Matching with Foreground Detection and Cycle-Consistency [103.22976097225457]
We address weakly supervised semantic matching based on a deep network.
We explicitly estimate the foreground regions to suppress the effect of background clutter.
We develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent.
arXiv Detail & Related papers (2020-03-31T22:38:09Z)
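Several entries above build on sliced Wasserstein distances. As a point of reference, a minimal Monte Carlo estimator with independent uniform projections (the baseline that the Markovian variant above replaces with correlated projection directions) can be sketched as follows; the function name and parameters are illustrative, not from any of the listed papers:

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, p=2, seed=0):
    """Monte Carlo sliced Wasserstein-p distance between two equal-size
    point clouds in R^d, using independent random projection directions."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px = np.sort(x @ theta.T, axis=0)   # sorted 1-D projections of x
    py = np.sort(y @ theta.T, axis=0)   # sorted 1-D projections of y
    # Closed-form 1-D W_p between sorted samples, averaged over directions.
    return np.mean(np.abs(px - py) ** p) ** (1.0 / p)

a = np.random.default_rng(1).normal(size=(100, 3))
d_same = sliced_wasserstein(a, a)        # identical clouds: distance 0
d_shift = sliced_wasserstein(a, a + 5.0)  # shifted cloud: clearly positive
```

The key property exploited here is that the 1-D Wasserstein distance between empirical distributions reduces to comparing sorted samples, which is what makes slicing cheap relative to solving a full transport problem.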
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.