Learning Iterative Robust Transformation Synchronization
- URL: http://arxiv.org/abs/2111.00728v1
- Date: Mon, 1 Nov 2021 07:03:14 GMT
- Title: Learning Iterative Robust Transformation Synchronization
- Authors: Zi Jian Yew and Gim Hee Lee
- Abstract summary: We avoid handcrafting robust loss functions and instead propose to use graph neural networks (GNNs) to learn transformation synchronization.
- Score: 71.73273007900717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformation Synchronization is the problem of recovering absolute
transformations from a given set of pairwise relative motions. Despite its
usefulness, the problem remains challenging due to the influence of noisy
and outlier relative motions, and the difficulty of modeling and
suppressing them analytically with high fidelity. In this work, we avoid handcrafting robust
loss functions, and propose to use graph neural networks (GNNs) to learn
transformation synchronization. Unlike previous works which use complicated
multi-stage pipelines, we use an iterative approach where each step consists of
a single weight-shared message passing layer that refines the absolute poses
from the previous iteration by predicting an incremental update in the tangent
space. To reduce the influence of outliers, the messages are weighted before
aggregation. Our iterative approach alleviates the need for an explicit
initialization step and performs well with identity initial poses. Although our
approach is simple, we show that it performs favorably against existing
handcrafted and learned synchronization methods through experiments on both
SO(3) and SE(3) synchronization.
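The iterative scheme described in the abstract can be sketched, for the SO(3) case, with a classical analogue: starting from identity poses, each iteration aggregates tangent-space residuals from the pairwise measurements and applies an incremental update through the exponential map. The uniform message weighting and fixed step size below are illustrative stand-ins for the learned, outlier-aware message weights of the paper's GNN layer; all function names are hypothetical.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (so(3) hat operator)."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def log_so3(R):
    """Logarithm map from SO(3) back to a rotation vector in so(3)."""
    cos_t = np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * w

def synchronize(rel, n, iters=50, step=0.5):
    """Iteratively refine absolute rotations R_i from pairwise measurements
    rel[(a, b)] = R_ab ~ R_a R_b^T, starting from identity poses.
    Each iteration averages tangent-space residuals (messages) and applies
    an incremental left update -- a hand-written analogue of one
    weight-shared message passing step."""
    R = [np.eye(3) for _ in range(n)]
    for _ in range(iters):
        R_next = []
        for i in range(n):
            msgs = []
            for (a, b), Rab in rel.items():
                if a == i:   # neighbour b predicts R_i as R_ab @ R_b
                    msgs.append(log_so3(Rab @ R[b] @ R[i].T))
                elif b == i: # neighbour a predicts R_i as R_ab^T @ R_a
                    msgs.append(log_so3(Rab.T @ R[a] @ R[i].T))
            if msgs:
                upd = step * np.mean(msgs, axis=0)  # uniform weights here; the paper learns them
                R_next.append(exp_so3(upd) @ R[i])
            else:
                R_next.append(R[i])
        R = R_next
    return R
```

In the noiseless case this fixed-point iteration recovers the absolute rotations up to the usual global gauge freedom, which is why accuracy is checked on relative rotations.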
Related papers
- Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates [1.9241821314180372]
One major shortcoming of backpropagation is the interlocking between the forward and backward phases of the algorithm.
We propose a method that parallelises SGD updates across the layers of a model by asynchronously updating them from multiple threads.
We show that this approach yields close to state-of-the-art results while running up to 2.97x faster than Hogwild! scaled on multiple devices.
arXiv Detail & Related papers (2024-10-08T12:32:36Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Symmetrical SyncMap for Imbalanced General Chunking Problems [11.26120401279973]
We show how to create dynamical equations and attractor-repeller points which are stable over the long run.
The main idea is to apply equal updates from negative and positive feedback loops via symmetrical activation.
Our algorithm surpasses or ties other unsupervised state-of-the-art baselines in all 12 imbalanced CGCPs.
arXiv Detail & Related papers (2023-10-16T04:03:36Z)
- Shuffled Autoregression For Motion Interpolation [53.61556200049156]
This work aims to provide a deep-learning solution for the motion interpolation task.
We propose a novel framework, referred to as Shuffled AutoRegression, which extends autoregression to generate in arbitrary (shuffled) order.
We also propose an approach to constructing a particular kind of dependency graph, with three stages assembled into an end-to-end spatial-temporal motion Transformer.
arXiv Detail & Related papers (2023-06-10T07:14:59Z)
- Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks [50.2033914945157]
We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages, including estimating pairwise intensity functions and learning a projection which minimises a notion of intensity reconstruction error.
Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
arXiv Detail & Related papers (2023-06-09T15:38:25Z)
- Rotation Synchronization via Deep Matrix Factorization [24.153207403324917]
We focus on the formulation of rotation synchronization via neural networks.
Inspired by deep matrix completion, we express rotation synchronization in terms of matrix factorization with a deep neural network.
Our formulation exhibits implicit regularization properties and, more importantly, is unsupervised.
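The matrix-factorization view can be illustrated with a shallow, closed-form analogue: stacking the pairwise rotations R_i R_j^T into a 3n x 3n block matrix X gives, in the noiseless case, a rank-3 matrix X = R R^T with R the 3n x 3 stack of absolute rotations. The spectral factorization below is a hand-written stand-in for the deep, unsupervised factorization network of the cited paper; the function name is hypothetical.

```python
import numpy as np

def rank3_synchronize(X, n):
    """Recover absolute rotations from the 3n x 3n block matrix X with
    blocks X_ij ~ R_i R_j^T. Noiseless X has rank 3 and factors as
    X = R R^T; here a rank-3 spectral factorization plays the role the
    deep factorization network plays in the cited paper."""
    vals, vecs = np.linalg.eigh(X)
    # Top-3 eigenpairs give a 3n x 3 factor U with U U^T ~ X.
    U = vecs[:, -3:] * np.sqrt(np.maximum(vals[-3:], 0))
    # The factorization is defined up to a 3x3 orthogonal gauge; flip one
    # column if needed so every block has determinant +1.
    if np.linalg.det(U[:3, :]) < 0:
        U[:, -1] *= -1
    R = []
    for i in range(n):
        B = U[3 * i:3 * i + 3, :]
        # Project the block onto SO(3) via SVD (nearest rotation).
        u, _, vt = np.linalg.svd(B)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(u @ vt))])
        R.append(u @ D @ vt)
    return R
```

The recovered rotations carry an arbitrary global gauge, so correctness is again judged on the relative rotations they induce.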
arXiv Detail & Related papers (2023-05-09T08:46:05Z)
- Pose Uncertainty Aware Movement Synchrony Estimation via Spatial-Temporal Graph Transformer [7.053333608725945]
Movement synchrony reflects the coordination of body movements between interacting dyads.
This paper proposes a skeleton-based graph transformer for movement synchrony estimation.
Our method achieved an overall accuracy of 88.98% and surpassed its counterparts by a wide margin.
arXiv Detail & Related papers (2022-08-01T22:35:32Z)
- Stable and memory-efficient image recovery using monotone operator learning (MOL) [24.975981795360845]
We introduce a monotone deep equilibrium learning framework for large-scale inverse problems in imaging.
The proposed algorithm relies on forward-backward splitting, where each iteration consists of a gradient descent involving the score function and a conjugate gradient algorithm to encourage data consistency.
Experiments show that the proposed scheme can offer improved performance in 3D settings while being stable in the presence of input perturbations.
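The forward-backward structure sketched in this summary can be illustrated on a toy linear inverse problem: each outer iteration takes a gradient step on a smooth regularizer (a crude stand-in for MOL's learned monotone score network) and then runs conjugate gradients on the data-consistency subproblem. The quadratic smoothness prior, step sizes, and function names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def cg_solve(H, b, x0, iters=50):
    """Plain conjugate gradients for the SPD system H x = b."""
    x = x0.copy()
    r = b - H @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < 1e-14:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def forward_backward(A, b, lam=0.05, eta=0.2, outer=50):
    """Forward-backward splitting on min_x ||A x - b||^2 + prior(x):
    a gradient step on a smooth prior (stand-in for the learned score),
    then a CG solve of argmin_x ||A x - b||^2 + lam ||x - z||^2."""
    m, n = A.shape
    x = np.zeros(n)
    H = A.T @ A + lam * np.eye(n)        # SPD system solved by CG each iteration
    D = np.eye(n) - np.eye(n, k=1)       # first-difference smoothness operator
    for _ in range(outer):
        z = x - eta * (D.T @ (D @ x))    # gradient step on 0.5 * ||D x||^2
        x = cg_solve(H, A.T @ b + lam * z, x)
    return x
```

With a well-posed system the iterates settle at a lightly regularized least-squares solution, so the data residual shrinks far below the norm of the measurements.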
arXiv Detail & Related papers (2022-06-06T21:56:11Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- On the Robustness of Multi-View Rotation Averaging [77.09542018140823]
We introduce the $\epsilon$-cycle consistency term into the solver.
We implicitly suppress the negative effect of erroneous measurements by reducing their weights.
Experiment results demonstrate that our proposed approach outperforms the state of the art on various benchmarks.
arXiv Detail & Related papers (2021-02-09T05:47:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.