Symmetrical SyncMap for Imbalanced General Chunking Problems
- URL: http://arxiv.org/abs/2310.10045v1
- Date: Mon, 16 Oct 2023 04:03:36 GMT
- Title: Symmetrical SyncMap for Imbalanced General Chunking Problems
- Authors: Heng Zhang and Danilo Vasconcellos Vargas
- Abstract summary: We show how to create dynamical equations and attractor-repeller points which are stable over the long run.
The main idea is to apply equal updates from negative and positive feedback loops by symmetrical activation.
Our algorithm surpasses or ties other unsupervised state-of-the-art baselines in all 12 imbalanced CGCPs.
- Score: 11.26120401279973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, SyncMap pioneered an approach to learn complex structures from
sequences as well as adapt to any changes in underlying structures. This is
achieved by using only nonlinear dynamical equations inspired by neuron group
behaviors, i.e., without loss functions. Here we propose Symmetrical SyncMap
that goes beyond the original work to show how to create dynamical equations
and attractor-repeller points which are stable over the long run, even dealing
with imbalanced continual general chunking problems (CGCPs). The main idea is
to apply equal updates from negative and positive feedback loops by symmetrical
activation. We then introduce the concept of memory window to allow for more
positive updates. Our algorithm surpasses or ties other unsupervised
state-of-the-art baselines in all 12 imbalanced CGCPs with various
difficulties, including dynamically changing ones. To verify its performance in
real-world scenarios, we conduct experiments on several well-studied structure
learning problems. The proposed method substantially surpasses other methods in
3 out of 4 scenarios, suggesting that symmetrical activation plays a critical
role in uncovering topological structures and even hierarchies encoded in
temporal data.
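The following is a minimal, illustrative sketch (not the authors' released implementation) of how the two ideas above could be realized in code: a memory window that keeps recently activated variables in the positive set, and symmetrical attraction/repulsion updates of equal magnitude. The class name, dimensionality, learning rate, and window size are assumptions made purely for exposition.

```python
import numpy as np

# Minimal sketch of a SyncMap-style update illustrating the abstract's two
# ideas; NOT the authors' implementation. All names and hyper-parameters
# (dim, lr, window) are assumptions for exposition only.
class SymmetricalSyncMapSketch:
    def __init__(self, n_vars, dim=3, lr=0.01, window=5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(-1.0, 1.0, size=(n_vars, dim))  # node coordinates
        self.lr = lr
        self.window = window
        self.history = []  # indices of recently activated variables

    def step(self, active_idx):
        # Memory window: variables seen in the last `window` steps count as
        # positively activated, so infrequent symbols still receive positive updates.
        self.history.append(active_idx)
        self.history = self.history[-self.window:]

        pos = np.unique(self.history)
        neg = np.setdiff1d(np.arange(len(self.w)), pos)
        if len(pos) < 2 or len(neg) < 1:
            return

        centroid = self.w[pos].mean(axis=0)  # centroid of the positive set

        # Symmetrical activation: attraction (positive feedback) and repulsion
        # (negative feedback) use the same step size, so neither loop dominates
        # when chunk frequencies are imbalanced.
        self.w[pos] += self.lr * (centroid - self.w[pos])
        self.w[neg] -= self.lr * (centroid - self.w[neg])

        # Keep coordinates bounded so the attractor-repeller geometry stays stable.
        norms = np.linalg.norm(self.w, axis=1, keepdims=True)
        self.w /= np.maximum(norms, 1.0)
```

After streaming a symbol sequence through `step`, variables that belong to the same chunk drift toward a common attractor, so chunks can be read off by clustering the final coordinates with any off-the-shelf method.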
Related papers
- Hybrid Functional Maps for Crease-Aware Non-Isometric Shape Matching [42.0728900164228]
We propose a novel approach that combines the non-orthogonal extrinsic basis of eigenfunctions of the elastic thin-shell Hessian with the intrinsic basis of Laplace-Beltrami operator (LBO) eigenmodes.
We show extensive evaluations across various supervised and unsupervised settings and demonstrate significant improvements.
arXiv Detail & Related papers (2023-12-06T18:41:01Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
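As a toy illustration of the interpolation step that such stabilization schemes build on (a lookahead-style averaging that is nonexpansive for 0 < λ < 1), here is a hedged sketch; the inner optimizer, step counts, objective, and λ below are assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical lookahead-style linear interpolation: run a few inner optimizer
# steps, then pull the slow iterate part of the way toward the fast one.
def inner_sgd(w, grad_fn, lr=0.1, k=5):
    for _ in range(k):
        w = w - lr * grad_fn(w)
    return w

def interpolation_step(w_slow, grad_fn, lam=0.5):
    w_fast = inner_sgd(w_slow.copy(), grad_fn)
    return w_slow + lam * (w_fast - w_slow)  # linear interpolation toward the fast iterate

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([2.0, -3.0])
for _ in range(20):
    w = interpolation_step(w, grad_fn=lambda x: x)
print(w)  # converges toward the minimizer at the origin
```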
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks [50.2033914945157]
We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages, including estimating pairwise intensity functions and learning a projection which minimises a notion of intensity reconstruction error.
Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
arXiv Detail & Related papers (2023-06-09T15:38:25Z) - Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Temporal Difference Learning with Compressed Updates: Error-Feedback meets Reinforcement Learning [47.904127007515925]
We study a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction.
We prove that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic approximation guarantees as their uncompressed counterparts.
Notably, these are the first finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling.
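To make the error-feedback idea concrete, a schematic sketch in the spirit of this summary is given below; the top-k compressor, feature dimension, and step size are assumptions rather than the paper's exact setup.

```python
import numpy as np

# Schematic compressed TD(0) step with error feedback under linear function
# approximation: compress (update + carried residual), apply the compressed
# direction, and carry forward whatever the compressor discarded.
def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # keep the k largest-magnitude entries
    out[idx] = v[idx]
    return out

def compressed_td0_step(theta, err, phi_s, phi_next, reward,
                        gamma=0.99, alpha=0.05, k=2):
    # Standard TD(0) update direction with a linear value estimate phi @ theta.
    td_error = reward + gamma * phi_next @ theta - phi_s @ theta
    g = td_error * phi_s
    compressed = top_k(g + err, k)   # compress update plus accumulated residual
    err = g + err - compressed       # residual is fed back at the next step
    theta = theta + alpha * compressed
    return theta, err
```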
arXiv Detail & Related papers (2023-01-03T04:09:38Z) - Linearization and Identification of Multiple-Attractors Dynamical System through Laplacian Eigenmaps [8.161497377142584]
We propose a graph-based spectral clustering method that takes advantage of a velocity-augmented kernel to connect data points belonging to the same dynamics.
We prove that there always exists a set of 2-dimensional embedding spaces in which the sub-dynamics are linear, and an n-dimensional embedding in which they are quasi-linear.
We learn a diffeomorphism from the Laplacian embedding space to the original space and show that the Laplacian embedding leads to good reconstruction accuracy and a faster training time.
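Purely as one possible reading of this summary, a velocity-augmented affinity for graph-based spectral clustering might look like the sketch below; the Gaussian product kernel, bandwidths, and use of the unnormalized Laplacian are assumptions, not the paper's definition.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

# Assumed form of a velocity-augmented kernel: affinity is high only when both
# positions AND velocities agree, so trajectories that pass close by but move
# differently are assigned to different sub-dynamics.
def velocity_affinity(X, V, sigma_x=1.0, sigma_v=1.0):
    dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    dv = np.linalg.norm(V[:, None] - V[None, :], axis=-1)
    return np.exp(-(dx / sigma_x) ** 2) * np.exp(-(dv / sigma_v) ** 2)

def spectral_cluster(X, V, n_clusters):
    A = velocity_affinity(X, V)
    L = np.diag(A.sum(axis=1)) - A      # unnormalized graph Laplacian
    _, vecs = eigh(L)                    # eigenvectors in ascending eigenvalue order
    embedding = vecs[:, :n_clusters]     # Laplacian-eigenmap coordinates
    _, labels = kmeans2(embedding, n_clusters, minit='points')
    return labels, embedding
```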
arXiv Detail & Related papers (2022-02-18T12:43:25Z) - Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves state-of-the-art performance in registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z) - Learning Iterative Robust Transformation Synchronization [71.73273007900717]
In this work, we avoid handcrafting robust loss functions, and propose to use graph neural networks (GNNs) to learn transformation synchronization.
arXiv Detail & Related papers (2021-11-01T07:03:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.