TpopT: Efficient Trainable Template Optimization on Low-Dimensional
Manifolds
- URL: http://arxiv.org/abs/2310.10039v1
- Date: Mon, 16 Oct 2023 03:51:13 GMT
- Title: TpopT: Efficient Trainable Template Optimization on Low-Dimensional
Manifolds
- Authors: Jingkai Yan, Shiyu Wang, Xinyu Rain Wei, Jimmy Wang, Zsuzsanna
Márka, Szabolcs Márka, John Wright
- Abstract summary: A family of approaches, exemplified by template matching, aims to cover the search space with a dense template bank.
While simple and highly interpretable, it suffers from poor computational efficiency due to unfavorable scaling in the signal space dimensionality.
We study TpopT as an alternative scalable framework for detecting low-dimensional families of signals while maintaining high interpretability.
- Score: 5.608047449631387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In scientific and engineering scenarios, a recurring task is the detection of
low-dimensional families of signals or patterns. A classic family of
approaches, exemplified by template matching, aims to cover the search space
with a dense template bank. While simple and highly interpretable, it suffers
from poor computational efficiency due to unfavorable scaling in the signal
space dimensionality. In this work, we study TpopT (TemPlate OPTimization) as
an alternative scalable framework for detecting low-dimensional families of
signals which maintains high interpretability. We provide a theoretical
analysis of the convergence of Riemannian gradient descent for TpopT, and prove
that it has a superior dimension scaling to covering. We also propose a
practical TpopT framework for nonparametric signal sets, which incorporates
techniques of embedding and kernel interpolation, and is further configurable
into a trainable network architecture by unrolled optimization. The proposed
trainable TpopT exhibits significantly improved efficiency-accuracy tradeoffs
for gravitational wave detection, where matched filtering is currently a method
of choice. We further illustrate the general applicability of this approach
with experiments on handwritten digit data.
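The contrast between the two detection strategies can be sketched in a few lines of numpy. This is a minimal toy illustration, not the paper's method: the one-parameter sinusoid family, the `template` function, the learning rate, and the finite-difference gradient are all illustrative stand-ins (the paper works with Riemannian gradient descent on general low-dimensional signal manifolds, e.g. gravitational-wave templates).

```python
import numpy as np

# Toy one-parameter signal family: unit-norm sinusoids s(theta)
# indexed by frequency theta. Everything here is a stand-in for
# illustration, not the paper's actual setup.
t = np.linspace(0.0, 1.0, 256)

def template(theta):
    s = np.sin(2 * np.pi * theta * t)
    return s / np.linalg.norm(s)

def matched_filter(x, bank):
    """Dense template bank: score every template, keep the best.
    Cost grows with the number of templates needed to cover the space."""
    scores = bank @ x
    return scores.max(), scores.argmax()

def template_optimization(x, theta0, lr=0.05, steps=100, eps=1e-4):
    """Gradient ascent on theta -> <x, s(theta)>: refine a single
    template from a coarse initialization instead of covering the
    parameter space densely (the TpopT idea, in caricature)."""
    theta = theta0
    for _ in range(steps):
        # finite-difference gradient of the correlation objective
        g = (x @ template(theta + eps) - x @ template(theta - eps)) / (2 * eps)
        theta += lr * g
    return theta, x @ template(theta)

# Noisy observation of a template at frequency 17.3
rng = np.random.default_rng(0)
x = template(17.3) + 0.05 * rng.standard_normal(t.size)

# The bank must cover [10, 25] finely to score well ...
thetas = np.linspace(10, 25, 300)
bank = np.stack([template(th) for th in thetas])
score_mf, idx = matched_filter(x, bank)

# ... while gradient refinement needs only a coarse starting guess.
theta_hat, score_opt = template_optimization(x, theta0=17.0)
```

In this caricature the bank pays for resolution with 300 correlations, while the optimization path spends its budget refining one template; the paper's contribution is analyzing this tradeoff rigorously and making the optimizer trainable via unrolling.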
Related papers
- On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs) [12.810268045479992]
We study the universal approximation and sample complexity of the DiTs score function.
We show that latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.
arXiv Detail & Related papers (2024-07-01T08:34:40Z) - Diffusion Stochastic Optimization for Min-Max Problems [33.73046548872663]
The optimistic gradient method is useful in addressing minimax optimization problems.
Motivated by the observation that the conventional version suffers from the need for a large batch size, we introduce and analyze a new formulation termed Diffusion Stochastic Same-Sample Optimistic Gradient (DSS-OG).
arXiv Detail & Related papers (2024-01-26T01:16:59Z) - Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called FlowDRO, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find the continuous worst-case distribution (also called the Least Favorable Distribution, LFD) and sample from it.
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
arXiv Detail & Related papers (2023-10-30T03:53:31Z) - Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
Undirected Weighted Network (UWN) is commonly found in big data-related applications.
Existing models fail to capture either its intrinsic symmetry or its low data density.
A Proximal Symmetric Non-negative Latent Factor Analysis model is proposed.
arXiv Detail & Related papers (2023-06-06T13:03:24Z) - Laplacian-based Cluster-Contractive t-SNE for High Dimensional Data
Visualization [20.43471678277403]
We propose LaptSNE, a new graph-based dimensionality reduction method based on t-SNE.
Specifically, LaptSNE leverages the eigenvalue information of the graph Laplacian to shrink the potential clusters in the low-dimensional embedding.
We show how to calculate the gradient analytically, which may be of broad interest when considering optimization with a Laplacian-composited objective.
arXiv Detail & Related papers (2022-07-25T14:10:24Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including element-wise missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical
Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic
Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.