Multiple Metric Learning for Structured Data
- URL: http://arxiv.org/abs/2002.05747v1
- Date: Thu, 13 Feb 2020 19:11:32 GMT
- Title: Multiple Metric Learning for Structured Data
- Authors: Nicolo Colombo
- Abstract summary: We address the problem of merging graph and feature-space information while learning a metric from structured data.
We propose a new graph-based technique for optimizing under metric constraints.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of merging graph and feature-space information while
learning a metric from structured data. Existing algorithms tackle the problem
in an asymmetric way, by either extracting vectorized summaries of the graph
structure or adding hard constraints to feature-space algorithms. Following a
different path, we define a metric regression scheme where we train
metric-constrained linear combinations of dissimilarity matrices. The idea is
that the input matrices can be pre-computed dissimilarity measures obtained
from any kind of available data (e.g. node attributes or edge structure). As
the model inputs are distance measures, we do not need to assume the existence
of any underlying feature space. The main challenge is that metric constraints
(especially positive-definiteness and sub-additivity) are not automatically
respected if, for example, the coefficients of the linear combination are
allowed to be negative. Both the positivity and sub-additivity constraints are
linear inequalities, but the computational complexity of imposing them scales
as O(D^3), where D is the size of the input matrices (i.e. the size of the data
set). This quickly becomes prohibitive, even when D is relatively small. We
propose a new graph-based technique for optimizing under such constraints and
show that, in some cases, our approach may reduce the original computational
complexity of the optimization process by one order of magnitude. Unlike
existing methods, our scheme applies to any (possibly non-convex)
metric-constrained objective function.
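To make the setup concrete, here is a minimal NumPy sketch (not the paper's implementation; all function names are invented for illustration) of the two ingredients above: a non-negative linear combination of pre-computed dissimilarity matrices, and a brute-force check of the sub-additivity (triangle-inequality) constraints, whose exhaustive enumeration is exactly the O(D^3) cost mentioned in the abstract.
```python
import numpy as np

def combine_dissimilarities(mats, alpha):
    """Hypothetical helper: form sum_k alpha_k * D_k from pre-computed
    dissimilarity matrices. Non-negative alpha is necessary (though not
    sufficient) for the combination to stay metric-like."""
    alpha = np.asarray(alpha, dtype=float)
    if np.any(alpha < 0):
        raise ValueError("negative coefficients can violate metric constraints")
    return sum(a * D for a, D in zip(alpha, mats))

def triangle_violations(D, tol=1e-9):
    """Count violated sub-additivity constraints D_ij <= D_ik + D_kj.
    Enumerating all (i, j, k) triples is the O(D^3) cost the abstract
    refers to."""
    sums = D[:, :, None] + D[None, :, :]   # sums[i, k, j] = D[i, k] + D[k, j]
    best = sums.min(axis=1)                # tightest bound over intermediate k
    return int(np.sum(D > best + tol))

# toy usage: a true metric plus a squared distance (not sub-additive)
x = np.random.default_rng(0).normal(size=50)
D1 = np.abs(x[:, None] - x[None, :])       # 1-D distances: a metric
D2 = D1 ** 2                               # squared distances break sub-additivity
print(triangle_violations(combine_dissimilarities([D1, D2], [1.0, 0.0])))  # 0
print(triangle_violations(D2) > 0)         # True, in general
```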
Related papers
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning with numerical optimization algorithms to infer the matrix structure and guide the optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Nonlinear Feature Aggregation: Two Algorithms driven by Theory [45.3190496371625]
Real-world machine learning applications are characterized by a huge number of features, leading to computational and memory issues.
We propose a dimensionality reduction algorithm (NonLinCFA) which aggregates non-linear transformations of features with a generic aggregation function.
We also test the algorithms on synthetic and real-world datasets, performing regression and classification tasks and showing competitive performance.
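The summary above only names the ingredients, so the following toy Python sketch is a guess at the general shape of the idea, not the paper's NonLinCFA: each group of features is passed through a non-linear transformation and collapsed by a generic aggregation function (the grouping, transform, and aggregator here are all assumptions).
```python
import numpy as np

def aggregate_features(X, groups, transform=np.tanh, agg=np.mean):
    """Replace each group of columns by one feature: a generic aggregation
    (here the mean) of a non-linear transformation (here tanh) of the group.
    Toy illustration only; NonLinCFA chooses groups and guarantees differently."""
    return np.stack([agg(transform(X[:, g]), axis=1) for g in groups], axis=1)

X = np.random.default_rng(1).normal(size=(100, 6))
Z = aggregate_features(X, groups=[[0, 1, 2], [3, 4, 5]])
print(Z.shape)  # (100, 2): six features reduced to two aggregated ones
```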
arXiv Detail & Related papers (2023-06-19T19:57:33Z)
- A Deep Learning algorithm to accelerate Algebraic Multigrid methods in Finite Element solvers of 3D elliptic PDEs [0.0]
We introduce a novel Deep Learning algorithm that minimizes the computational cost of the Algebraic multigrid method when used as a finite element solver.
We show experimentally that the pooling successfully reduces the computational cost of processing a large sparse matrix and preserves the features needed for the regression task at hand.
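As a rough stand-in for the pooling step described above (the paper's actual pooling operator is not specified in this summary, so the block-sum below is an assumption), one can map an arbitrarily large sparse matrix onto a fixed-size grid that a downstream regressor can consume:
```python
import numpy as np
from scipy import sparse

def pool_sparse(A, out=32):
    """Pool an n-by-n sparse matrix onto a fixed out-by-out grid by summing
    absolute values per block, giving a constant-size dense summary that a
    regressor (e.g. a small CNN) can process cheaply. Assumed operator, not
    the paper's exact pooling."""
    A = sparse.coo_matrix(A)
    n = A.shape[0]
    P = np.zeros((out, out))
    np.add.at(P, ((A.row * out) // n, (A.col * out) // n), np.abs(A.data))
    return P

A = sparse.random(10_000, 10_000, density=1e-4, random_state=0)
print(pool_sparse(A).shape)  # (32, 32) regardless of the input size
```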
arXiv Detail & Related papers (2023-04-21T09:18:56Z)
- Interpretable Linear Dimensionality Reduction based on Bias-Variance Analysis [45.3190496371625]
We propose a principled dimensionality reduction approach that maintains the interpretability of the resulting features.
In this way, all features are considered, the dimensionality is reduced and the interpretability is preserved.
arXiv Detail & Related papers (2023-03-26T14:30:38Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
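The simplest setting where linearized optimal transport is exact is one dimension: the Wasserstein-2 distance between equal-size empirical measures equals the Euclidean distance between their sorted-sample vectors (scaled by 1/sqrt(n)). The sketch below illustrates that flavor of the idea; it is not the LOT Wassmap algorithm itself, which handles higher-dimensional measures via transport maps to a reference.
```python
import numpy as np

def quantile_embedding(samples):
    """Embed each 1-D empirical measure (one row of samples) as its sorted
    sample vector / sqrt(n). Euclidean distances between embeddings then equal
    the Wasserstein-2 distances exactly, with no pairwise OT solves."""
    E = np.sort(np.asarray(samples, dtype=float), axis=1)
    return E / np.sqrt(E.shape[1])

rng = np.random.default_rng(2)
shifts = rng.uniform(-3.0, 3.0, size=(20, 1))     # hidden 1-D structure
data = rng.normal(loc=shifts, size=(20, 200))     # 20 measures, 200 samples each
Z = quantile_embedding(data)
# Any Euclidean method (PCA, MDS, ...) applied to Z now "sees" Wasserstein
# geometry; pairwise Wasserstein computations are never performed.
```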
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- Towards Neural Sparse Linear Solvers [0.0]
We propose neural sparse linear solvers, a deep learning framework for learning approximate solvers for sparse symmetric linear systems.
Our method relies on representing a sparse symmetric linear system as an undirected weighted graph.
We test sparse linear solvers on static linear analysis problems from structural engineering.
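A minimal sketch of that representation (the node and edge feature choices below are assumptions, not necessarily the paper's): node i carries the right-hand side b_i and the diagonal entry A_ii, and each non-zero off-diagonal A_ij becomes one undirected weighted edge.
```python
import numpy as np
from scipy import sparse

def system_to_graph(A, b):
    """Turn a sparse symmetric system Ax = b into an undirected weighted graph:
    node i carries (b_i, A_ii); each non-zero A_ij (i < j) is one weighted edge.
    Feature choices are assumptions for illustration."""
    A = sparse.coo_matrix(A)
    keep = A.row < A.col                         # one copy of each undirected edge
    edges = np.stack([A.row[keep], A.col[keep]], axis=1)
    weights = A.data[keep]
    node_feats = np.stack([b, A.diagonal()], axis=1)
    return node_feats, edges, weights

A = sparse.random(100, 100, density=0.05, random_state=0)
A = (A + A.T) * 0.5                              # make it symmetric
feats, edges, w = system_to_graph(A, np.ones(100))
print(feats.shape, edges.shape, w.shape)
```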
arXiv Detail & Related papers (2022-03-14T09:17:02Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
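For readers unfamiliar with algorithm unfolding, the generic pattern is sketched below as unrolled gradient descent on a least-squares objective, one "layer" per iteration with its own step size; the paper's network unrolls a different, GDPA-based projection-free algorithm, so this is an illustration of the pattern only.
```python
import numpy as np

def unrolled_gd(A, b, step_sizes):
    """T iterations of gradient descent on ||Ax - b||^2, written as T 'layers'
    with one step size each. In an unrolled network these per-layer parameters
    are trained end-to-end; here they are fixed for illustration."""
    x = np.zeros(A.shape[1])
    for eta in step_sizes:                       # each loop body = one layer
        x = x - eta * (A.T @ (A @ x - b))        # layer: one gradient step
    return x

A = np.random.default_rng(3).normal(size=(30, 10))
b = A @ np.ones(10)
x = unrolled_gd(A, b, step_sizes=[0.01] * 20)
print(np.linalg.norm(A @ x - b))                 # residual after 20 "layers"
```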
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformations, as in the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
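The standard convex and concave envelopes of a bilinear monomial x*y on a box (the McCormick envelopes) are the kind of building block such lower bounds rest on; a minimal sketch follows (illustrative only, not the paper's exact trilinear/bilinear relaxation).
```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """McCormick envelopes of the bilinear monomial x*y on [xl, xu] x [yl, yu]:
    the max of the two affine under-estimators and the min of the two affine
    over-estimators."""
    under = max(xl * y + yl * x - xl * yl,
                xu * y + yu * x - xu * yu)
    over = min(xu * y + yl * x - xu * yl,
               xl * y + yu * x - xl * yu)
    return under, over

lo, hi = mccormick_bounds(0.3, -0.2, xl=0.0, xu=1.0, yl=-1.0, yu=1.0)
assert lo <= 0.3 * -0.2 <= hi                    # envelopes bracket x*y
```
Branching shrinks the box, which tightens these envelopes; that is why a BnB scheme that branches only over the transformation variables can still converge.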
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining the reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
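The exact objective is not spelled out in this summary, but a common form of "reconstruction error plus $l_{2,p}$ regularization" looks as follows (a sketch under that assumption; rows of W with small l_2 norm mark features to discard).
```python
import numpy as np

def l2p_objective(X, W, lam=1.0, p=0.5):
    """||X - X W W^T||_F^2 + lam * sum_i ||W_i,:||_2^p  (assumed form).
    With 0 < p <= 1 the penalty drives whole rows of W to zero, so rows with
    small l_2 norm indicate features to drop."""
    recon = np.linalg.norm(X - X @ W @ W.T, ord="fro") ** 2
    penalty = np.sum(np.linalg.norm(W, axis=1) ** p)
    return recon + lam * penalty

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 8))
W = np.linalg.qr(rng.normal(size=(8, 3)))[0]     # orthonormal 8x3 projection
print(l2p_objective(X, W))
```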
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.