Wasserstein Nonnegative Tensor Factorization with Manifold Regularization
- URL: http://arxiv.org/abs/2401.01842v1
- Date: Wed, 3 Jan 2024 17:20:27 GMT
- Title: Wasserstein Nonnegative Tensor Factorization with Manifold Regularization
- Authors: Jianyu Wang, Linruize Tang
- Abstract summary: We introduce Wasserstein manifold nonnegative tensor factorization (WMNTF).
We use the Wasserstein distance (a.k.a. Earth Mover's distance or Optimal Transport distance) as a metric and add a graph regularizer to a latent factor.
Experimental results demonstrate the effectiveness of the proposed method compared with other NMF and NTF methods.
- Score: 14.845504084471527
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Nonnegative tensor factorization (NTF) has become an important tool for
feature extraction and part-based representation with preserved intrinsic
structure information from nonnegative high-order data. However, the original
NTF methods utilize the Euclidean distance or Kullback-Leibler divergence as the
loss function, which treats each feature equally and thus neglects the side
information of features. To utilize the correlation information of features
and manifold information of samples, we introduce Wasserstein manifold
nonnegative tensor factorization (WMNTF), which minimizes the Wasserstein
distance between the distribution of input tensorial data and the distribution
of the reconstruction. Although some Wasserstein-distance-based methods have
been proposed for nonnegative matrix factorization (NMF), they ignore the
spatial structure information of higher-order data. We use Wasserstein distance
(a.k.a. Earth Mover's distance or Optimal Transport distance) as a metric and
add a graph regularizer to a latent factor. Experimental results demonstrate
the effectiveness of the proposed method compared with other NMF and NTF
methods.
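To make the objective concrete, here is a minimal NumPy sketch (not the authors' implementation) of the two ingredients named in the abstract: an entropic (Sinkhorn) approximation of the Wasserstein distance between each data column and its reconstruction, plus a graph regularizer tr(V^T L V) on a latent factor V. The tensor is assumed matricized along one mode; the ground-cost matrix M, the weights eps and lam, and all function names are illustrative assumptions.

```python
# Sketch of a Wasserstein + graph-regularized factorization objective.
# Assumes: X_mat is a (features x samples) matricization of the data tensor
# with strictly positive columns, M is a (features x features) ground-cost
# matrix, L is a (samples x samples) graph Laplacian. All names are assumed.
import numpy as np

def sinkhorn_loss(p, q, M, eps=0.05, n_iter=200):
    """Entropic OT cost <T, M> between histograms p and q (ground cost M)."""
    K = np.exp(-M / eps)                    # Gibbs kernel
    u, v = np.ones_like(p), np.ones_like(q)
    for _ in range(n_iter):                 # Sinkhorn fixed-point iterations
        u = p / (K @ v)
        v = q / (K.T @ u)
    T = u[:, None] * K * v[None, :]         # approximate transport plan
    return float(np.sum(T * M))

def wmntf_objective(X_mat, W, V, L, M, lam=0.1):
    """Wasserstein reconstruction loss plus graph regularizer tr(V^T L V)."""
    R = W @ V.T                             # low-rank reconstruction (features x samples)
    loss = sum(
        sinkhorn_loss(X_mat[:, j] / X_mat[:, j].sum(),   # column as a histogram
                      R[:, j] / R[:, j].sum(), M)
        for j in range(X_mat.shape[1])
    )
    return loss + lam * np.trace(V.T @ L @ V)
```

In the tensor setting the same loss would be evaluated on a mode-n matricization, with W formed from the factors of the remaining modes (e.g., via a Khatri-Rao product in a CP model); this sketch only illustrates the shape of the objective, not the update rules.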
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Coseparable Nonnegative Tensor Factorization With T-CUR Decomposition [2.013220890731494]
Nonnegative Matrix Factorization (NMF) is an important unsupervised learning method to extract meaningful features from data.
In this work, we provide an alternating selection method to select the coseparable core.
The results demonstrate the efficiency of coseparable NTF when compared to coseparable NMF.
arXiv Detail & Related papers (2024-01-30T09:22:37Z) - Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
DNNs suffer from either infinite or zero gradients caused by the non-differentiable discrete functions used in training.
arXiv Detail & Related papers (2023-02-07T10:51:53Z) - Mutual Wasserstein Discrepancy Minimization for Sequential
Recommendation [82.0801585843835]
We propose a novel self-supervised learning framework based on Mutual WasserStein discrepancy minimization (MStein) for sequential recommendation.
We also propose a novel contrastive learning loss based on Wasserstein Discrepancy Measurement.
arXiv Detail & Related papers (2023-01-28T13:38:48Z) - Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z) - Robust Manifold Nonnegative Tucker Factorization for Tensor Data
Representation [44.845291873747335]
Nonnegative Tucker Factorization (NTF) minimizes the Euclidean distance or Kullback-Leibler divergence between the original data and its low-rank approximation.
NTF suffers from rotational ambiguity: solutions with and without rotation transformations are equivalent in the sense of yielding the maximum likelihood.
We propose three Robust Manifold NTF algorithms to handle outliers by incorporating structural knowledge about the outliers.
arXiv Detail & Related papers (2022-11-08T01:16:21Z) - Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z) - Hard-label Manifolds: Unexpected Advantages of Query Efficiency for
Finding On-manifold Adversarial Examples [67.23103682776049]
Recent zeroth order hard-label attacks on image classification models have shown comparable performance to their first-order, gradient-level alternatives.
It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.
We propose an information-theoretic argument based on a noisy manifold distance oracle, which leaks manifold information through the adversary's gradient estimate.
arXiv Detail & Related papers (2021-03-04T20:53:06Z) - SWIFT: Scalable Wasserstein Factorization for Sparse Nonnegative Tensors [42.154795547748165]
We introduce SWIFT, which minimizes the Wasserstein distance between the distribution of the input tensor and that of the reconstruction.
SWIFT achieves up to 9.65% and 11.31% relative improvement over baselines for downstream prediction tasks.
arXiv Detail & Related papers (2020-10-08T16:05:59Z) - Improving Nonparametric Density Estimation with Tensor Decompositions [14.917420021212912]
Nonparametric density estimators often perform well on low dimensional data, but suffer when applied to higher dimensional data.
This paper investigates whether the rate improvements obtained under simplified dependence assumptions can be extended to other such assumptions.
We prove that restricting estimation to low-rank nonnegative PARAFAC or Tucker decompositions removes the dimensionality exponent on bin width rates for multidimensional histograms (a toy sketch of this idea follows this list).
arXiv Detail & Related papers (2020-10-06T01:39:09Z) - Low-rank Characteristic Tensor Density Estimation Part I: Foundations [38.05393186002834]
We propose a novel approach that builds upon tensor factorization tools.
In order to circumvent the curse of dimensionality, we introduce a low-rank model of this characteristic tensor.
We demonstrate the very promising performance of the proposed method using several measured datasets.
arXiv Detail & Related papers (2020-08-27T18:06:19Z)
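As referenced in the histogram entry above, here is a toy sketch (not code from any of these papers) of the low-rank histogram idea: fit a nonnegative rank-R CP (PARAFAC) model to a multidimensional histogram, so the density is represented by a few per-dimension factors instead of B^3 free bin heights. The bin count B, the rank, and the TensorLy calls are assumptions about a recent TensorLy release.

```python
# Toy sketch: low-rank nonnegative PARAFAC fit to a 3-D histogram estimator.
# Assumes a recent TensorLy (non_negative_parafac returns a CPTensor).
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=5000)  # toy samples

B = 16                                            # bins per dimension (assumed)
H, _ = np.histogramdd(X, bins=B, density=True)    # raw B^3 histogram

cp = non_negative_parafac(tl.tensor(H), rank=4)   # nonnegative CP fit (rank assumed)
H_lowrank = tl.cp_to_tensor(cp)                   # smoothed, low-rank density estimate

err = np.linalg.norm(H - H_lowrank) / np.linalg.norm(H)
print(f"relative reconstruction error: {err:.3f}")
```

The point of the construction is that the low-rank model couples the dimensions through a small number of nonnegative factors, which is what removes the dimensionality exponent from the bin-width rates claimed in that entry.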
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.