HyperNTF: A Hypergraph Regularized Nonnegative Tensor Factorization for
Dimensionality Reduction
- URL: http://arxiv.org/abs/2101.06827v2
- Date: Tue, 26 Jan 2021 16:08:29 GMT
- Title: HyperNTF: A Hypergraph Regularized Nonnegative Tensor Factorization for
Dimensionality Reduction
- Authors: Wanguang Yin, Zhengming Ma, Quanying Liu
- Abstract summary: We propose a novel method, called Hypergraph Regularized Nonnegative Tensor Factorization (HyperNTF).
HyperNTF can preserve nonnegativity in tensor factorization, and uncover the higher-order relationship among the nearest neighborhoods.
Experiments show that HyperNTF robustly outperforms state-of-the-art algorithms in clustering analysis.
- Score: 2.1485350418225244
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Most methods for dimensionality reduction are based on either tensor
representation or local geometry learning. However, tensor-based methods
rely heavily on the assumption of global and multilinear structures in
high-dimensional data, while manifold learning methods suffer from the
out-of-sample problem. In this paper, bridging tensor decomposition and
manifold learning, we propose a novel method, called Hypergraph Regularized
Nonnegative Tensor Factorization (HyperNTF). HyperNTF can preserve
nonnegativity in tensor factorization, and uncover the higher-order
relationship among the nearest neighborhoods. Clustering analysis with HyperNTF
has low computation and storage costs. Experiments on four synthetic datasets
demonstrate a desirable property of the hypergraph in uncovering high-order
correlations to unfold curved manifolds. Moreover, the numerical experiments
on six real datasets suggest that HyperNTF robustly outperforms
state-of-the-art algorithms in clustering analysis.
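As a rough illustration of the core idea (not the paper's actual algorithm), the sketch below builds a k-NN hypergraph Laplacian and uses it to regularize multiplicative nonnegative-factorization updates. For brevity it shows the matrix (order-2) special case rather than a full tensor factorization; all function names, the unit hyperedge weights, and the regularization strength `lam` are illustrative assumptions.

```python
import numpy as np

def knn_hypergraph_laplacian(X, k=3):
    """Build a k-NN hypergraph (one hyperedge per sample, containing the
    sample and its k nearest neighbors) and return the Laplacian
    L = Dv - H W De^{-1} H^T (Zhou-style hypergraph Laplacian)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    H = np.zeros((n, n))                                  # vertices x hyperedges
    for e in range(n):
        nbrs = np.argsort(d2[e])[:k + 1]                  # sample plus k neighbors
        H[nbrs, e] = 1.0
    w = np.ones(n)                                        # unit hyperedge weights
    De = H.sum(0)                                         # hyperedge degrees
    Dv = np.diag(H @ w)                                   # vertex degrees
    return Dv - H @ np.diag(w / De) @ H.T

def hyper_reg_nmf(X, rank, L, lam=0.1, iters=200, seed=0):
    """Multiplicative updates for min ||X - U V^T||^2 + lam * tr(V^T L V),
    splitting L into positive/negative parts to keep factors nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U, V = rng.random((m, rank)), rng.random((n, rank))
    Lp, Lm = np.maximum(L, 0), np.maximum(-L, 0)
    eps = 1e-9
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (Lm @ V)) / (V @ (U.T @ U) + lam * (Lp @ V) + eps)
    return U, V
```

The split `L = Lp - Lm` is the standard trick for graph-regularized NMF: positive parts go to the denominator and negative parts to the numerator, so the multiplicative update never produces negative entries.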
Related papers
- Irregular Tensor Low-Rank Representation for Hyperspectral Image Representation [71.69331824668954]
Low-rank tensor representation is an important approach to alleviating spectral variations.
Previous low-rank representation methods can only be applied to regular data cubes.
We propose a novel irregular low-rank representation method that can efficiently model irregular 3D cubes.
arXiv Detail & Related papers (2024-10-24T02:56:22Z)
- High-Dimensional Tensor Discriminant Analysis with Incomplete Tensors [5.745276598549783]
We introduce a novel approach to tensor classification with incomplete data, framed within high-dimensional linear discriminant analysis.
Our method demonstrates excellent performance in simulations and real data analysis, even with significant proportions of missing data.
arXiv Detail & Related papers (2024-10-18T18:00:16Z)
- Scalable tensor methods for nonuniform hypergraphs [0.18434042562191813]
A recently proposed adjacency tensor is applicable to nonuniform hypergraphs, but is prohibitively costly to form and analyze in practice.
We develop tensor times same vector (TTSV) algorithms which improve complexity from $O(n^r)$ to a low-degree polynomial in $r$.
We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms.
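The TTSV idea above can be sketched in a toy setting. The paper's contribution targets nonuniform hypergraphs; the sketch below, which is only an assumption-laden illustration, uses the simpler 3-uniform case (hyperedges of three distinct vertices) to show the key point: the product can be computed from the edge list without ever forming the $n^3$ adjacency tensor.

```python
import itertools
import numpy as np

def adjacency_tensor(edges, n):
    """Dense order-3 adjacency tensor of a 3-uniform hypergraph:
    A[i,j,k] = 1 for every permutation (i,j,k) of a hyperedge.
    Costs O(n^3) memory, which is what TTSV-style methods avoid."""
    A = np.zeros((n, n, n))
    for e in edges:
        for p in itertools.permutations(e):
            A[p] = 1.0
    return A

def ttsv_dense(A, v):
    """Tensor times same vector in all but one mode:
    b_i = sum_{j,k} A[i,j,k] v_j v_k."""
    return np.einsum('ijk,j,k->i', A, v, v)

def ttsv_edges(edges, n, v):
    """The same product computed directly from the edge list, never
    forming A. For r = 3, each unordered pair of the remaining two
    vertices appears (r-1)! = 2 times among the permutations."""
    b = np.zeros(n)
    for e in edges:
        for i in e:
            j, k = [u for u in e if u != i]
            b[i] += 2.0 * v[j] * v[k]
    return b
```

The edge-list version runs in time proportional to the number of hyperedges rather than $n^3$, which is the spirit (though not the generality) of the TTSV algorithms in the paper.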
arXiv Detail & Related papers (2023-06-30T17:41:58Z)
- Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework built on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
arXiv Detail & Related papers (2023-06-05T03:26:06Z)
- Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating proximal gradient method (CoNCPD-APG).
By integrating low-rank approximation with the proposed CoNCPD-APG method, the proposed algorithm can significantly decrease the computational burden without compromising decomposition quality.
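The uncoupled building block of such methods can be sketched as follows: nonnegative CP decomposition where each factor is updated by a Nesterov-accelerated projected gradient (APG) step. This is a minimal sketch under stated assumptions, not the paper's CoNCPD-APG (which additionally couples factors across several tensors and adds low-rank approximation); function names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (C-order columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product, the CP design matrix."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def ncp_apg(T, rank, iters=100, seed=0):
    """Nonnegative CP via alternating accelerated projected gradient:
    Nesterov extrapolation, a gradient step with step size 1/L, then
    projection onto the nonnegative orthant."""
    rng = np.random.default_rng(seed)
    F = [rng.random((s, rank)) for s in T.shape]
    Fold = [f.copy() for f in F]
    t_old = 1.0
    for _ in range(iters):
        t = (1 + np.sqrt(1 + 4 * t_old ** 2)) / 2
        beta = (t_old - 1) / t
        for n in range(3):
            others = [F[m] for m in range(3) if m != n]
            B = khatri_rao(others[0], others[1])   # matches C-order unfolding
            G = B.T @ B                            # rank x rank Gram matrix
            Lip = np.linalg.norm(G, 2)             # Lipschitz constant of grad
            Y = F[n] + beta * (F[n] - Fold[n])     # Nesterov extrapolation
            grad = Y @ G - unfold(T, n) @ B
            Fold[n] = F[n]
            F[n] = np.maximum(Y - grad / Lip, 0.0) # projected gradient step
        t_old = t
    return F
```

Computing the Gram matrix `G` first keeps each factor update at rank-by-rank cost, which is where the "optimal gradient" efficiency of APG-type schemes comes from.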
arXiv Detail & Related papers (2023-02-10T08:49:36Z)
- Equivariant Hypergraph Diffusion Neural Operators [81.32770440890303]
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data.
This work proposes a new HNN architecture named ED-HNN, which provably represents any continuous equivariant hypergraph diffusion operator.
We evaluate ED-HNN for node classification on nine real-world hypergraph datasets.
arXiv Detail & Related papers (2022-07-14T06:17:00Z)
- High-Order Multilinear Discriminant Analysis via Order-$\textit{n}$ Tensor Eigendecomposition [0.0]
This paper presents a new approach to tensor-based multilinear discriminant analysis, referred to as High-Order Multilinear Discriminant Analysis (HOMLDA).
Our proposed approach provides improved classification performance with respect to the current Tucker decomposition-based supervised learning methods.
arXiv Detail & Related papers (2022-05-18T19:49:54Z)
- Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges linearly, at a rate independent of the condition number of the ground-truth tensor, for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
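The parameter-sharing idea can be sketched concretely. Below is a hedged illustration (not the paper's implementation; names and the logistic activation are assumptions) of a single hidden unit whose weight matrix is constrained to rank R, so a matrix input contributes through mode-wise inner products without vectorization.

```python
import numpy as np

def rank_r_unit(X, W1, W2, b):
    """One rank-R hidden unit on a matrix input X (I x J):
    activation = sigma(b + sum_r w1_r^T X w2_r), i.e. the inner product
    of X with the rank-R weight matrix sum_r w1_r w2_r^T."""
    z = b + np.einsum('ir,ij,jr->', W1, X, W2)
    return 1.0 / (1.0 + np.exp(-z))            # logistic activation

def rank_r_layer(X, params):
    """A layer of K such units; params is a list of (W1, W2, b) triples.
    Parameter count is K*R*(I+J) instead of K*I*J for a dense layer
    on the vectorized input, which is the point of the CP constraint."""
    return np.array([rank_r_unit(X, W1, W2, b) for (W1, W2, b) in params])
```

Because `sum_r w1_r w2_r^T` reconstructs an ordinary dense weight matrix, the unit is exactly a dense unit whose weights are forced to be low-rank: structure is exploited along each data dimension separately.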
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel incremental tangent-space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.