Functional Bayesian Tucker Decomposition for Continuous-indexed Tensor Data
- URL: http://arxiv.org/abs/2311.04829v2
- Date: Tue, 19 Mar 2024 01:39:19 GMT
- Title: Functional Bayesian Tucker Decomposition for Continuous-indexed Tensor Data
- Authors: Shikai Fang, Xin Yu, Zheng Wang, Shibo Li, Mike Kirby, Shandian Zhe
- Abstract summary: We propose Functional Bayesian Tucker Decomposition (FunBaT) to generalize Tucker decomposition to continuous-indexed tensor data.
We treat the continuous-indexed data as the interaction between the Tucker core and a group of latent functions.
An efficient inference algorithm is developed for scalable posterior approximation based on advanced message-passing techniques.
- Score: 32.19122007191261
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tucker decomposition is a powerful tensor model to handle multi-aspect data. It demonstrates the low-rank property by decomposing the grid-structured data as interactions between a core tensor and a set of object representations (factors). A fundamental assumption of such decomposition is that there are finite objects in each aspect or mode, corresponding to discrete indexes of data entries. However, real-world data is often not naturally posed in this setting. For example, geographic data is represented as continuous indexes of latitude and longitude coordinates, and cannot fit tensor models directly. To generalize Tucker decomposition to such scenarios, we propose Functional Bayesian Tucker Decomposition (FunBaT). We treat the continuous-indexed data as the interaction between the Tucker core and a group of latent functions. We use Gaussian processes (GP) as functional priors to model the latent functions. Then, we convert each GP into a state-space prior by constructing an equivalent stochastic differential equation (SDE) to reduce computational cost. An efficient inference algorithm is developed for scalable posterior approximation based on advanced message-passing techniques. The advantage of our method is shown in both synthetic data and several real-world applications. We release the code of FunBaT at \url{https://github.com/xuangu-fang/Functional-Bayesian-Tucker-Decomposition}.
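The GP-to-SDE conversion mentioned in the abstract is a standard state-space construction. Below is a minimal illustrative sketch (not FunBaT's implementation, which is in the linked repository) of how a Matérn-3/2 GP over one continuous index becomes a linear SDE that can be filtered in time linear in the number of points; all variable names and parameter values here are our own assumptions.

```python
import numpy as np
from scipy.linalg import expm

# A Matern-3/2 GP is equivalent to a linear SDE (standard construction):
#   k(tau) = sigma^2 * (1 + lam*tau) * exp(-lam*tau),  lam = sqrt(3) / lengthscale,
# with state x(t) = [f(t), f'(t)] and stationary covariance P_inf.
sigma, lengthscale, noise_var = 1.0, 0.5, 0.1
lam = np.sqrt(3.0) / lengthscale
F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])   # SDE drift matrix
H = np.array([[1.0, 0.0]])                          # we observe f(t) only
P_inf = np.diag([sigma**2, lam**2 * sigma**2])      # stationary state covariance

# Noisy observations at irregular continuous indexes.
rng = np.random.default_rng(0)
t = np.sort(rng.random(200))
y = np.sin(2 * np.pi * t) + np.sqrt(noise_var) * rng.normal(size=t.size)

# Kalman filtering over the state-space form: cost is linear in the number
# of points, versus cubic for a naive GP solve.
m, P, prev = np.zeros(2), P_inf.copy(), 0.0
for ti, yi in zip(t, y):
    A = expm(F * (ti - prev))                 # transition over the (irregular) gap
    Q = P_inf - A @ P_inf @ A.T               # matched process-noise covariance
    m, P = A @ m, A @ P @ A.T + Q             # predict
    S = (H @ P @ H.T).item() + noise_var      # innovation variance
    K = (P @ H.T) / S                         # Kalman gain, shape (2, 1)
    m = m + K[:, 0] * (yi - (H @ m).item())   # update mean
    P = P - K @ H @ P                         # update covariance
    prev = ti
print("filtered mean at the last index:", m[0])
```

The same trick applies per latent function, which is what makes message-passing inference over the Tucker core tractable at scale.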
Related papers
- Functional-Edged Network Modeling [5.858447612884839]
We transform the adjacency matrix into a functional adjacency tensor, introducing an additional dimension dedicated to function representation.
To deal with irregular observations of the functional edges, we formulate model inference as a tensor completion problem.
We also derive several theorems to show the desirable properties of the functional-edged network model.
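As a generic illustration of the tensor-completion formulation above (not this paper's estimator), here is a minimal masked CP-style completion by gradient descent; the rank, step size, and synthetic observation mask are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 15, 15, 10, 3                    # nodes x nodes x function dim, CP rank

# Ground-truth low-rank tensor and an irregular observation mask.
U, V, W = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
T = np.einsum('ir,jr,kr->ijk', U, V, W)
mask = rng.random((I, J, K)) < 0.3            # ~30% of entries observed

# Gradient descent on the masked squared error of a CP factorization.
A, B, C = (0.1 * rng.normal(size=(n, R)) for n in (I, J, K))
lr = 0.01
for it in range(3000):
    E = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)  # residual on observed entries
    A -= lr * np.einsum('ijk,jr,kr->ir', E, B, C)
    B -= lr * np.einsum('ijk,ir,kr->jr', E, A, C)
    C -= lr * np.einsum('ijk,ir,jr->kr', E, A, B)

rec = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm((1 - mask) * (rec - T)) / np.linalg.norm((1 - mask) * T)
print(f"relative error on held-out entries: {err:.3f}")
```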
arXiv Detail & Related papers (2024-03-30T02:23:01Z) - Fast and interpretable Support Vector Classification based on the truncated ANOVA decomposition [0.0]
Support Vector Machines (SVMs) are an important tool for performing classification on scattered data.
We propose solving SVMs in primal form using feature maps based on trigonometric functions or wavelets.
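A generic sketch of the "primal SVM on trigonometric feature maps" idea follows (the paper's truncated-ANOVA construction is more structured than this); the simple per-coordinate Fourier map below is our simplification.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def trig_features(X, n_freq=3):
    """Map each coordinate to low-order Fourier features cos/sin(2*pi*k*x),
    keeping the model linear (primal form) in an explicit feature space."""
    feats = [np.ones((X.shape[0], 1))]
    for k in range(1, n_freq + 1):
        feats += [np.cos(2 * np.pi * k * X), np.sin(2 * np.pi * k * X)]
    return np.hstack(feats)

# Synthetic scattered 2-D data with a nonlinear decision boundary.
rng = np.random.default_rng(0)
X = rng.random((500, 2))
y = (np.sin(2 * np.pi * X[:, 0]) + np.cos(2 * np.pi * X[:, 1]) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LinearSVC(C=1.0, dual=False).fit(trig_features(Xtr), ytr)  # primal linear SVM
print("test accuracy:", clf.score(trig_features(Xte), yte))
```

Because the feature map is explicit, training and prediction avoid kernel matrices entirely, which is where both the speed and the interpretability come from.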
arXiv Detail & Related papers (2024-02-04T10:27:42Z) - Efficient Nonparametric Tensor Decomposition for Binary and Count Data [27.02813234958821]
We propose ENTED, an Efficient Nonparametric TEnsor Decomposition framework for binary and count tensors.
arXiv Detail & Related papers (2024-01-15T14:27:03Z) - Streaming Factor Trajectory Learning for Temporal Tensor Decomposition [33.18423605559094]
We propose Streaming Factor Trajectory Learning (SFTL) for temporal tensor decomposition.
We use Gaussian processes (GPs) to model the trajectory of factors so as to flexibly estimate their temporal evolution.
We have shown the advantage of SFTL in both synthetic tasks and real-world applications.
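To make the "GP over factor trajectories" idea concrete, here is a toy sketch (ours, not the SFTL algorithm): each object's latent factor is drawn as a GP function of time, and entries are composed by a CP-style product of the time-varying factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, R = 5, 4, 2
t = np.linspace(0, 1, 50)                 # observation timestamps

def gp_draw(t, lengthscale=0.3, n_samples=1):
    """Draw sample paths from a zero-mean GP with an RBF kernel."""
    d = t[:, None] - t[None, :]
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-8 * np.eye(t.size)
    return rng.multivariate_normal(np.zeros(t.size), K, size=n_samples)

# Each factor coordinate evolves as its own GP trajectory over time.
U = gp_draw(t, n_samples=n_users * R).reshape(n_users, R, t.size)
V = gp_draw(t, n_samples=n_items * R).reshape(n_items, R, t.size)

# Entry (i, j) at time index s is the inner product of time-varying factors.
i, j, s = 2, 1, 25
value = U[i, :, s] @ V[j, :, s]
print(f"entry ({i},{j}) at t={t[s]:.2f}: {value:.3f}")
```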
arXiv Detail & Related papers (2023-10-25T21:58:52Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
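The low-rank tensor function idea can be sketched in a few lines (our toy parametrization: each mode's factor function is a small fixed Fourier-feature model, whereas LRTFR learns its factor representations): the function can be queried at arbitrary continuous coordinates, beyond any meshgrid.

```python
import numpy as np

rng = np.random.default_rng(0)
R, n_feat = 4, 8   # tensor-function rank, Fourier features per mode

# One factor function per mode and rank: f_r(x) = w_r . phi(x), with a fixed
# random Fourier feature map phi (our stand-in for learned factor networks).
freqs = rng.normal(size=(3, n_feat))             # one frequency bank per mode
weights = rng.normal(size=(3, R, n_feat)) / np.sqrt(n_feat)

def factor(mode, coords):
    """Evaluate all R factor functions of one mode at continuous coordinates."""
    phi = np.cos(np.outer(coords, freqs[mode]))  # (len(coords), n_feat)
    return phi @ weights[mode].T                 # (len(coords), R)

def tensor_fn(x, y, z):
    """Rank-R tensor function f(x,y,z) = sum_r u_r(x) v_r(y) w_r(z)."""
    return np.einsum('ir,jr,kr->ijk', factor(0, x), factor(1, y), factor(2, z))

# Query on an irregular, off-grid set of continuous indexes.
x, y, z = rng.random(5), rng.random(7), rng.random(3)
print(tensor_fn(x, y, z).shape)   # (5, 7, 3)
```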
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
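A minimal sketch of the continuous-time Markov chain corruption underlying such models (ours; the paper's rate matrix and estimator details differ): a uniform-jump rate matrix Q gives closed-form transition kernels P(t) = expm(Q t).

```python
import numpy as np
from scipy.linalg import expm

S = 4                                        # number of discrete states
# Uniform-jump rate matrix: leave each state at rate 1, land uniformly elsewhere.
Q = np.full((S, S), 1.0 / (S - 1))
np.fill_diagonal(Q, -1.0)

def corrupt(x0, t, rng):
    """Sample x_t from the CTMC kernel P(t) = expm(Q t), row-indexed by x_0."""
    P_t = expm(Q * t)                        # transition kernel at time t
    return np.array([rng.choice(S, p=P_t[x]) for x in x0])

rng = np.random.default_rng(0)
x0 = rng.integers(0, S, size=10)             # clean discrete data
for t in (0.1, 1.0, 10.0):
    print(t, corrupt(x0, t, rng))            # converges to the uniform distribution
```

The reverse-time process denoises by another continuous-time Markov chain whose rates are learned from these conditional marginals.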
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Nonparametric Factor Trajectory Learning for Dynamic Tensor Decomposition [20.55025648415664]
We propose NONparametric FActor Trajectory learning for dynamic tensor decomposition (NONFAT).
We use a second-level GP to sample the entry values and to capture the temporal relationship between the entities.
We have shown the advantage of our method in several real-world applications.
arXiv Detail & Related papers (2022-07-06T05:33:00Z) - Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU), a new parametric activation function that incorporates neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
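The summary does not spell out GReLU's exact parametrization, so the following is only a loose illustration of "an activation modulated by neighborhood information": a node-adaptive ReLU whose positive slope is predicted from aggregated neighbor features. The functional form is entirely our guess.

```python
import numpy as np

def graph_adaptive_relu(X, A_hat, w):
    """Illustrative node-adaptive ReLU (our guess, not the paper's exact form):
    each node's positive slope depends on its aggregated neighborhood features.
    X: (n, d) node features, A_hat: (n, n) normalized adjacency, w: (d,) params."""
    alpha = 1.0 / (1.0 + np.exp(-(A_hat @ X) @ w))   # per-node slope in (0, 1)
    return np.maximum(X, 0.0) * alpha[:, None]

rng = np.random.default_rng(0)
n, d = 6, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T) + np.eye(n)                   # symmetric, with self-loops
A_hat = A / A.sum(axis=1, keepdims=True)             # row-normalized adjacency
X = rng.normal(size=(n, d))
print(graph_adaptive_relu(X, A_hat, rng.normal(size=d)))
```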
arXiv Detail & Related papers (2022-02-13T10:54:59Z) - Model Fusion with Kullback-Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
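Under a mean-field Gaussian assumption, posterior fusion has a simple closed form; the product-of-Gaussians sketch below is only an illustration and ignores the cross-model parameter alignment that the paper also handles.

```python
import numpy as np

def fuse_gaussians(mus, varis):
    """Fuse Gaussian posteriors N(mu_i, var_i) by multiplying their densities
    and renormalizing (product of experts): precisions add, and the fused mean
    is the precision-weighted average. Mean-field sketch only."""
    precisions = 1.0 / np.asarray(varis)
    fused_var = 1.0 / precisions.sum(axis=0)
    fused_mu = fused_var * (precisions * np.asarray(mus)).sum(axis=0)
    return fused_mu, fused_var

# Posteriors over the same scalar parameter from three heterogeneous datasets.
mus = [0.9, 1.1, 1.4]
varis = [0.5, 0.2, 1.0]
print(fuse_gaussians(mus, varis))   # fused mean sits nearest the most confident source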
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the exponential cost of explicitly learning a parameter for every interaction, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
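The core trick can be sketched in a few lines (ours, simplified): keep the weight tensor over per-feature maps in CP form, so prediction reduces to a product of small inner products instead of touching an exponentially large parameter tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R, m = 5, 3, 4          # number of features, CP rank, per-feature map dimension

def phi(x_d):
    """Per-feature map to an m-dimensional vector (here: polynomial features)."""
    return x_d ** np.arange(m)

# CP factors stand in for an m^D weight tensor that is never materialized.
W = rng.normal(size=(D, R, m)) / np.sqrt(m)

def predict(x):
    """f(x) = sum_r prod_d <W[d, r], phi(x_d)>: linear in the implicit tensor,
    computed in O(D*R*m) time instead of O(m^D)."""
    inner = np.einsum('drm,dm->dr', W, np.stack([phi(xd) for xd in x]))
    return inner.prod(axis=0).sum()

x = rng.random(D)
print(predict(x))
```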
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.