Partially Observed Dynamic Tensor Response Regression
- URL: http://arxiv.org/abs/2002.09735v3
- Date: Thu, 13 May 2021 23:30:56 GMT
- Title: Partially Observed Dynamic Tensor Response Regression
- Authors: Jie Zhou and Will Wei Sun and Jingfei Zhang and Lexin Li
- Abstract summary: In modern data science, dynamic tensor data is prevalent in numerous applications.
We develop a regression model with a partially observed dynamic tensor as the response and external covariates as the predictor.
We illustrate the efficacy of our proposed method using simulations, and two real applications.
- Score: 17.930417764563106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In modern data science, dynamic tensor data is prevalent in numerous
applications. An important task is to characterize the relationship between
such dynamic tensor and external covariates. However, the tensor data is often
only partially observed, rendering many existing methods inapplicable. In this
article, we develop a regression model with partially observed dynamic tensor
as the response and external covariates as the predictor. We introduce the
low-rank, sparsity and fusion structures on the regression coefficient tensor,
and consider a loss function projected over the observed entries. We develop an
efficient non-convex alternating updating algorithm, and derive the
finite-sample error bound of the actual estimator from each step of our
optimization algorithm. Unobserved entries in the tensor response impose
serious challenges. As a result, our proposal differs considerably in terms of
estimation algorithm, regularity conditions, as well as theoretical properties,
compared to the existing tensor completion or tensor response regression
solutions. We illustrate the efficacy of our proposed method using simulations,
and two real applications, a neuroimaging dementia study and a digital
advertising study.
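The loss projected over the observed entries can be illustrated with a minimal sketch: a linear model relating covariates X to a tensor response Y, with the squared error masked to the observed entries and soft-thresholding for sparsity. The dimensions, variable names, and plain proximal gradient loop below are invented for the example; this is not the authors' alternating algorithm, and the low-rank and fusion structures are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p, q = 50, 3, 6, 4                 # samples, covariates, response dims (invented)
X = rng.normal(size=(n, d))              # external covariates
B_true = rng.normal(size=(d, p, q))      # ground-truth coefficient tensor (toy)
Y = np.einsum('nd,dpq->npq', X, B_true) + 0.1 * rng.normal(size=(n, p, q))
mask = rng.random((n, p, q)) < 0.7       # Omega: indicator of observed entries

def projected_loss(B):
    # squared error evaluated only over the observed entries P_Omega
    resid = np.where(mask, Y - np.einsum('nd,dpq->npq', X, B), 0.0)
    return 0.5 * np.sum(resid ** 2)

# plain proximal gradient descent; soft-thresholding imposes entrywise sparsity
B = np.zeros((d, p, q))
lr, lam = 1e-3, 0.01
for _ in range(200):
    resid = np.where(mask, np.einsum('nd,dpq->npq', X, B) - Y, 0.0)
    B -= lr * np.einsum('nd,npq->dpq', X, resid)        # gradient of projected loss
    B = np.sign(B) * np.maximum(np.abs(B) - lr * lam, 0.0)
```

The key point the sketch captures is that unobserved entries contribute neither to the loss nor to the gradient, which is what separates this setting from fully observed tensor response regression.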
Related papers
- Factor Augmented Tensor-on-Tensor Neural Networks [3.0040661953201475]
We propose a Factor Augmented Tensor-on-Tensor Neural Network (FATTNN) that integrates tensor factor models into deep neural networks.
We show that our proposed algorithms achieve substantial increases in prediction accuracy and significant reductions in computational time.
arXiv Detail & Related papers (2024-05-30T01:56:49Z) - On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
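Assuming the standard definition of the unhinged loss, l(y, f) = 1 - y*f for labels y in {-1, +1}, a short sketch shows why it is analytically convenient: the loss is linear in the margin, so its gradient with respect to the prediction is simply -y, independent of f.

```python
import numpy as np

def unhinged_loss(y, f):
    # linear in the margin y*f; gradient w.r.t. f is -y, which is what
    # makes closed-form training dynamics tractable
    return 1.0 - y * f

y = np.array([1.0, -1.0, 1.0])
f = np.array([0.5, -2.0, -0.3])
losses = unhinged_loss(y, f)
```

Note the loss can go negative (it is unbounded below), which is part of why the cited paper's analysis of its dynamics is nontrivial.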
arXiv Detail & Related papers (2023-12-13T02:11:07Z) - Handling The Non-Smooth Challenge in Tensor SVD: A Multi-Objective Tensor Recovery Framework [15.16222081389267]
We introduce a novel tensor recovery model with a learnable tensor nuclear norm to address the challenge of non-smooth changes in tensor data.
We develop a new optimization algorithm named the Alternating Proximal Multiplier Method (APMM) to iteratively solve the proposed tensor completion model.
In addition, we propose a multi-objective tensor recovery framework based on APMM to efficiently explore the correlations of tensor data across its various dimensions.
arXiv Detail & Related papers (2023-11-23T12:16:33Z) - Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model simultaneously capturing the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
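The weighted tensor nuclear norm in that result builds on the t-SVD-based tensor nuclear norm. Here is a minimal unweighted sketch for a 3-way tensor: FFT along the third mode, sum the singular values of each frontal slice, and average over that mode. The paper's weighting scheme and its graph-smoothness connection are not reproduced here.

```python
import numpy as np

def tensor_nuclear_norm(T):
    # t-SVD tensor nuclear norm of a 3-way tensor (unweighted sketch):
    # transform frontal slices to the Fourier domain, sum singular values
    n3 = T.shape[2]
    Tf = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(n3):
        total += np.linalg.svd(Tf[:, :, k], compute_uv=False).sum()
    return total / n3

rng = np.random.default_rng(1)
T = rng.normal(size=(5, 4, 3))
norm_T = tensor_nuclear_norm(T)
```

Like the matrix nuclear norm, this quantity is absolutely homogeneous and vanishes only for the zero tensor, which is what makes it usable as a convex low-rank surrogate in completion models.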
arXiv Detail & Related papers (2023-10-04T02:55:10Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
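The idea of a low-rank tensor function can be sketched as a CP-style factorization whose factors are continuous functions of one coordinate each, so the "tensor" can be evaluated off the mesh grid at arbitrary coordinates. The toy factor functions below are invented for illustration; LRTFR itself learns the factors (e.g., with neural networks).

```python
import numpy as np

R = 2  # hypothetical tensor-function rank
f1 = [np.sin, np.cos]                          # factor functions of x
f2 = [lambda y: y, lambda y: y ** 2]           # factor functions of y
f3 = [np.exp, lambda z: 1.0 / (1.0 + z ** 2)]  # factor functions of z

def tensor_fn(x, y, z):
    # continuous "entry" at arbitrary real coordinates: sum of R rank-1
    # separable terms f1_r(x) * f2_r(y) * f3_r(z)
    return sum(f1[r](x) * f2[r](y) * f3[r](z) for r in range(R))

val = tensor_fn(0.3, 0.5, 1.2)  # off-grid evaluation, not tied to any sampling grid
```

Sampling `tensor_fn` on any finite grid yields an ordinary low-rank tensor; the continuous representation is what allows recovery "beyond meshgrid with infinite resolution".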
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
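The truncated Schatten p-norm from the title can be sketched for a single matrix: it takes the p-th powers of the singular values beyond the t largest, so minimizing it penalizes only the tail of the spectrum and approximates rank more tightly than the nuclear norm. The helper name is hypothetical; the paper applies the norm within a tensor imputation model.

```python
import numpy as np

def truncated_schatten_p(M, t, p):
    # sum of sigma_i^p over the singular values beyond the t largest;
    # t = 0, p = 1 recovers the ordinary nuclear norm
    s = np.linalg.svd(M, compute_uv=False)  # sorted in descending order
    return np.sum(s[t:] ** p)

M = np.diag([3.0, 2.0, 1.0])
tail = truncated_schatten_p(M, 1, 1.0)  # skips the leading singular value 3
```

Leaving the top-t singular values unpenalized is what lets the dominant structure of the traffic tensor pass through while the small, noise-like components are shrunk.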
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
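As background for recovering Tucker factors, here is a sketch of HOSVD, the classical way to extract Tucker factors from a *fully* observed tensor; the cited paper instead recovers the factors directly from highly incomplete measurements with a scaled nonconvex method, which this illustration does not attempt.

```python
import numpy as np

def unfold(T, mode):
    # mode-k matricization: move the chosen axis first, flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # factors: leading left singular vectors of each unfolding
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # core: project T onto the factor subspaces, mode by mode
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_to_tensor(core, factors):
    # multiply the core back by the factors to reconstruct the tensor
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# a toy tensor with exact multilinear rank (2, 2, 2) is recovered exactly
rng = np.random.default_rng(2)
G = rng.normal(size=(2, 2, 2))
Us = [np.linalg.qr(rng.normal(size=(4, 2)))[0] for _ in range(3)]
T = tucker_to_tensor(G, Us)
core, factors = hosvd(T, (2, 2, 2))
err = np.linalg.norm(tucker_to_tensor(core, factors) - T)
```

For a tensor whose multilinear rank matches `ranks`, this reconstruction is exact; the hard part the paper addresses is achieving comparable recovery when most entries or measurements are missing.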
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that heavy tails commonly arise in the parameters from multiplicative noise due to variance in the stochastic gradients.
A detailed analysis describes how key factors, including step size, batch size, and data, affect this behaviour, and experiments on state-of-the-art neural network models exhibit similar results.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.