Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery
- URL: http://arxiv.org/abs/2212.00262v1
- Date: Thu, 1 Dec 2022 04:00:38 GMT
- Title: Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery
- Authors: Yisi Luo, Xile Zhao, Zhemin Li, Michael K. Ng, Deyu Meng
- Abstract summary: Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Extensive experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
- Score: 52.21846313876592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since higher-order tensors are naturally suitable for representing
multi-dimensional data in the real world, e.g., color images and videos, low-rank
tensor representation has become one of the emerging areas in machine learning
and computer vision. However, classical low-rank tensor representations can
only represent data on a finite meshgrid due to their intrinsically discrete
nature, which hinders their potential applicability in many scenarios beyond
meshgrid. To break this barrier, we propose a low-rank tensor function
representation (LRTFR), which can continuously represent data beyond meshgrid
with infinite resolution. Specifically, the suggested tensor function, which
maps an arbitrary coordinate to the corresponding value, can continuously
represent data in an infinite real space. Parallel to discrete tensors, we
develop two fundamental concepts for tensor functions, i.e., the tensor
function rank and low-rank tensor function factorization. We theoretically
justify that both low-rank and smooth regularizations are harmoniously unified
in the LRTFR, which leads to high effectiveness and efficiency for data
continuous representation. Extensive multi-dimensional data recovery
applications arising from image processing (image inpainting and denoising),
machine learning (hyperparameter optimization), and computer graphics (point
cloud upsampling) substantiate the superiority and versatility of our method as
compared with state-of-the-art methods. In particular, the experiments beyond the
original meshgrid resolution (hyperparameter optimization) or even beyond the
meshgrid (point cloud upsampling) validate the favorable performance of our
method for continuous representation.
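As a rough illustration of the idea, the sketch below builds a tiny low-rank tensor function in the spirit of the abstract: each mode gets a factor function mapping a real coordinate to a low-dimensional feature vector, and the factors are combined through a Tucker-style core so the function can be queried at arbitrary off-grid coordinates. The ranks, the random two-layer factor maps, and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
r1, r2, r3 = 4, 4, 4                  # tensor-function ranks per mode (assumed)
core = rng.standard_normal((r1, r2, r3))

def make_factor(rank, width=32):
    """A tiny random two-layer map R -> R^rank standing in for a learned
    factor function; in practice these would be trained on observed data."""
    W1 = rng.standard_normal((width, 1))
    W2 = rng.standard_normal((rank, width))
    return lambda t: W2 @ np.tanh(W1 @ np.array([[t]])).ravel()

f1, f2, f3 = make_factor(r1), make_factor(r2), make_factor(r3)

def tensor_function(x, y, z):
    """Evaluate f(x, y, z) = core x_1 f1(x) x_2 f2(y) x_3 f3(z) at an
    arbitrary real coordinate; no meshgrid is required."""
    return np.einsum("abc,a,b,c->", core, f1(x), f2(y), f3(z))

# Query off-grid coordinates continuously:
print(tensor_function(0.37, -1.2, 5.001))
```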
Related papers
- Irregular Tensor Low-Rank Representation for Hyperspectral Image Representation [71.69331824668954]
Low-rank tensor representation is an important approach to alleviate spectral variations.
Previous low-rank representation methods can only be applied to the regular data cubes.
We propose a novel irregular low-rank representation method that can efficiently model irregular 3D cubes.
arXiv Detail & Related papers (2024-10-24T02:56:22Z)
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Power of $\ell_1$-Norm Regularized Kaczmarz Algorithms for High-Order Tensor Recovery [8.812294191190896]
We propose novel Kaczmarz algorithms for recovering high-order tensors characterized by sparse and/or low-rank structures.
A variety of numerical experiments on both synthetic and real-world datasets demonstrate the effectiveness and significant potential of the proposed methods.
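For context, the following is a minimal sketch of the classical randomized Kaczmarz iteration for a linear system Ax = b; the paper's $\ell_1$-regularized, tensor-structured variants build on this row-projection update, but they are not reproduced here.

```python
import numpy as np

def kaczmarz(A, b, iters=2000, seed=0):
    """Classical randomized Kaczmarz: repeatedly project the iterate onto
    the hyperplane defined by one randomly chosen equation a_i . x = b_i."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)
        a_i = A[i]
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
print(np.linalg.norm(kaczmarz(A, A @ x_true) - x_true))  # should be tiny
```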
arXiv Detail & Related papers (2024-05-14T02:06:53Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
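To make the notion concrete, here is a minimal, generic graph kernel: a vertex-label histogram kernel computed as an inner product of graph feature maps. It is only a stand-in for the kernels the paper plugs into its architecture, not the paper's own construction.

```python
import numpy as np
from collections import Counter

def feature_map(labels, vocab):
    """Map a graph, given by its node labels, to a fixed-length count vector."""
    counts = Counter(labels)
    return np.array([counts[v] for v in vocab], dtype=float)

def graph_kernel(labels_a, labels_b):
    """k(G, G') = <phi(G), phi(G')> over a shared label vocabulary."""
    vocab = sorted(set(labels_a) | set(labels_b))
    return feature_map(labels_a, vocab) @ feature_map(labels_b, vocab)

print(graph_kernel(["C", "C", "O", "H"], ["C", "O", "O"]))  # -> 4.0
```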
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Remarkably, our model can deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties that appear in tensors.
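A hedged sketch of the ingredients named above: a Tucker-format reconstruction from a core tensor and factor matrices, plus an $\ell_1$ penalty on the core to promote sparsity. The ranks, penalty weight, and objective form are illustrative assumptions; the paper's actual model and solver are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, r = 20, 20, 20, 3
G = rng.standard_normal((r, r, r))                    # core tensor
U1, U2, U3 = (rng.standard_normal((d, r)) for d in (I, J, K))

def tucker_reconstruct(G, U1, U2, U3):
    """X = G x_1 U1 x_2 U2 x_3 U3, written as a single einsum."""
    return np.einsum("abc,ia,jb,kc->ijk", G, U1, U2, U3)

def objective(X_obs, G, U1, U2, U3, lam=0.1):
    """Least-squares fit plus an l1 penalty promoting a sparse core."""
    fit = np.linalg.norm(X_obs - tucker_reconstruct(G, U1, U2, U3)) ** 2
    return fit + lam * np.abs(G).sum()

X = tucker_reconstruct(G, U1, U2, U3)
print(X.shape, objective(X, G, U1, U2, U3))
```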
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
- Distributed Non-Negative Tensor Train Decomposition [3.2264685979617655]
High-dimensional data is often represented as multidimensional arrays, a.k.a. tensors.
The presence of latent (not directly observable) structures in the tensor allows a unique representation and compression of the data.
We introduce a distributed non-negative tensor-train decomposition and demonstrate its scalability and compression on synthetic and real-world big datasets.
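For reference, a minimal sketch of the standard (dense, unconstrained) tensor-train format, in which each entry of the tensor is a chain of small matrix products, one per mode. The mode sizes and TT ranks are illustrative; the paper's distributed, non-negative variant is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 8, 9, 10                  # mode sizes (assumed)
r1, r2 = 3, 4                          # TT ranks (assumed)
G1 = rng.standard_normal((n1, 1, r1))  # boundary cores have rank 1 on one side
G2 = rng.standard_normal((n2, r1, r2))
G3 = rng.standard_normal((n3, r2, 1))

def tt_entry(i, j, k):
    """X[i, j, k] = G1[i] @ G2[j] @ G3[k], a chain of small matrix products."""
    return (G1[i] @ G2[j] @ G3[k]).item()

# Full reconstruction for comparison (only feasible for small examples):
X = np.einsum("iar,jrs,ksb->ijk", G1, G2, G3)
print(np.isclose(X[2, 3, 4], tt_entry(2, 3, 4)))  # True
```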
arXiv Detail & Related papers (2020-08-04T05:35:57Z)
- Anomaly Detection with Tensor Networks [2.3895981099137535]
We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space with a dimension exponential in the number of original features.
We produce competitive results on image datasets, despite not exploiting the locality of images.
arXiv Detail & Related papers (2020-06-03T20:41:30Z)
- A Solution for Large Scale Nonlinear Regression with High Rank and Degree at Constant Memory Complexity via Latent Tensor Reconstruction [0.0]
This paper proposes a novel method for learning highly nonlinear, multivariate functions from examples.
Our method takes advantage of the property that continuous functions can be approximated by polynomials, which in turn are representable by tensors.
For learning the models, we present an efficient algorithm that can be implemented in linear time.
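A tiny illustration of the property just quoted: the coefficients of a degree-2 polynomial in a lifted variable form an order-2 tensor, and evaluating the polynomial is a multilinear form; higher degrees give higher-order tensors. This is purely illustrative and not the paper's latent tensor reconstruction model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))       # coefficient tensor (order 2 here)

def poly(x1, x2):
    """Evaluate z^T T z with the lifted variable z = (1, x1, x2), which
    covers all constant, linear, and quadratic terms in x1 and x2."""
    z = np.array([1.0, x1, x2])
    return z @ T @ z

print(poly(0.5, -1.0))
```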
arXiv Detail & Related papers (2020-05-04T14:49:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.