Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis
- URL: http://arxiv.org/abs/2002.05578v5
- Date: Fri, 14 Aug 2020 23:34:16 GMT
- Title: Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis
- Authors: Jung Yeon Park, Kenneth Theo Carr, Stephan Zheng, Yisong Yue, and Rose Yu
- Abstract summary: We develop a novel Multiresolution Tensor Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns.
When applied to two real-world datasets, MRTL demonstrates a 4-5x speedup compared to a fixed-resolution approach.
- Score: 44.89716235936401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient and interpretable spatial analysis is crucial in many fields such
as geology, sports, and climate science. Tensor latent factor models can
describe higher-order correlations for spatial data. However, they are
computationally expensive to train and are sensitive to initialization, leading
to spatially incoherent, uninterpretable results. We develop a novel
Multiresolution Tensor Learning (MRTL) algorithm for efficiently learning
interpretable spatial patterns. MRTL initializes the latent factors from an
approximate full-rank tensor model for improved interpretability and
progressively learns from a coarse resolution to the fine resolution to reduce
computation. We also prove the theoretical convergence and computational
complexity of MRTL. When applied to two real-world datasets, MRTL demonstrates
4~5x speedup compared to a fixed resolution approach while yielding accurate
and interpretable latent factors.
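The coarse-to-fine idea in the abstract can be illustrated with a minimal, hypothetical sketch: fit a spatial weight map on a pooled coarse grid, upsample the solution, and fine-tune at full resolution. This uses a plain spatial regression rather than the paper's tensor latent factor model, and the grid sizes, pooling scheme, and helper names (`upsample`, `train`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def upsample(w, factor=2):
    # Nearest-neighbor upsampling: copy each coarse weight onto a
    # factor x factor block of the finer grid.
    return np.kron(w, np.ones((factor, factor)))

def train(w, X, y, lr=0.1, steps=300):
    # Plain gradient descent on mean squared error over flattened inputs.
    Xf = X.reshape(len(X), -1)
    for _ in range(steps):
        grad = Xf.T @ (Xf @ w.ravel() - y) / len(y)
        w = w - lr * grad.reshape(w.shape)
    return w

rng = np.random.default_rng(0)
# Hypothetical ground-truth 8x8 spatial weight map, constant on 4x4 blocks.
true_w = np.kron(np.array([[1.0, 0.0], [0.0, -1.0]]), np.ones((4, 4))) / 16

X_fine = rng.normal(size=(500, 8, 8))
y = X_fine.reshape(500, -1) @ true_w.ravel() + 0.01 * rng.normal(size=500)

# Coarse stage: sum-pool the inputs to 4x4 so coarse weights stay
# consistent with the fine-resolution model (sum over each 2x2 block).
X_coarse = X_fine.reshape(500, 4, 2, 4, 2).sum(axis=(2, 4))
w = train(np.zeros((4, 4)), X_coarse, y)

# Fine stage: initialize from the upsampled coarse solution and fine-tune.
w = train(upsample(w), X_fine, y)
```

Most optimization steps happen on the 16-parameter coarse problem, and the fine stage starts near the solution, which is the source of the speedup the abstract reports.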
Related papers
- Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics [21.55547541297847]
We study the problem of learning multi-index models in high-dimensions using a two-layer neural network trained with the mean-field Langevin algorithm.
Under mild distributional assumptions, we characterize the effective dimension $d_{\mathrm{eff}}$ that controls both sample and computational complexity.
arXiv Detail & Related papers (2024-08-14T02:13:35Z)
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms to efficiently find solutions to constrained error bounds.
We establish the linear convergence rate of both IHT and RGD.
arXiv Detail & Related papers (2024-06-10T03:51:38Z)
- Hierarchical Neural Operator Transformer with Learnable Frequency-aware Loss Prior for Arbitrary-scale Super-resolution [13.298472586395276]
We present an arbitrary-scale super-resolution (SR) method to enhance the resolution of scientific data.
We conduct extensive experiments on diverse datasets from different domains.
arXiv Detail & Related papers (2024-05-20T17:39:29Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- Estimating Koopman operators with sketching to provably learn large scale dynamical systems [37.18243295790146]
The theory of Koopman operators makes it possible to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems.
We boost the efficiency of different kernel-based Koopman operator estimators using random projections.
We establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency.
arXiv Detail & Related papers (2023-06-07T15:30:03Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on momentum-based variance reduced technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent [8.714458129632158]
Kolmogorov model (KM) is an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables.
We propose a computationally scalable KM learning algorithm, based on the regularized dual optimization combined with enhanced gradient descent (GD) method.
It is shown that the accuracy of logical relation mining for interpretability by using the proposed KM learning algorithm exceeds 80%.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.