Accelerated Nonnegative Tensor Completion via Integer Programming
- URL: http://arxiv.org/abs/2211.15770v1
- Date: Mon, 28 Nov 2022 21:00:25 GMT
- Title: Accelerated Nonnegative Tensor Completion via Integer Programming
- Authors: Wenhao Pan, Anil Aswani and Chen Chen
- Abstract summary: We present an approach to nonnegative tensor completion based on integer programming.
We explore several variants that can maintain the same theoretical guarantees as the algorithm, but offer potentially faster computation.
- Score: 7.3149416054553065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of tensor completion has applications in healthcare, computer
vision, and other domains. However, past approaches to tensor completion have
faced a tension in that they either have polynomial-time computation but
require exponentially more samples than the information-theoretic rate, or they
use fewer samples but require solving NP-hard problems for which there are no
known practical algorithms. A recent approach, based on integer programming,
resolves this tension for nonnegative tensor completion. It achieves the
information-theoretic sample complexity rate and deploys the Blended
Conditional Gradients algorithm, which requires a linear (in numerical
tolerance) number of oracle steps to converge to the global optimum. The
tradeoff in this approach is that, in the worst case, the oracle step requires
solving an integer linear program. Despite this theoretical limitation,
numerical experiments show that this algorithm can, on certain instances, scale
up to 100 million entries while running on a personal computer. The goal of
this paper is to further enhance this algorithm, with the intention to expand
both the breadth and scale of instances that can be solved. We explore several
variants that can maintain the same theoretical guarantees as the algorithm,
but offer potentially faster computation. We consider different data
structures, acceleration of gradient descent steps, and the use of the Blended
Pairwise Conditional Gradients algorithm. We describe the original approach and
these variants, and conduct numerical experiments in order to explore various
tradeoffs in these algorithmic design choices.
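To make the abstract's "ILP as oracle step" idea concrete, here is a minimal, self-contained sketch: a plain Frank-Wolfe loop with a toy least-squares objective over the 0-1 hypercube (standing in for the paper's 0-1 polytope), where the linear minimization oracle is solved with SciPy's `milp`. This is an illustrative sketch under those assumptions only, not the paper's Blended (Pairwise) Conditional Gradients implementation, and the problem sizes and objective are invented for demonstration.

```python
# Sketch of a conditional-gradient loop whose oracle step is an integer linear program.
# NOT the paper's algorithm: toy objective f(x) = 0.5*||Ax - b||^2, 0-1 hypercube feasible set.
import numpy as np
from scipy.optimize import milp, Bounds

rng = np.random.default_rng(0)
m, n = 30, 12
A = rng.standard_normal((m, n))
b = A @ rng.integers(0, 2, size=n).astype(float)   # planted 0-1 solution

def grad(x):
    """Gradient of f(x) = 0.5 * ||A x - b||^2."""
    return A.T @ (A @ x - b)

def lmo(c):
    """Linear minimization oracle: argmin_{v in {0,1}^n} c @ v, posed as an ILP.
    (For the hypercube this reduces to thresholding; the ILP call mirrors the
    worst-case oracle step described in the abstract.)"""
    res = milp(c=c,
               integrality=np.ones(n),      # all variables integer
               bounds=Bounds(lb=0, ub=1))   # 0-1 box
    return res.x

x = np.zeros(n)                             # start at a vertex
for t in range(200):
    g = grad(x)
    v = lmo(g)                              # oracle step (ILP in the worst case)
    fw_gap = g @ (x - v)                    # Frank-Wolfe duality gap, upper-bounds suboptimality
    if fw_gap <= 1e-6:
        break
    d = v - x
    Ad = A @ d                              # exact line search for the quadratic, clipped to [0, 1]
    gamma = np.clip(-(A @ x - b) @ Ad / (Ad @ Ad + 1e-12), 0.0, 1.0)
    x = x + gamma * d

print("final residual:", np.linalg.norm(A @ x - b))
```

Roughly speaking, the blended and pairwise variants studied in the paper try to invoke this potentially expensive oracle less often, for example by reusing previously discovered vertices for cheaper descent steps, whereas the vanilla loop above calls it at every iteration.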
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem in which the task is to recover an unknown signal.
Our proposed approach avoids the need for a computationally expensive spectral method, instead using a simple gradient step while remaining robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Tensor Completion via Integer Optimization [7.813563137863005]
The main challenge with the tensor completion problem is a fundamental tension between computation power and the information-theoretic sample complexity rate.
Past approaches either achieve the information-theoretic rate but lack practical algorithms to compute the corresponding solution, or admit polynomial-time algorithms that require exponentially more samples.
This paper develops a novel tensor completion algorithm that resolves this tension by achieving both provable convergence (in numerical tolerance) in a linear number of oracle steps and the information-theoretic rate.
arXiv Detail & Related papers (2024-02-06T21:44:07Z) - When can you trust feature selection? -- I: A condition-based analysis
of LASSO and generalised hardness of approximation [49.1574468325115]
We show that no (randomised) algorithm can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input.
For ill-posed inputs, the algorithm runs forever and hence never produces a wrong answer.
For any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer.
arXiv Detail & Related papers (2023-12-18T18:29:01Z) - Quantum Sparse Coding [5.130440339897477]
We develop a quantum-inspired algorithm for sparse coding.
The emergence of quantum computers and Ising machines can potentially lead to more accurate estimations.
We conduct numerical experiments with simulated data on LightSolver's quantum-inspired digital platform.
arXiv Detail & Related papers (2022-09-08T13:00:30Z) - On efficient algorithms for computing near-best polynomial
approximations to high-dimensional, Hilbert-valued functions from limited
samples [1.0650780147044159]
Sparse approximation has become indispensable for approximating smooth, high-dimensional functions from limited samples.
This paper introduces algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic and physical discretization errors.
arXiv Detail & Related papers (2022-03-25T20:56:07Z) - Nonnegative Tensor Completion via Integer Optimization [5.932152752097064]
We prove that our algorithm converges in a linear (in numerical tolerance) number of oracle steps, while achieving the information-theoretic rate.
Because the norm is defined using a 0-1 polytope, we can use integer linear programming to solve linear separation problems over the polytope.
arXiv Detail & Related papers (2021-11-08T15:43:19Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models; a toy brute-force instance of this problem is sketched after the list of related papers below.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only a single data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Refined bounds for algorithm configuration: The knife-edge of dual class
approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z) - Learning to Accelerate Heuristic Searching for Large-Scale Maximum
Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, due to either intolerable running time or excessive computational resources.
We propose NeuSearcher, which leverages knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z) - Stochastic Optimization for Regularized Wasserstein Estimators [10.194798773447879]
We introduce an algorithm to solve a regularized version of the problem of computing Wasserstein estimators, using a stochastic gradient approach with a time per step that is sublinear in the natural dimensions of the problem.
We show that this algorithm can be extended to other tasks, including estimation of Wasserstein barycenters.
arXiv Detail & Related papers (2020-02-20T12:04:05Z)
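As referenced in the tree-search entry above, the following toy sketch makes the integer least-squares problem concrete: recover an integer vector x from y = H x + noise by exhaustive search, whose cost grows exponentially with the dimension, which is why tree-search methods such as the cited HATS approach prune the search. This is an illustrative brute-force baseline under assumed problem sizes, not the HATS algorithm.

```python
# Brute-force integer least-squares on a tiny instance: argmin_x ||y - H x|| over a finite alphabet.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, alphabet = 6, (0, 1, 2, 3)                        # small dimension, finite integer alphabet
H = rng.standard_normal((8, n))
x_true = rng.choice(alphabet, size=n)
y = H @ x_true + 0.05 * rng.standard_normal(8)

best, best_cost = None, np.inf
for cand in itertools.product(alphabet, repeat=n):   # |alphabet|**n candidates: exponential blow-up
    cost = np.linalg.norm(y - H @ np.asarray(cand))
    if cost < best_cost:
        best, best_cost = np.asarray(cand), cost

print("true:     ", x_true)
print("recovered:", best)
```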
This list is automatically generated from the titles and abstracts of the papers in this site.