Learning Arithmetic Formulas in the Presence of Noise: A General
Framework and Applications to Unsupervised Learning
- URL: http://arxiv.org/abs/2311.07284v1
- Date: Mon, 13 Nov 2023 12:26:25 GMT
- Title: Learning Arithmetic Formulas in the Presence of Noise: A General
Framework and Applications to Unsupervised Learning
- Authors: Pritam Chandra, Ankit Garg, Neeraj Kayal, Kunal Mittal, Tanmay Sinha
- Abstract summary: We present a framework for designing efficient algorithms for unsupervised learning problems.
Our framework is based on a meta algorithm that learns arithmetic circuits in the presence of noise.
- Score: 4.10375234787249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a general framework for designing efficient algorithms for
unsupervised learning problems, such as mixtures of Gaussians and subspace
clustering. Our framework is based on a meta algorithm that learns arithmetic
circuits in the presence of noise, using lower bounds. This builds upon the
recent work of Garg, Kayal and Saha (FOCS 20), who designed such a framework
for learning arithmetic circuits without any noise. A key ingredient of our
meta algorithm is an efficient algorithm for a novel problem called Robust
Vector Space Decomposition. We show that our meta algorithm works well when
certain matrices have sufficiently large smallest non-zero singular values. We
conjecture that this condition holds for smoothed instances of our problems,
and thus our framework would yield efficient algorithms for these problems in
the smoothed setting.
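The abstract's condition on smallest non-zero singular values can be made concrete with a minimal sketch (not the paper's meta algorithm; the dimensions, noise level, and threshold below are illustrative assumptions): recovering a hidden low-dimensional subspace from noisy samples succeeds exactly when the "signal" singular values stand well clear of the noise floor.

```python
import numpy as np

# Minimal illustration (not the paper's algorithm): recover a hidden
# r-dimensional subspace from noisy samples by keeping only the singular
# directions whose singular values clear a noise-level threshold.  The
# recovery is reliable precisely because the smallest non-zero "signal"
# singular value is much larger than the noise singular values.
rng = np.random.default_rng(1)
d, r, m, sigma = 20, 3, 500, 0.01

basis = np.linalg.qr(rng.standard_normal((d, r)))[0]   # hidden subspace
clean = rng.standard_normal((m, r)) @ basis.T          # points in the subspace
noisy = clean + sigma * rng.standard_normal((m, d))    # noisy observations

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank = int(np.sum(s > 5 * sigma * np.sqrt(m)))         # threshold at the noise scale
recovered = Vt[:rank].T                                # estimated orthonormal basis

# Principal angles between the true and recovered subspaces: the singular
# values of basis.T @ recovered are the cosines of those angles.
overlap = np.linalg.svd(basis.T @ recovered, compute_uv=False)
```

When the noise is small relative to the signal singular values (here about 22 versus 0.3), the threshold recovers the correct rank and the principal angles are nearly zero.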
Related papers
- Resilience-Runtime Tradeoff Relations for Quantum Algorithms [0.7371081631199642]
A leading approach to algorithm design aims to minimize the number of operations in an algorithm's compilation.
We develop a framework to characterize the resilience of an algorithm to perturbative noises.
We show how this framework can be leveraged to identify compilations of an algorithm that are better suited to withstand certain noises.
arXiv Detail & Related papers (2024-08-05T18:31:14Z)
- GBM-based Bregman Proximal Algorithms for Constrained Learning [3.667453772837954]
We adapt GBM for constrained learning tasks within the framework of Bregman proximal algorithms.
We introduce a new Bregman method with a global optimality guarantee when the learning objective functions are convex.
We provide substantial experimental evidence to showcase the effectiveness of the Bregman algorithm framework.
arXiv Detail & Related papers (2023-08-21T14:56:51Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
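For context, a classical (unlearned) CountSketch hashes each row to a uniformly random bucket with a random sign; the learning-based algorithms in the paper optimize these non-zero locations instead of drawing them at random. A minimal sketch of the random baseline (the function name and sizes are illustrative):

```python
import numpy as np

def countsketch(A, k, rng):
    """Compress the n rows of A into k buckets with random signs.

    Each row is hashed to one of k buckets and multiplied by a random
    +/-1 sign; this equals S @ A for a sparse sketch matrix S with
    exactly one non-zero entry per column.
    """
    n = A.shape[0]
    buckets = rng.integers(0, k, size=n)       # location of each non-zero
    signs = rng.choice([-1.0, 1.0], size=n)    # sign of each non-zero
    SA = np.zeros((k, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A) # accumulate signed rows per bucket
    return SA

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
SA = countsketch(A, 50, rng)
# ||SA||_F^2 is an unbiased estimate of ||A||_F^2: the random signs make
# the cross terms cancel in expectation.
```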
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- Dual Algorithmic Reasoning [9.701208207491879]
We propose to learn algorithms by exploiting duality of the underlying algorithmic problem.
We demonstrate that simultaneously learning the dual definition of these optimisation problems allows for better learning.
We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task.
arXiv Detail & Related papers (2023-02-09T08:46:23Z)
- Lifelong Bandit Optimization: No Prior and No Regret [70.94238868711952]
We develop LIBO, an algorithm which adapts to the environment by learning from past experience.
We assume a kernelized structure where the kernel is unknown but shared across all tasks.
Our algorithm can be paired with any kernelized or linear bandit algorithm and guarantees optimal performance.
arXiv Detail & Related papers (2022-10-27T14:48:49Z)
- The CLRS Algorithmic Reasoning Benchmark [28.789225199559834]
Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms.
We propose the CLRS Algorithmic Reasoning Benchmark, covering classical algorithms from the Introduction to Algorithms textbook.
Our benchmark spans a variety of algorithmic reasoning procedures, including sorting, searching, dynamic programming, graph algorithms, string algorithms and geometric algorithms.
arXiv Detail & Related papers (2022-05-31T09:56:44Z)
- Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers [12.09273192079783]
We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification.
We design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors.
arXiv Detail & Related papers (2021-11-09T02:33:36Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer-valued signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
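As background for the search being accelerated (this is the classical best-first tree search, not the paper's learned HATS variant; the function name and candidate set are illustrative): after a QR decomposition, the residual of min_x ||y - Hx||^2 accumulates one non-negative term per level, so partial integer assignments can be expanded cheapest-first, and the first full assignment popped from the frontier is optimal.

```python
import heapq
import numpy as np

def ils_search(H, y, candidates=(-2, -1, 0, 1, 2)):
    """Best-first tree search for min_x ||y - H x||^2 over integer x.

    With H = QR, ||y - Hx||^2 = ||Q^T y - R x||^2, and because R is
    upper triangular the cost decomposes into one term per coordinate,
    filled in from the last coordinate upward.  Costs are non-negative,
    so uniform-cost (A* with zero heuristic) search pops the optimal
    full assignment first.
    """
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    # frontier entries: (accumulated cost, level, partial assignment)
    frontier = [(0.0, 0, ())]
    while frontier:
        cost, lvl, xs = heapq.heappop(frontier)
        if lvl == n:                     # first complete pop is optimal
            return np.array(xs[::-1]), cost
        i = n - 1 - lvl                  # row of R handled at this level
        for v in candidates:
            part = xs + (v,)             # part[k] holds x_{n-1-k}
            resid = z[i] - sum(R[i, n - 1 - k] * part[k] for k in range(lvl + 1))
            heapq.heappush(frontier, (cost + resid ** 2, lvl + 1, part))

rng = np.random.default_rng(7)
H = rng.standard_normal((4, 4))
y = H @ np.array([1, -2, 0, 2]) + 0.01 * rng.standard_normal(4)
x_hat, cost = ils_search(H, y)
```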
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
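The gradient-expectation connection can be checked on a toy graph (a finite-difference sanity check, not the paper's efficient algorithm; the graph and weights are illustrative): by the Matrix-Tree theorem, the partition function Z(w) of an edge-factored spanning-tree model is a cofactor determinant of the weighted Laplacian, and the edge marginal satisfies p(e) = w_e * d(log Z)/d(w_e).

```python
import numpy as np
from itertools import combinations

def laplacian_cofactor(n, w):
    """Weighted graph Laplacian with row/column 0 deleted."""
    L = np.zeros((n, n))
    for (i, j), wij in w.items():
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
    return L[1:, 1:]

def partition(n, w):
    # Matrix-Tree theorem: Z = sum over spanning trees of prod of weights
    return np.linalg.det(laplacian_cofactor(n, w))

# Complete graph on 4 nodes with distinct edge weights.
n = 4
w = {e: 1.0 + 0.5 * k for k, e in enumerate(combinations(range(n), 2))}

# Edge marginal via the gradient identity p(e) = w_e * d(log Z)/d(w_e),
# estimated here by a forward finite difference in log-weight space.
e, eps = (0, 1), 1e-6
w_hi = dict(w); w_hi[e] = w[e] * (1 + eps)
marginal = (np.log(partition(n, w_hi)) - np.log(partition(n, w))) / eps
```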
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
- Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.