Learning Halfspaces With Membership Queries
- URL: http://arxiv.org/abs/2012.10985v1
- Date: Sun, 20 Dec 2020 18:02:47 GMT
- Title: Learning Halfspaces With Membership Queries
- Authors: Ori Kelner
- Abstract summary: In some cases, active learning can yield an exponential gain in the number of samples the algorithm needs to see.
We show that the algorithm works well in practice, and significantly outperforms uncertainty sampling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning is a subfield of machine learning, in which the learning
algorithm is allowed to choose the data from which it learns. In some cases, it
has been shown that active learning can yield an exponential gain in the number
of samples the algorithm needs to see, in order to reach generalization error
$\leq \epsilon$. In this work we study the problem of learning halfspaces with
membership queries. In the membership query scenario, we allow the learning
algorithm to ask for the label of every sample in the input space. We suggest a
new algorithm for this problem, and prove it achieves a near optimal label
complexity in some cases. We also show that the algorithm works well in
practice, and significantly outperforms uncertainty sampling.
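The exponential gain mentioned in the abstract is easiest to see in one dimension. The sketch below is not the paper's algorithm; it is a minimal illustration, under simplified assumptions, of why membership queries help: to learn a 1-D threshold halfspace $f(x) = \mathrm{sign}(x - t)$ on $[0, 1]$, passive learning needs on the order of $1/\epsilon$ labeled samples, while binary search over the input space (querying any point we choose) needs only about $\log_2(1/\epsilon)$ queries. The function and oracle names are illustrative, not from the paper.

```python
def learn_threshold(query, eps):
    """Binary-search for a hidden threshold t using membership queries.

    query(x) -> +1 if x >= t else -1 (the membership oracle).
    Returns an estimate of t accurate to within eps, and the query count.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    # Invariant: t always lies in [lo, hi]; each query halves the interval.
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if query(mid) >= 0:   # mid is labeled positive, so t <= mid
            hi = mid
        else:                 # mid is labeled negative, so t > mid
            lo = mid
    return (lo + hi) / 2, queries

# Usage: hidden threshold t = 0.73, target accuracy eps = 1e-6.
t = 0.73
oracle = lambda x: 1 if x >= t else -1
t_hat, n = learn_threshold(oracle, 1e-6)
print(abs(t_hat - t) <= 1e-6, n)  # accurate after ~20 queries, not ~10**6 samples
```

Uncertainty sampling, the baseline the paper compares against, instead queries the unlabeled pool point closest to the current decision boundary; the membership-query setting is strictly more powerful because the learner may synthesize any query point rather than choosing from a fixed pool.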
Related papers
- LLMs as Probabilistic Minimally Adequate Teachers for DFA Learning [11.037017229299607]
The emergence of intelligence in large language models (LLMs) has inspired investigations into their integration into automata learning.
This paper introduces the probabilistic Minimally Adequate Teacher (pMAT) formulation.
We develop techniques to improve answer accuracy and ensure the correctness of the learned automata.
arXiv Detail & Related papers (2024-08-06T07:12:09Z)
- Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z)
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z)
- Learning Hidden Markov Models Using Conditional Samples [72.20944611510198]
This paper is concerned with the computational complexity of learning the Hidden Markov Model (HMM).
In this paper, we consider an interactive access model, in which the algorithm can query for samples from the conditional distributions of the HMMs.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z)
- Teaching an Active Learner with Contrastive Examples [35.926575235046634]
We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher.
We investigate an efficient teaching algorithm that adaptively picks contrastive examples.
We derive strong performance guarantees for our algorithm based on two problem-dependent parameters.
arXiv Detail & Related papers (2021-10-28T05:00:55Z)
- Provable Lifelong Learning of Representations [21.440845049501778]
We propose a provable lifelong learning algorithm that maintains and refines the internal feature representation.
We prove that for any desired accuracy on all tasks, the dimension of the representation remains close to that of the underlying representation.
arXiv Detail & Related papers (2021-10-27T00:41:23Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm, employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input / output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
- Active Imitation Learning with Noisy Guidance [6.832341432995627]
Imitation learning algorithms provide state-of-the-art results on many structured prediction tasks.
Such algorithms assume training-time access to an expert that can provide the optimal action at any queried state.
We consider an active learning setting in which the learning algorithm has additional access to a much cheaper noisy heuristic that provides noisy guidance.
arXiv Detail & Related papers (2020-05-26T15:35:46Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.