Online Learning and Disambiguations of Partial Concept Classes
- URL: http://arxiv.org/abs/2303.17578v1
- Date: Thu, 30 Mar 2023 17:46:50 GMT
- Title: Online Learning and Disambiguations of Partial Concept Classes
- Authors: Tsun-Ming Cheung and Hamed Hatami and Pooya Hatami and Kaave Hosseini
- Abstract summary: In a recent article, Alon, Hanneke, Holzman, and Moran introduced a unifying framework to study the learnability of classes of partial concepts, asking whether learnability of a partial concept class is always inherited from the learnability of some extension of it to a total concept class.
They showed this is not the case for PAC learning but left the problem open for the stronger notion of online learnability.
We resolve this problem by constructing a class of partial concepts that is online learnable, but no extension of it to a class of total concepts is online learnable.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a recent article, Alon, Hanneke, Holzman, and Moran (FOCS '21) introduced
a unifying framework to study the learnability of classes of partial concepts.
One of the central questions studied in their work is whether the learnability
of a partial concept class is always inherited from the learnability of some
"extension" of it to a total concept class.
They showed this is not the case for PAC learning but left the problem open
for the stronger notion of online learnability.
We resolve this problem by constructing a class of partial concepts that is
online learnable, but no extension of it to a class of total concepts is online
learnable (or even PAC learnable).
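As a toy illustration of the objects in the abstract (not the construction from the paper), a partial concept over a finite domain can be modeled as a function taking values in {0, 1, ⊥}, where ⊥ marks points on which the concept is undefined; a total extension fills in every undefined point. A minimal Python sketch, with the domain, names, and example concept all chosen purely for illustration:

```python
from itertools import product

# Toy domain; a partial concept maps each point to 0, 1, or None,
# where None means "undefined at this point".
DOMAIN = [0, 1, 2]

def extends(total, partial):
    """True if the total concept agrees with the partial concept
    wherever the partial concept is defined."""
    return all(partial[x] is None or total[x] == partial[x] for x in DOMAIN)

def total_extensions(partial):
    """Enumerate all total concepts extending a partial concept:
    one per assignment of 0/1 to the undefined points."""
    undef = [x for x in DOMAIN if partial[x] is None]
    for bits in product([0, 1], repeat=len(undef)):
        total = dict(partial)
        total.update(zip(undef, bits))
        yield total

# A partial concept undefined at the single point 1 has exactly
# two total extensions, and both agree with it where it is defined.
p = {0: 1, 1: None, 2: 0}
exts = list(total_extensions(p))
assert len(exts) == 2
assert all(extends(t, p) for t in exts)
```

The paper's result concerns whole classes of such partial concepts: even when the partial class is online learnable, it can happen that no way of completing all of its members to total concepts yields an online (or even PAC) learnable class.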
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Ramsey Theorems for Trees and a General 'Private Learning Implies Online Learning' Theorem [26.292576184028924]
This work continues the investigation of the link between differentially private (DP) learning and online learning.
We show that for general classification tasks, DP learnability implies online learnability.
arXiv Detail & Related papers (2024-07-10T15:43:30Z)
- UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI [50.61495097098296]
We revisit the paradigm in which unlearning is used for Large Language Models (LLMs).
We introduce a concept of ununlearning, where unlearned knowledge gets reintroduced in-context.
We argue that content filtering for impermissible knowledge will be required and even exact unlearning schemes are not enough for effective content regulation.
arXiv Detail & Related papers (2024-06-27T10:24:35Z)
- MODL: Multilearner Online Deep Learning [23.86544389731734]
Existing work focuses almost exclusively on exploring pure deep learning solutions.
We propose a different paradigm, based on a hybrid multilearner approach.
We show that this approach achieves state-of-the-art results on common online learning datasets.
arXiv Detail & Related papers (2024-05-28T15:34:33Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning-unlearning.
We provide space-efficient ticketed learning-unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- COPEN: Probing Conceptual Knowledge in Pre-trained Language Models [60.10147136876669]
Conceptual knowledge is fundamental to human cognition and knowledge bases.
Existing knowledge probing works only focus on factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge.
We design three tasks to probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts.
For the tasks, we collect and annotate 24k data instances covering 393 concepts, forming COPEN, a COnceptual knowledge Probing bENchmark.
arXiv Detail & Related papers (2022-11-08T08:18:06Z)
- Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes [31.2676304636432]
We study the problem of multiclass classification with a bounded number of different labels $k$, in the realizable setting.
We extend the traditional PAC model to a) distribution-dependent learning rates, and b) learning rates under data-dependent assumptions.
arXiv Detail & Related papers (2022-10-05T14:36:27Z)
- A Characterization of Multiclass Learnability [18.38631912121182]
We characterize multiclass PAC learnability through the DS dimension, a dimension defined by Daniely and Shalev-Shwartz (2014).
In the list learning setting, instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions.
Our second main result concerns the Natarajan dimension, which has been a central candidate for characterizing multiclass learnability.
arXiv Detail & Related papers (2022-03-03T07:41:54Z)
- A Theory of PAC Learnability of Partial Concept Classes [30.772106555607458]
We extend the theory of PAC learning in a way that allows modeling a rich variety of learning tasks.
We characterize PAC learnability of partial concept classes and reveal an algorithmic landscape which is fundamentally different from the classical one.
arXiv Detail & Related papers (2021-07-18T13:29:26Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Attentional Graph Convolutional Networks for Knowledge Concept Recommendation in MOOCs in a Heterogeneous View [72.98388321383989]
Massive open online courses (MOOCs) provide a large-scale, open-access learning opportunity for students to grasp knowledge.
To attract students' interest, MOOC providers apply recommendation systems to recommend courses to students.
We propose an end-to-end graph neural network-based approach called Attentional Heterogeneous Graph Convolutional Deep Knowledge Recommender (ACKRec) for knowledge concept recommendation in MOOCs.
arXiv Detail & Related papers (2020-06-23T18:28:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.