Time-Constrained Learning
- URL: http://arxiv.org/abs/2202.01913v1
- Date: Fri, 4 Feb 2022 00:15:01 GMT
- Title: Time-Constrained Learning
- Authors: Sergio Filho, Eduardo Laber, Pedro Lazera, Marco Molinaro
- Abstract summary: We present an experimental study involving 5 different learners and 20 datasets.
We show that TCT consistently outperforms two other algorithms.
While our work is primarily practical, we also show that a stripped-down version of TCT has provable guarantees.
- Score: 3.9093825078189006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Consider a scenario in which we have a huge labeled dataset ${\cal D}$ and a
limited time to train some given learner using ${\cal D}$. Since we may not be
able to use the whole dataset, how should we proceed? Questions of this nature
motivate the definition of the Time-Constrained Learning Task (TCL): Given a
dataset ${\cal D}$ sampled from an unknown distribution $\mu$, a learner ${\cal
L}$ and a time limit $T$, the goal is to obtain in at most $T$ units of time
the classification model with the highest possible accuracy w.r.t. $\mu$, among
those that can be built by ${\cal L}$ using the dataset ${\cal D}$.
We propose TCT, an algorithm for the TCL task designed based on
principles from Machine Teaching. We present an experimental study involving 5
different learners and 20 datasets where we show that TCT consistently
outperforms two other algorithms: the first is a Teacher for black-box learners
proposed in [Dasgupta et al., ICML 19] and the second is a natural adaptation
of random sampling for the TCL setting. We also compare TCT with Stochastic
Gradient Descent training -- our method is again consistently better.
While our work is primarily practical, we also show that a stripped-down
version of TCT has provable guarantees. Under reasonable assumptions, the time
our algorithm takes to achieve a certain accuracy is never much bigger than the
time it takes the batch teacher (which sends a single batch of examples) to
achieve similar accuracy, and in some cases it is almost exponentially better.
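The abstract does not spell out TCT's internals, so the following is a minimal sketch of a time-budgeted, teaching-inspired loop in the spirit described above: start from a small random teaching set, repeatedly add examples the current model misclassifies, retrain, and stop when the budget $T$ is exhausted. The function `train_fn`, the scikit-learn-style `predict` interface, and the batch sizes are illustrative assumptions, not the paper's algorithm.

```python
import time
import random

def time_constrained_teach(X, y, train_fn, T, seed_size=256, add_per_round=256):
    """Toy time-budgeted teaching loop; a sketch, not the paper's TCT.

    X, y      : lists of feature vectors and labels (the dataset D)
    train_fn  : callable fitting a fresh scikit-learn-style model on (X_sub, y_sub)
    T         : time budget in seconds
    Returns the last model trained before the budget ran out.
    """
    deadline = time.monotonic() + T
    idx = list(range(len(X)))
    random.shuffle(idx)
    chosen, rest = idx[:seed_size], idx[seed_size:]
    model = train_fn([X[i] for i in chosen], [y[i] for i in chosen])

    while rest and time.monotonic() < deadline:
        # Machine-teaching flavour: prioritise examples the current model gets wrong.
        preds = model.predict([X[i] for i in rest])
        wrong = [i for i, p in zip(rest, preds) if p != y[i]]
        batch = set((wrong or rest)[:add_per_round])
        chosen.extend(batch)
        rest = [i for i in rest if i not in batch]
        # Retrain on the enlarged teaching set.
        model = train_fn([X[i] for i in chosen], [y[i] for i in chosen])
    return model
```

A full TCL solver would also have to weigh the cost of each retraining call against the remaining budget; the sketch simply stops scheduling new work once the deadline passes.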
Related papers
- Better Rates for Random Task Orderings in Continual Linear Models [50.11453013647086]
We analyze the forgetting, i.e., loss on previously seen tasks, after $k$ iterations.
We develop novel last-iterate bounds in the realizable least squares setup, and apply them to derive new results for continual learning.
We prove for the first time that randomization alone, with no task repetition, can prevent catastrophic forgetting in sufficiently long task sequences.
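As a small illustration of the continual linear setting summarized above (an assumed sketch, not the paper's analysis), the snippet below generates realizable least-squares tasks sharing a solution $w^*$, processes each sampled task by orthogonally projecting the current iterate onto that task's solution set, and reports forgetting as the average loss over tasks seen so far. For simplicity it samples tasks with repetition, whereas the summary emphasizes orderings with no repetition.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tasks, m = 20, 50, 5            # dimension, number of tasks, rows per task
w_star = rng.normal(size=d)          # shared solution => every task is realizable

tasks = [(A, A @ w_star) for A in (rng.normal(size=(m, d)) for _ in range(n_tasks))]

def project(w, A, b):
    """Orthogonal projection of w onto the task's solution set {v : A v = b}."""
    return w + np.linalg.pinv(A) @ (b - A @ w)

def forgetting(w, seen):
    """Average squared loss of the current iterate on previously seen tasks."""
    return np.mean([np.mean((A @ w - b) ** 2) for A, b in seen])

w = np.zeros(d)
seen = []
for k in range(200):                          # k iterations with a random task order
    A, b = tasks[rng.integers(n_tasks)]       # sampled with repetition, for simplicity
    w = project(w, A, b)
    seen.append((A, b))
    if (k + 1) % 50 == 0:
        print(f"iter {k + 1:3d}  forgetting = {forgetting(w, seen):.3e}")
```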
arXiv Detail & Related papers (2025-04-06T18:39:45Z) - A Scalable Algorithm for Active Learning [3.046981426681863]
FIRAL is a deterministic active learning algorithm for multiclass classification using logistic regression.
We propose an approximate algorithm with storage requirements reduced to $\mathcal{O}(n(d+c) + cd^2)$ and a computational complexity of $\mathcal{O}(bn^2)$.
We demonstrate the accuracy and scalability of our approach using MNIST, CIFAR-10, Caltech101, and ImageNet.
arXiv Detail & Related papers (2024-09-11T16:34:01Z) - Testable Learning with Distribution Shift [9.036777309376697]
We define a new model called testable learning with distribution shift.
We obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution.
We give several positive results for learning concept classes such as halfspaces, intersections of halfspaces, and decision trees.
arXiv Detail & Related papers (2023-11-25T23:57:45Z) - A Quadratic Synchronization Rule for Distributed Deep Learning [66.68264684667562]
This work proposes a theory-grounded method for determining the synchronization period $H$, named the Quadratic Synchronization Rule (QSR).
Experiments on ResNet and ViT show that local gradient methods with QSR consistently improve the test accuracy over other synchronization strategies.
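The summary does not state the rule itself; reading the name literally, one plausible form grows the synchronization period as the learning rate decays, $H \propto 1/\eta^2$. The skeleton below shows where such a rule would plug into a generic local-SGD loop; `alpha`, `H_base`, and the schedule are illustrative guesses rather than the paper's constants.

```python
import numpy as np

def qsr_period(eta, alpha=0.01, H_base=2):
    """Synchronization period grown as the learning rate decays.
    H proportional to 1/eta**2 is a literal reading of 'quadratic';
    alpha and H_base are illustrative, not the paper's values."""
    return max(H_base, int(alpha / eta ** 2))

def local_sgd(worker_grads, w0, etas):
    """Generic local-SGD skeleton: each worker takes H local steps, then all average.

    worker_grads : list of callables g(w) returning a stochastic gradient
    w0           : initial parameter vector
    etas         : per-round learning-rate schedule
    """
    w = np.asarray(w0, dtype=float)
    for eta in etas:
        H = qsr_period(eta)                    # local steps before the next sync
        local_models = []
        for g in worker_grads:
            v = w.copy()
            for _ in range(H):                 # H local SGD steps, no communication
                v -= eta * g(v)
            local_models.append(v)
        w = np.mean(local_models, axis=0)      # synchronize: average the local models
    return w
```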
arXiv Detail & Related papers (2023-10-22T21:38:57Z) - Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing [8.723136784230906]
We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets.
Our objective is to minimize the cumulative deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^{T}$ across all $T$ iterations.
By leveraging symmetries within the regret-optimal algorithm, we develop a nearly regret-optimal heuristic that runs with $\mathcal{O}(Np^2)$ fewer elementary operations.
arXiv Detail & Related papers (2023-09-08T19:17:03Z) - A Spectral Approach to Item Response Theory [6.5268245109828005]
We propose a new item estimation algorithm for the Rasch model.
The core of our algorithm is the computation of the stationary distribution of a Markov chain defined on an item-item graph.
Experiments on synthetic and real-life datasets show that our algorithm is scalable, accurate, and competitive with the most commonly used methods in the literature.
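As a rough, assumed illustration of the item-item Markov-chain idea (not the paper's exact construction), one can accumulate pairwise transition counts from a binary response matrix, power-iterate to the chain's stationary distribution, and read off centered item parameters:

```python
import numpy as np

def spectral_rasch_items(R, iters=1000):
    """Rough sketch of a spectral item-estimation idea, not the paper's construction.

    R : binary response matrix (students x items), 1 = correct answer.
    Returns centred item parameters read off the stationary distribution of an
    item-item Markov chain.
    """
    n_items = R.shape[1]
    M = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(n_items):
            if i != j:
                # students who missed item i but solved item j: evidence j is easier
                M[i, j] = np.sum((R[:, i] == 0) & (R[:, j] == 1))
    P = M / np.maximum(M.sum(axis=1, keepdims=True), 1)   # row-stochastic transitions
    pi = np.full(n_items, 1.0 / n_items)
    for _ in range(iters):                                 # power iteration
        pi = pi @ P
        pi /= pi.sum()
    return np.log(pi) - np.log(pi).mean()                  # centred item parameters

# toy usage: 500 synthetic students, 8 items of increasing easiness
rng = np.random.default_rng(1)
R = (rng.random((500, 8)) < np.linspace(0.2, 0.8, 8)).astype(int)
print(spectral_rasch_items(R))
```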
arXiv Detail & Related papers (2022-10-09T18:57:08Z) - Blessing of Class Diversity in Pre-training [54.335530406959435]
We prove that when the classes of the pre-training task are sufficiently diverse, pre-training can significantly improve the sample efficiency of downstream tasks.
Our proof relies on a vector-form Rademacher complexity chain rule for composite function classes and a modified self-concordance condition.
arXiv Detail & Related papers (2022-09-07T20:10:12Z) - Supervised Training of Conditional Monge Maps [107.78770597815242]
Optimal transport (OT) theory describes general principles to define and select, among many possible choices, the most efficient way to map a probability measure onto another.
We introduce CondOT, a multi-task approach to estimate a family of OT maps conditioned on a context variable.
We demonstrate the ability of CondOT to infer the effect of an arbitrary combination of genetic or therapeutic perturbations on single cells.
arXiv Detail & Related papers (2022-06-28T19:34:44Z) - FriendlyCore: Practical Differentially Private Aggregation [67.04951703461657]
We propose a simple and practical tool $\mathsf{FriendlyCore}$ that takes a set of points ${\cal D}$ from an unrestricted (pseudo) metric space as input.
When ${\cal D}$ has effective diameter $r$, $\mathsf{FriendlyCore}$ returns a "stable" subset ${\cal D}_G \subseteq {\cal D}$ that includes all points.
$\mathsf{FriendlyCore}$ can be used to preprocess the input before privately aggregating it, potentially simplifying the aggregation or boosting its accuracy.
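A toy illustration of the "filter, then privately aggregate" workflow described above (explicitly not the FriendlyCore algorithm, and carrying none of its privacy guarantees): keep points that have many neighbours within distance $r$, then release a Gaussian-noised mean whose scale depends on the effective diameter rather than the whole domain. All thresholds are illustrative.

```python
import numpy as np

def core_then_private_mean(D, r, eps, delta=1e-6, rng=None):
    """Toy 'filter, then privately aggregate' workflow; NOT FriendlyCore itself
    and NOT a differentially private mechanism as written.

    D   : array of points, shape (n, d)
    r   : assumed effective diameter of the well-clustered part of D
    eps : privacy budget used only to size the illustrative noise
    """
    if rng is None:
        rng = np.random.default_rng()
    D = np.asarray(D, dtype=float)
    n = len(D)
    # count, for each point, how many points lie within distance r of it
    dists = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    neigh = (dists <= r).sum(axis=1)
    core = D[neigh >= n / 2]                  # crude "stable core": majority-close points
    if len(core) == 0:
        core = D                              # degenerate fallback for the toy
    # Gaussian noise sized by the core's diameter instead of the whole domain
    sensitivity = 2 * r / len(core)
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return core.mean(axis=0) + rng.normal(scale=sigma, size=D.shape[1])
```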
arXiv Detail & Related papers (2021-10-19T17:43:50Z) - On the Power of Multitask Representation Learning in Linear MDP [61.58929164172968]
This paper presents analyses for the statistical benefit of multitask representation learning in linear Markov Decision Process (MDP)
We first discover a Least-Activated-Feature-Abundance (LAFA) criterion, denoted as $\kappa$, with which we prove that a straightforward least-square algorithm learns a policy which is $\tilde{O}(H^2\sqrt{\frac{\kappa\,\mathcal{C}(\Phi)^2 \kappa d}{NT}+\frac{\kappa d}{n}})$ sub-optimal.
arXiv Detail & Related papers (2021-06-15T11:21:06Z) - List-Decodable Mean Estimation in Nearly-PCA Time [50.79691056481693]
We study the fundamental task of list-decodable mean estimation in high dimensions.
Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.
A variant of our algorithm has runtime $\widetilde{O}(ndk)$ for all $k$, at the expense of an $O(\sqrt{\log k})$ factor in the recovery guarantee.
arXiv Detail & Related papers (2020-11-19T17:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.