Budget Learning via Bracketing
- URL: http://arxiv.org/abs/2004.06298v1
- Date: Tue, 14 Apr 2020 04:38:14 GMT
- Title: Budget Learning via Bracketing
- Authors: Aditya Gangrade, Durmus Alp Emre Acar, Venkatesh Saligrama
- Abstract summary: The budget learning problem poses the learner's goal as minimising use of the cloud while suffering no discernible loss in accuracy.
We propose a new formulation for the BL problem via the concept of bracketings.
We empirically validate our theory on real-world datasets, demonstrating improved performance over prior gating based methods.
- Score: 50.085728094234476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional machine learning applications in the mobile/IoT setting transmit
data to a cloud-server for predictions. Due to cost considerations (power,
latency, monetary), it is desirable to minimise device-to-server transmissions.
The budget learning (BL) problem poses the learner's goal as minimising use of
the cloud while suffering no discernible loss in accuracy, under the constraint
that the methods employed be edge-implementable.
We propose a new formulation for the BL problem via the concept of
bracketings. Concretely, we propose to sandwich the cloud's prediction, $g,$
via functions $h^-, h^+$ from a 'simple' class so that $h^- \le g \le h^+$
nearly always. On an instance $x$, if $h^+(x)=h^-(x)$, we leverage local
processing, and bypass the cloud. We explore theoretical aspects of this
formulation, providing PAC-style learnability definitions; associating the
notion of budget learnability to approximability via brackets; and giving
VC-theoretic analyses of their properties. We empirically validate our theory
on real-world datasets, demonstrating improved performance over prior gating
based methods.
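To make the gating rule concrete, here is a minimal sketch of bracket-based inference for binary labels; the names (`h_minus`, `h_plus`, `cloud_predict`) and the threshold-based bracket construction in the comments are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of bracket-based gating at inference time (binary labels).
# Assumptions (not from the paper): h_minus and h_plus are cheap,
# edge-implementable classifiers satisfying h_minus(x) <= g(x) <= h_plus(x)
# on nearly all inputs, and cloud_predict is the expensive server model g.

def bracketed_predict(x, h_minus, h_plus, cloud_predict):
    """Predict locally when the brackets agree; otherwise query the cloud."""
    lo, hi = h_minus(x), h_plus(x)
    if lo == hi:
        # The sandwich h_minus(x) <= g(x) <= h_plus(x) pins down g(x),
        # so the edge answer matches the cloud's and no transmission is needed.
        return lo, "edge"
    # Brackets disagree: g(x) is not determined locally, so pay the
    # device-to-server cost and defer to the cloud.
    return cloud_predict(x), "cloud"


# Hypothetical usage with margin-thresholded linear brackets:
# h_minus = lambda x: int(w @ x - margin > 0)
# h_plus  = lambda x: int(w @ x + margin > 0)
# label, source = bracketed_predict(x, h_minus, h_plus, cloud_predict)
```

The fraction of inputs routed to "cloud" is precisely the budget the formulation seeks to minimise, subject to the brackets sandwiching $g$ nearly always.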
Related papers
- Contextual Linear Optimization with Bandit Feedback [35.692428244561626]
Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients.
We study a class of offline learning algorithms for CLO with bandit feedback.
We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate.
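As background, a sketch of the CLO objective and the induced empirical risk minimization (IERM) idea under full feedback; the notation is assumed rather than taken from the paper, and the bandit-feedback setting studied there additionally requires handling partially observed costs.

```latex
% CLO: pick a decision z in a feasible set Z to minimise expected
% linear cost given the observed context x.
\pi^*(x) \in \arg\min_{z \in \mathcal{Z}} \mathbb{E}\!\left[c^{\top} z \mid x\right]

% IERM (full-feedback sketch): fit the cost predictor f by the empirical
% cost of the decisions it induces, not by its prediction error.
\hat{f} \in \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n}
  c_i^{\top} z\!\big(f(x_i)\big),
\qquad z(\hat{c}) \in \arg\min_{z \in \mathcal{Z}} \hat{c}^{\top} z
```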
arXiv Detail & Related papers (2024-05-26T13:27:27Z) - Fast and Simple Explainability for Point Cloud Networks [4.013156524547072]
We propose a fast and simple explainable AI (XAI) method for point cloud data.
It computes pointwise importance with respect to a trained network downstream task.
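A minimal sketch of one cheap way to obtain such pointwise importance, assuming a PointNet-style encoder that produces per-point features before global pooling; the feature-norm construction is an assumption for illustration, not a verbatim reproduction of the paper's method.

```python
import torch

def feature_norm_importance(encoder, points):
    """Pointwise importance for a point cloud network.

    A sketch under assumptions: `encoder` maps an (N, 3) cloud to
    per-point features of shape (N, D) before global pooling, and the
    per-point feature norm (an assumed, illustrative choice) is taken
    as the importance score.
    """
    with torch.no_grad():        # single forward pass, no backprop needed
        feats = encoder(points)  # (N, D) pre-pooling per-point features
    return feats.norm(dim=1)     # (N,) one importance score per point
```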
arXiv Detail & Related papers (2024-03-12T14:51:23Z) - Towards Understanding and Improving GFlowNet Training [71.85707593318297]
We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution.
We propose prioritized replay training of high-reward $x$, relative edge flow policy parametrization, and a novel guided trajectory balance objective.
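For context, the guided objective builds on trajectory balance; a sketch of the standard (unguided) trajectory balance loss for a complete trajectory $\tau = (s_0 \to \dots \to s_n = x)$:

```latex
% Standard trajectory balance (Malkin et al., 2022): match the forward
% flow of a complete trajectory against its backward flow times reward.
\mathcal{L}_{\mathrm{TB}}(\tau; \theta) =
\left(
  \log \frac{Z_{\theta} \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t; \theta)}
            {R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}; \theta)}
\right)^{2}
```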
arXiv Detail & Related papers (2023-05-11T22:50:41Z) - Chaos is a Ladder: A New Theoretical Understanding of Contrastive
Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z) - The Fundamental Price of Secure Aggregation in Differentially Private
Federated Learning [34.630300910399036]
We characterize the fundamental communication cost required to obtain the best accuracy under $\varepsilon$ central DP.
Our results show that $\tilde{O}\left(\min(n^2\varepsilon^2, d)\right)$ bits per client are both sufficient and necessary.
This provides a significant improvement relative to state-of-the-art SecAgg distributed DP schemes.
arXiv Detail & Related papers (2022-03-07T22:56:09Z) - Provably Efficient Reinforcement Learning for Discounted MDPs with
Feature Mapping [99.59319332864129]
In this paper, we study reinforcement learning for discounted Markov Decision Processes (MDPs).
We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$ regret.
Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
arXiv Detail & Related papers (2020-06-23T17:08:54Z) - The Information Bottleneck Problem and Its Applications in Machine
Learning [53.57797720793437]
Inference capabilities of machine learning systems have skyrocketed in recent years, and they now play a pivotal role in various aspects of society.
The information bottleneck (IB) theory emerged as a bold information-theoretic paradigm for analyzing deep learning (DL) systems.
In this tutorial we survey the information-theoretic origins of this abstract principle, and its recent impact on DL.
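For reference, the classical IB objective: compress $X$ into a representation $T$ that stays predictive of $Y$, trading the two off with a multiplier $\beta$:

```latex
% Information bottleneck (Tishby et al.): minimise over stochastic
% encoders p(t|x); beta trades compression against predictiveness.
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```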
arXiv Detail & Related papers (2020-04-30T16:48:51Z) - Towards Deep Learning Models Resistant to Large Perturbations [0.0]
Adversarial robustness has proven to be a required property of machine learning algorithms.
We show that the well-established algorithm called "adversarial training" fails to train a deep neural network given a large, but reasonable, perturbation magnitude.
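For reference, "adversarial training" here refers to the standard min-max formulation (Madry et al.), which trains on worst-case perturbations within a budget $\epsilon$:

```latex
% Adversarial training: minimise the loss on the worst-case perturbation
% delta within an epsilon-ball around each input.
\min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}}
\left[ \max_{\|\delta\| \le \epsilon} \mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \right]
```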
arXiv Detail & Related papers (2020-03-30T12:03:09Z) - A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data.
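A sketch of the framework's central definitions as standardly stated (notation may differ slightly from the paper): conditional $\mathcal{V}$-entropy is the best predictive loss achievable by a computationally bounded class $\mathcal{V}$, and $\mathcal{V}$-information is the resulting reduction in entropy:

```latex
% Conditional V-entropy: best log-loss attainable by predictors in V,
% where f[x] is the predictive distribution over Y given side info x.
H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}}
  \mathbb{E}_{x, y}\!\left[-\log f[x](y)\right]

% Predictive V-information: usable information X carries about Y.
I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X)
```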
arXiv Detail & Related papers (2020-02-25T06:09:30Z)