Learnability with Time-Sharing Computational Resource Concerns
- URL: http://arxiv.org/abs/2305.02217v5
- Date: Sat, 24 Aug 2024 07:42:31 GMT
- Title: Learnability with Time-Sharing Computational Resource Concerns
- Authors: Zhi-Hua Zhou
- Abstract summary: We present a theoretical framework that takes into account the influence of computational resources in learning theory.
This framework can be naturally applied to stream learning where the incoming data streams can be potentially endless.
It may also provide a theoretical perspective for the design of intelligent supercomputing operating systems.
- Score: 65.268245109828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional theoretical machine learning studies generally assume, explicitly or implicitly, that computational resources are sufficient or even infinite. In real practice, however, computational resources are usually limited, and the performance of machine learning depends not only on how much data has been received, but also on how much data can be handled subject to the computational resources available. Note that most current "intelligent supercomputing" facilities work like exclusive operating systems, where a fixed amount of resources is allocated to a machine learning task without adaptive scheduling strategies that consider important factors such as the learning performance demands and the learning process status. In this article, we introduce the notion of machine learning throughput, define Computational Resource Efficient Learning (CoRE-Learning), and present a theoretical framework that takes the influence of computational resources into account in learning theory. This framework can be naturally applied to stream learning, where the incoming data streams can be potentially endless and of overwhelming size, so it is impractical to assume that all received data can be handled in time. It may also provide a theoretical perspective for the design of intelligent supercomputing operating systems.
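The notion of machine learning throughput, how much of the arriving data a learner can actually process per unit of time, is easiest to see in a toy stream-learning setting. The sketch below is a minimal illustration under invented assumptions, not the paper's formalism: the function names, the online logistic-regression learner, and the "keep the most recent examples" policy are all made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream_learning_under_budget(rounds=200, arrivals=10, budget=3, dim=5, lr=0.1):
    """Online logistic regression where only `budget` of the `arrivals`
    examples appearing each round can be processed (throughput = budget/arrivals)."""
    w_true = rng.normal(size=dim)
    w = np.zeros(dim)
    mistakes = 0
    for _ in range(rounds):
        X = rng.normal(size=(arrivals, dim))
        y = (X @ w_true > 0).astype(float)
        # Resource constraint: only `budget` examples can be handled this round;
        # here we simply keep the most recent ones (one of many possible policies).
        Xb, yb = X[-budget:], y[-budget:]
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        mistakes += int(np.sum((p > 0.5) != yb))
        w += lr * Xb.T @ (yb - p)  # SGD step on the processed subset only
    return w, mistakes

w, mistakes = stream_learning_under_budget()
print("mistakes on processed examples:", mistakes)
```

Raising `budget` toward `arrivals` recovers the conventional assumption that every received example is handled in time.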
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
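As a loose companion to the idea of phrasing learning as a differential equation, here is a minimal gradient-flow sketch, not the paper's Hamiltonian formulation: the ODE dw/dt = -∇L(w) is integrated with forward Euler, which recovers plain gradient descent.

```python
import numpy as np

def gradient_flow(grad, w0, dt=0.1, steps=100):
    """Forward-Euler integration of the learning ODE dw/dt = -grad(w).
    With step size dt this reduces to plain gradient descent, illustrating
    how gradient-based learning can be cast as a differential equation."""
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w -= dt * grad(w)
    return w

# Quadratic loss L(w) = 0.5 * ||w - target||^2, so grad(w) = w - target.
target = np.array([1.0, -2.0])
print(gradient_flow(lambda w: w - target, w0=[0.0, 0.0]))
```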
- Stochastic Learning of Computational Resource Usage as Graph Structured Multimarginal Schrödinger Bridge [1.6111903346958474]
We propose to learn the time-varying computational resource usage of software as a graph-structured Schrödinger bridge problem.
We provide detailed algorithms for learning in both single- and multi-core cases, discuss convergence guarantees and computational complexities, and demonstrate their practical use.
arXiv Detail & Related papers (2024-05-21T02:39:45Z)
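The paper's graph-structured multimarginal setting is involved, but the classical two-marginal Schrödinger bridge coincides with entropic optimal transport and can be solved with Sinkhorn's scaling iterations. A minimal sketch of that classical case follows; the toy marginals and cost matrix are invented for illustration.

```python
import numpy as np

def sinkhorn_bridge(mu, nu, C, eps=0.1, iters=500):
    """Classical two-marginal Schrödinger bridge / entropic optimal transport:
    find the coupling P = diag(u) K diag(v), with K = exp(-C/eps), that matches
    marginals mu and nu, via Sinkhorn's alternating scaling iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy resource-usage example: initial and final distributions over 3 states.
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.3, 0.5])
C = (np.arange(3)[:, None] - np.arange(3)[None, :]) ** 2.0  # quadratic cost
P = sinkhorn_bridge(mu, nu, C)
print(P.sum(axis=1), P.sum(axis=0))  # both approximately match mu and nu
```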
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
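For intuition only, a minimal Rao-Ballard-flavoured predictive coding loop is sketched below; it is a generic textbook-style example, not one of the specific models surveyed in the paper. A latent state generates a prediction of the observation and is updated iteratively to reduce the prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)

def predictive_coding_inference(x, W, steps=100, lr=0.1):
    """Minimal predictive-coding style inference: a latent state z generates a
    prediction W @ z of the observation x, and z is iteratively updated to
    reduce the prediction error (gradient descent on the error energy
    0.5*||x - W z||^2 + 0.5*||z||^2)."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        error = x - W @ z            # bottom-up prediction error
        z += lr * (W.T @ error - z)  # error-driven update with a decay prior
    return z, error

W = rng.normal(size=(8, 3))
x = W @ np.array([1.0, -0.5, 2.0])  # observation generated from known causes
z, err = predictive_coding_inference(x, W)
print("inferred causes:", np.round(z, 2), "residual:", np.linalg.norm(err))
```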
- An Entropy-Based Model for Hierarchical Learning [3.1473798197405944]
A common feature among real-world datasets is that data domains are multiscale.
We propose a learning model that exploits this multiscale data structure.
The hierarchical learning model is inspired by the logical and progressive easy-to-hard learning mechanism of human beings.
arXiv Detail & Related papers (2022-12-30T13:14:46Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Current approaches, however, do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Frugal Machine Learning [7.460473725109103]
This paper investigates frugal learning, which aims to build the most accurate possible models using the least amount of resources.
The most promising algorithms are then assessed in a real-world scenario by implementing them in a smartwatch and letting them learn activity recognition models on the watch itself.
arXiv Detail & Related papers (2021-11-05T21:27:55Z)
- Adaptive Scheduling for Machine Learning Tasks over Networks [1.4271989597349055]
This paper examines algorithms for efficiently allocating resources to linear regression tasks by exploiting the informativeness of the data.
The algorithms developed enable adaptive scheduling of learning tasks with reliable performance guarantees.
arXiv Detail & Related papers (2021-01-25T10:59:00Z)
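As a hedged illustration of informativeness-driven scheduling, here is a D-optimality style heuristic, not necessarily the paper's algorithm: give the next compute slot to the regression task whose pending batch most increases the log-determinant of its Gram matrix. All names and the toy tasks are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def schedule_most_informative(tasks):
    """Assign the single available compute slot to the task whose pending batch
    is most informative, measured by the D-optimality style gain
    logdet(A + X^T X) - logdet(A), where A is the task's current Gram matrix."""
    def gain(task):
        A, X = task["gram"], task["pending"]
        _, new_ld = np.linalg.slogdet(A + X.T @ X)
        _, old_ld = np.linalg.slogdet(A)
        return new_ld - old_ld
    return max(range(len(tasks)), key=lambda i: gain(tasks[i]))

dim = 4
tasks = [{"gram": np.eye(dim), "pending": rng.normal(size=(n, dim))}
         for n in (2, 8, 4)]  # tasks with differently sized pending batches
print("task scheduled next:", schedule_most_informative(tasks))
```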
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale machine learning aims to learn patterns from big data efficiently while maintaining comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Resource-Efficient Neural Networks for Embedded Systems [23.532396005466627]
We provide an overview of the current state of the art of machine learning techniques.
We focus on resource-efficient inference based on deep neural networks (DNNs), the predominant machine learning models of the past decade.
We substantiate our discussion with experiments on well-known benchmark data sets using compression techniques.
arXiv Detail & Related papers (2020-01-07T14:17:09Z)
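As a concrete instance of the compression techniques such work discusses, here is a minimal magnitude-pruning sketch; the function name and thresholding policy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Magnitude pruning, one classic compression technique: zero out the
    `sparsity` fraction of weights with the smallest absolute value."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 4))
Wp = magnitude_prune(W, sparsity=0.75)
print("nonzeros kept:", int(np.count_nonzero(Wp)), "of", W.size)
```

In practice such pruning is typically followed by fine-tuning to recover accuracy; quantization is the other common technique in this family.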
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.