Benchmarking Processor Performance by Multi-Threaded Machine Learning Algorithms
- URL: http://arxiv.org/abs/2109.05276v1
- Date: Sat, 11 Sep 2021 13:26:58 GMT
- Title: Benchmarking Processor Performance by Multi-Threaded Machine Learning Algorithms
- Authors: Muhammad Fahad Saleem
- Abstract summary: In this paper, I will make a performance comparison of multi-threaded machine learning algorithms.
I will be working on Linear Regression, Random Forest, and K-Nearest Neighbors to determine the performance characteristics of the algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning algorithms have enabled computers to predict things by
learning from previous data. Data storage and processing power are increasing
rapidly, expanding machine learning and artificial intelligence applications.
Much past work has focused on improving the accuracy of models, with little
research done to determine the computational costs of machine learning
workloads. In this paper, I will pursue this latter line of research and make a
performance comparison of multi-threaded machine learning algorithms. I will be
working on Linear Regression, Random Forest, and K-Nearest Neighbors to
determine the performance characteristics of the algorithms as well as the
computational costs of obtaining the results. I will be benchmarking system
hardware performance by running these multi-threaded algorithms to train and
test the models on a dataset, noting the differences in the algorithms'
performance metrics. In the end, I will identify the best performing algorithms
with respect to their performance efficiency on my system.
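For context, the benchmarking recipe the abstract describes can be sketched as follows. This is a minimal sketch assuming scikit-learn estimators and a synthetic dataset; the paper's actual dataset, hardware, and implementation are not specified here, and n_jobs=-1 is simply scikit-learn's way of letting each estimator use all available CPU threads.

```python
import time

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in for the paper's (unspecified) dataset.
X, y = make_regression(n_samples=50_000, n_features=20, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_jobs=-1 lets each estimator use all available CPU threads.
models = {
    "Linear Regression": LinearRegression(n_jobs=-1),
    "Random Forest": RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0),
    "K-Nearest Neighbors": KNeighborsRegressor(n_neighbors=5, n_jobs=-1),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)        # training cost
    t1 = time.perf_counter()
    r2 = model.score(X_test, y_test)   # prediction cost + accuracy (R^2)
    t2 = time.perf_counter()
    print(f"{name}: fit {t1 - t0:.2f}s, test {t2 - t1:.2f}s, R^2 {r2:.3f}")
```

Timing fit and score separately matters because the three algorithms distribute their cost very differently: Linear Regression is cheap to train and query, Random Forest is expensive to train, and K-Nearest Neighbors defers almost all work to prediction time.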
Related papers
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- Accelerating Machine Learning Algorithms with Adaptive Sampling [1.539582851341637]
This thesis demonstrates that it is often sufficient to substitute computationally intensive subroutines with a special kind of randomized counterpart, with almost no degradation in quality.
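A hedged illustration of that idea: replace an exact, full-pass computation (here, a mean) with a randomized estimate that stops as soon as a Hoeffding-style confidence interval is tight enough. The tolerance, failure probability, and batch size below are illustrative choices, not values from the thesis.

```python
import math
import random

def adaptive_mean(data, value_range, eps=0.01, delta=0.05, batch=256):
    """Estimate mean(data) to within +/- eps of the true mean with
    probability >= 1 - delta, by sampling with replacement."""
    total, n = 0.0, 0
    while True:
        for _ in range(batch):
            total += random.choice(data)
            n += 1
        # Hoeffding bound for values confined to an interval of width R:
        # |estimate - mean| <= R * sqrt(ln(2 / delta) / (2 * n))
        half_width = value_range * math.sqrt(math.log(2 / delta) / (2 * n))
        if half_width <= eps or n >= len(data):
            return total / n

data = [random.random() for _ in range(1_000_000)]  # values in [0, 1]
print(adaptive_mean(data, value_range=1.0))         # close to 0.5, from ~2% of the data
```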
arXiv Detail & Related papers (2023-09-25T15:25:59Z)
- A Survey From Distributed Machine Learning to Distributed Deep Learning [0.356008609689971]
Distributed machine learning, in which the data and the algorithm are distributed across several machines, has been proposed.
We divide these algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups.
Based on the investigation of the mentioned algorithms, we highlighted the limitations that should be addressed in future research.
arXiv Detail & Related papers (2023-07-11T13:06:42Z)
- Performance and Energy Consumption of Parallel Machine Learning Algorithms [0.0]
Machine learning models have achieved remarkable success in various real-world applications.
Model training in machine learning requires large-scale datasets and many iterations before a model performs well.
Parallelization of training algorithms is a common strategy to speed up the process of training.
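As a sketch of what such parallelization looks like in its simplest data-parallel form, the snippet below averages per-shard gradients for a least-squares model across worker processes. It illustrates the general strategy only; the learning rate, sharding, and model are illustrative, not taken from the paper.

```python
import numpy as np
from multiprocessing import Pool

def shard_gradient(args):
    X, y, w = args
    # Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n on this shard.
    return X.T @ (X @ w - y) / len(y)

def parallel_gd(X, y, n_workers=4, lr=0.1, steps=200):
    w = np.zeros(X.shape[1])
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(steps):
            # Each worker handles one shard; the shard gradients are averaged.
            grads = pool.map(shard_gradient,
                             [(Xs, ys, w) for Xs, ys in zip(X_shards, y_shards)])
            w -= lr * np.mean(grads, axis=0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=10_000)
    print(parallel_gd(X, y))  # approximately [1, 2, 3, 4, 5]
```

Note the trade-off this exposes: every step ships the current weights to each worker, so communication cost grows with model size and can offset the computational speedup, which is one reason energy consumption is worth measuring alongside wall-clock time.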
arXiv Detail & Related papers (2023-05-01T13:04:39Z)
- RMBench: Benchmarking Deep Reinforcement Learning for Robotic Manipulator Control [47.61691569074207]
Reinforcement learning is applied to solve complex real-world tasks from high-dimensional sensory inputs.
Recent progress benefits from deep learning for raw sensory signal representation.
We present RMBench, the first benchmark for robotic manipulations.
arXiv Detail & Related papers (2022-10-20T13:34:26Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve the complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
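For reference, the generic bilevel problem such complexity bounds are stated for couples an outer objective $f$ with an inner minimization $g$; this is the standard formulation, not the paper's specific assumptions:

```latex
\min_{x} \; f\bigl(x, y^{*}(x)\bigr)
\quad \text{subject to} \quad
y^{*}(x) \in \arg\min_{y} \; g(x, y)
```

The difficulty is that evaluating the outer gradient requires differentiating through the inner solution $y^{*}(x)$, which is what faster bilevel algorithms approximate more cheaply.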
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance on other classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Towards Efficient and Scalable Acceleration of Online Decision Tree Learning on FPGA [20.487660974785943]
In the era of big data, traditional decision tree induction algorithms are not suitable for learning large-scale datasets.
We introduce a new quantile-based algorithm to improve the induction of the Hoeffding tree, one of the state-of-the-art online learning models.
We present a high-performance, hardware-efficient and scalable online decision tree learning system on a field-programmable gate array.
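For background, the Hoeffding tree's split rule, which the quantile-based algorithm refines, is the standard one: split on the best attribute once its information-gain lead over the runner-up exceeds the Hoeffding bound

```latex
\epsilon = \sqrt{\frac{R^{2} \, \ln(1/\delta)}{2n}}
```

where $R$ is the range of the gain metric, $\delta$ the allowed failure probability, and $n$ the number of examples seen at the leaf. This is the textbook bound, stated here for context rather than taken from the paper.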
arXiv Detail & Related papers (2020-09-03T03:23:43Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
- Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
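As a rough illustration of that recipe, the sketch below uses a Nystrom random projection to approximate the kernel with a few landmark points, then hands off to a linear solver. This mirrors the general idea (optimization plus numerical linear algebra plus random projections) on CPU; it is not the paper's GPU solver, and the kernel parameters are illustrative.

```python
from sklearn.datasets import make_regression
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=20_000, n_features=10, noise=0.1, random_state=0)

# Approximate an RBF kernel with 300 landmark points instead of forming the
# full 20k x 20k kernel matrix (~3 GB as float64).
model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=300, random_state=0),
    Ridge(alpha=1.0),
)
model.fit(X, y)
print(model.score(X, y))
```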
arXiv Detail & Related papers (2020-06-18T08:16:25Z)
- Guidelines for enhancing data locality in selected machine learning algorithms [0.0]
We analyze one means of increasing the performance of machine learning algorithms: exploiting data locality.
Repeated data access can be seen as redundancy in data movement.
This work also identifies some of the opportunities for avoiding these redundancies by directly reusing results.
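As a toy illustration of the locality idea, the sketch below computes a Gram matrix in tiles so that one block of rows is loaded once and reused against every other block, instead of being streamed repeatedly. The tile size is a guess; real values depend on the cache hierarchy, and in practice this pattern pays off in compiled code rather than in NumPy, which is used here only to show the access pattern.

```python
import numpy as np

def tiled_gram(X, tile=256):
    """Compute X @ X.T one tile at a time."""
    n = X.shape[0]
    G = np.empty((n, n))
    for i in range(0, n, tile):
        Xi = X[i:i + tile]  # this row block is reused for every j-tile below
        for j in range(0, n, tile):
            G[i:i + tile, j:j + tile] = Xi @ X[j:j + tile].T
    return G

X = np.random.default_rng(0).normal(size=(1024, 64))
assert np.allclose(tiled_gram(X), X @ X.T)  # same result as the direct product
```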
arXiv Detail & Related papers (2020-01-09T14:16:40Z)