Performance and Power Modeling and Prediction Using MuMMI and Ten
Machine Learning Methods
- URL: http://arxiv.org/abs/2011.06655v1
- Date: Thu, 12 Nov 2020 21:24:11 GMT
- Title: Performance and Power Modeling and Prediction Using MuMMI and Ten
Machine Learning Methods
- Authors: Xingfu Wu, Valerie Taylor, and Zhiling Lan
- Abstract summary: We use the modeling and prediction tool MuMMI and ten machine learning methods to model and predict performance and power.
Experiment results show that the prediction error rates in performance and power using MuMMI are less than 10% for most cases.
- Score: 0.13764085113103217
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we use the modeling and prediction tool MuMMI (Multiple Metrics
Modeling Infrastructure) and ten machine learning methods to model and predict
performance and power and compare their prediction error rates. We use a
fault-tolerant linear algebra code and a fault-tolerant heat distribution code
to conduct our modeling and prediction study on the Cray XC40 Theta and IBM
BG/Q Mira at Argonne National Laboratory and the Intel Haswell cluster Shepard
at Sandia National Laboratories. Our experiment results show that the
prediction error rates in performance and power using MuMMI are less than 10%
for most cases. Based on the models for runtime, node power, CPU power, and
memory power, we identify the most significant performance counters for
potential optimization efforts associated with the application characteristics
and the target architectures, and we predict theoretical outcomes of the
potential optimizations. When we compare the prediction accuracy using MuMMI
with that of the ten machine learning methods, we observe that MuMMI not only
results in more accurate prediction in both performance and power but also
presents how performance counters impact the performance and power models. This
provides some insights about how to fine-tune the applications and/or systems
for energy efficiency.
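The paper's headline result is stated in terms of the prediction error rate (under 10% for most cases). As a minimal sketch of that evaluation setup, and not the authors' MuMMI implementation, the following fits a least-squares model of runtime on performance-counter features and reports the mean absolute percentage error as the prediction error rate; the data and feature names are synthetic placeholders.

```python
# Illustrative sketch: fit a linear model of runtime on performance-counter
# features and report the prediction error rate (mean absolute percentage
# error). Data and feature names are synthetic, not from the paper.

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) w = X^T y for a small design matrix."""
    n, d = len(X), len(X[0])
    xtx = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(d)]
           for i in range(d)]
    xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(d)]
    # Gauss-Jordan elimination (fine for a tiny, well-conditioned system).
    for i in range(d):
        pivot = xtx[i][i]
        xtx[i] = [v / pivot for v in xtx[i]]
        xty[i] /= pivot
        for r in range(d):
            if r != i:
                factor = xtx[r][i]
                xtx[r] = [a - factor * b for a, b in zip(xtx[r], xtx[i])]
                xty[r] -= factor * xty[i]
    return xty  # fitted weights

def error_rate(w, X, y):
    """Mean absolute percentage error of predictions w . x against y."""
    preds = [sum(wi * xi for wi, xi in zip(w, x)) for x in X]
    return sum(abs(p - t) / t for p, t in zip(preds, y)) / len(y)

# Hypothetical rows: (intercept, cache_misses, instructions) -> runtime.
X = [[1.0, 2.0, 5.0], [1.0, 3.0, 7.0], [1.0, 4.0, 9.0], [1.0, 5.0, 12.0]]
y = [10.1, 13.9, 18.1, 23.0]

w = fit_least_squares(X, y)
print(f"prediction error rate: {error_rate(w, X, y):.1%}")
```

In the paper this comparison is run over ten different learners and MuMMI's own counter-based models; the sketch above only shows how an error-rate figure like "less than 10%" is computed.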
Related papers
- CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding [62.075029712357]
This work introduces Cognitive Diffusion Probabilistic Models (CogDPM).
CogDPM features a precision estimation method based on the hierarchical sampling capabilities of diffusion models and weights the guidance with precision weights estimated from the inherent properties of diffusion models.
We apply CogDPM to real-world prediction tasks using the United Kingdom precipitation and surface wind datasets.
arXiv Detail & Related papers (2024-05-03T15:54:50Z)
- Insight Gained from Migrating a Machine Learning Model to Intelligence Processing Units [8.782847610934635]
Intelligence Processing Units (IPUs) offer a viable accelerator alternative to GPUs for machine learning (ML) applications.
We investigate the process of migrating a model from GPU to IPU and explore several optimization techniques, including pipelining and gradient accumulation.
We observe significantly improved performance with the Bow IPU when compared to its predecessor, the Colossus IPU.
arXiv Detail & Related papers (2024-04-16T17:02:52Z)
- Impact of data usage for forecasting on performance of model predictive control in buildings with smart energy storage [0.0]
This study investigates the performance of both simple and state-of-the-art machine learning prediction models for Model Predictive Control.
The impact of data usage on forecast accuracy is quantified for the following data efficiency measures.
The use of more than 2 years of training data for load prediction models provided no significant improvement in forecast accuracy.
arXiv Detail & Related papers (2024-02-19T21:01:11Z)
- ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecast.
We also introduce a training-free extreme value enhancement strategy named ExEnsemble, which increases the variance of pixel values and improves the forecast robustness.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
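The summary above describes Exloss only as a loss that "performs asymmetric optimization and highlights extreme values"; its exact formula is not given here. As a generic illustration of asymmetric optimization, the toy loss below penalizes underpredicting a value more heavily than overpredicting it. The weighting scheme and the 4:1 ratio are hypothetical, not the Exloss definition.

```python
# Toy asymmetric loss: penalize underprediction more than overprediction.
# This illustrates asymmetric optimization in general; it is NOT the Exloss
# formula from the paper, and the weights below are hypothetical.

def asymmetric_loss(pred, target, under_weight=4.0, over_weight=1.0):
    """Weighted squared error with a heavier penalty for underprediction."""
    total = 0.0
    for p, t in zip(pred, target):
        err = p - t
        # err < 0 means the model underpredicted the target.
        w = under_weight if err < 0 else over_weight
        total += w * err * err
    return total / len(target)

# Underpredicting an extreme value costs more than overpredicting it.
print(asymmetric_loss([90.0], [100.0]))   # 400.0 (underprediction)
print(asymmetric_loss([110.0], [100.0]))  # 100.0 (overprediction)
```

A model trained against such a loss is pushed toward the upper tail of its predictive distribution, which is the general intuition behind up-weighting extremes in forecasting.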
arXiv Detail & Related papers (2024-02-02T10:34:13Z)
- DECODE: Data-driven Energy Consumption Prediction leveraging Historical Data and Environmental Factors in Buildings [1.2891210250935148]
This paper introduces a Long Short-Term Memory (LSTM) model designed to forecast building energy consumption.
The LSTM model provides accurate short, medium, and long-term energy predictions for residential and commercial buildings.
It demonstrates exceptional prediction accuracy, boasting the highest R2 score of 0.97 and the most favorable mean absolute error (MAE) of 0.007.
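The R2 score and MAE quoted above are standard regression metrics. As a quick reference for how those two numbers are computed (on synthetic values, not the paper's data):

```python
# Compute R^2 and mean absolute error (MAE), the two metrics quoted for the
# LSTM model. The numbers below are synthetic, not the paper's results.

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.9]
print(f"R2 = {r2_score(y_true, y_pred):.3f}, MAE = {mae(y_true, y_pred):.3f}")
```

Note that MAE is in the units of the target variable, so an MAE of 0.007 is only meaningful relative to the scale of the energy-consumption series being predicted.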
arXiv Detail & Related papers (2023-09-06T11:02:53Z)
- Physics-informed linear regression is a competitive approach compared to Machine Learning methods in building MPC [0.8135412538980287]
We show that control in general leads to satisfactory reductions in heating and cooling energy compared to the building's baseline controller.
We also see that the physics-informed ARMAX models have a lower computational burden, and a superior sample efficiency compared to the Machine Learning based models.
arXiv Detail & Related papers (2021-10-29T16:56:05Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Efficient pre-training objectives for Transformers [84.64393460397471]
We study several efficient pre-training objectives for Transformers-based models.
We show that eliminating the MASK token and computing the loss over the whole output are essential choices for improving performance.
arXiv Detail & Related papers (2021-04-20T00:09:37Z)
- Towards More Fine-grained and Reliable NLP Performance Prediction [85.78131503006193]
We make two contributions to improving performance prediction for NLP tasks.
First, we examine performance predictors for holistic measures of accuracy like F1 or BLEU.
Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration.
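One of the two reliability angles named above is confidence intervals for a performance measure. A common, generic way to obtain one is the percentile bootstrap, sketched below on synthetic per-example correctness data; this illustrates the general idea, not the paper's specific method.

```python
import random

# Sketch of a percentile bootstrap confidence interval for a performance
# metric (here, accuracy as the mean of per-example correctness). The data
# is synthetic; this is a generic technique, not the paper's method.

def bootstrap_ci(values, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of `values`."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        # Resample with replacement, same size as the original data.
        sample = [rng.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# 1 = correct prediction, 0 = incorrect; accuracy is the mean.
correct = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
lo, hi = bootstrap_ci(correct)
print(f"accuracy 95% CI: [{lo:.2f}, {hi:.2f}]")
```

Calibration, the paper's second angle, then asks whether such intervals actually contain the true value at the stated rate.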
arXiv Detail & Related papers (2021-02-10T15:23:20Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks will perform the same as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.