Energy Efficiency Considerations for Popular AI Benchmarks
- URL: http://arxiv.org/abs/2304.08359v1
- Date: Mon, 17 Apr 2023 15:18:15 GMT
- Title: Energy Efficiency Considerations for Popular AI Benchmarks
- Authors: Raphael Fischer and Matthias Jakobs and Katharina Morik
- Abstract summary: We provide insights for popular AI benchmarks, with a total of 100 experiments.
Our findings show that each data set has its own efficiency landscape and that methods differ in how likely they are to behave efficiently.
- Score: 4.991046902153724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in artificial intelligence need to become more resource-aware and
sustainable. This requires clear assessment and reporting of energy efficiency
trade-offs, like sacrificing fast running time for higher predictive
performance. While first methods for investigating efficiency have been
proposed, we still lack comprehensive results for popular methods and data
sets. In this work, we attempt to fill this information gap by providing
empirical insights for popular AI benchmarks, with a total of 100 experiments.
Our findings show that each data set has its own efficiency landscape and
that methods differ in how likely they are to behave efficiently.
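The trade-off the abstract describes, sacrificing running time for predictive performance, can be made concrete with a small measurement harness. The sketch below is illustrative and not from the paper: the method names (`cheap`, `expensive`) are hypothetical, and wall-clock time is used as a stand-in for energy, since actual energy measurement would require hardware counters (e.g. RAPL) or an external power meter.

```python
import time
from dataclasses import dataclass

@dataclass
class RunResult:
    method: str
    accuracy: float
    seconds: float

def profile(method_name, fit_predict, X, y):
    """Time a train/predict callable and score simple accuracy.

    `fit_predict` is a hypothetical callable taking (X, y) and
    returning predictions for X. Elapsed time is only a proxy for
    energy use.
    """
    start = time.perf_counter()
    preds = fit_predict(X, y)
    elapsed = time.perf_counter() - start
    acc = sum(p == t for p, t in zip(preds, y)) / len(y)
    return RunResult(method_name, acc, elapsed)

# Toy data and two toy "methods" with different cost/accuracy profiles.
X = list(range(100))
y = [x % 2 for x in X]

def cheap(X, y):       # fast but wrong half the time
    return [0 for _ in X]

def expensive(X, y):   # slower, but memorises the labels
    time.sleep(0.01)
    return list(y)

results = [profile("cheap", cheap, X, y), profile("expensive", expensive, X, y)]
for r in results:
    print(f"{r.method}: accuracy={r.accuracy:.2f}, time={r.seconds:.4f}s")
```

Reporting both columns side by side, rather than accuracy alone, is the kind of efficiency assessment the abstract calls for.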
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials on the fly.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z) - Green Recommender Systems: Optimizing Dataset Size for Energy-Efficient Algorithm Performance [0.10241134756773229]
This paper investigates the potential for energy-efficient algorithm performance by optimizing dataset sizes.
We conducted experiments on the MovieLens 100K, 1M, 10M, and Amazon Toys and Games datasets.
arXiv Detail & Related papers (2024-10-12T04:00:55Z) - A Closer Look at Data Augmentation Strategies for Finetuning-Based Low/Few-Shot Object Detection [5.434078645728145]
This paper examines both model performance and energy efficiency of custom data augmentations and automated data augmentation selection strategies.
It is shown that in many cases, the performance gains of data augmentation strategies are overshadowed by their increased energy usage.
arXiv Detail & Related papers (2024-08-20T15:29:56Z) - Efficient-Empathy: Towards Efficient and Effective Selection of Empathy Data [32.483540066357]
We present Efficient-Empathy, a sensibility and rationality score-based data selection algorithm.
Our trained sensibility model efficiently achieves state-of-the-art (SoTA) performance.
By integrating sensibility and rationality data with a MoE structure, we achieve even higher performance.
arXiv Detail & Related papers (2024-07-02T04:11:52Z) - Efficient Methods for Natural Language Processing: A Survey [76.34572727185896]
This survey synthesizes and relates current methods and findings in efficient NLP.
We aim to provide both guidance for conducting NLP under limited resources, and point towards promising research directions for developing more efficient methods.
arXiv Detail & Related papers (2022-08-31T20:32:35Z) - Data-Centric Green AI: An Exploratory Empirical Study [6.4265933507484]
We investigate the impact of data-centric approaches on AI energy efficiency.
Our results show evidence that, by exclusively conducting modifications on datasets, energy consumption can be drastically reduced.
Our results call for a research agenda that focuses on data-centric techniques to further enable and democratize Green AI.
arXiv Detail & Related papers (2022-04-06T12:22:43Z) - Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Our proposed algorithm is shown to be more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Unsupervised Learning of slow features for Data Efficient Regression [15.73372211126635]
We propose the slow variational autoencoder (S-VAE), an extension to the $\beta$-VAE which applies a temporal similarity constraint to the latent representations.
We evaluate the three methods on their data-efficiency on downstream tasks using a synthetic 2D ball-tracking dataset, a dataset from a reinforcement learning environment, and a dataset generated using the DeepMind Lab environment.
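As a rough illustration of what a temporal similarity constraint on latent representations might look like, here is a minimal sketch. It is not the paper's actual S-VAE objective: the function name `slowness_penalty` and the exact form of the term are assumptions. It penalizes the mean squared difference between latent codes of consecutive frames, a term that would be added to the usual beta-VAE loss.

```python
def slowness_penalty(latents):
    """Mean squared difference between consecutive latent codes.

    `latents` is a list of latent vectors, one per video frame.
    This is a hypothetical sketch of a temporal similarity
    constraint, not the published S-VAE loss.
    """
    if len(latents) < 2:
        return 0.0
    diffs = [
        sum((a - b) ** 2 for a, b in zip(z_prev, z_next))
        for z_prev, z_next in zip(latents, latents[1:])
    ]
    return sum(diffs) / len(diffs)

# A smoothly moving latent trajectory is penalised less than a jumpy one.
smooth = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]]
jumpy = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(slowness_penalty(smooth), slowness_penalty(jumpy))
```

Minimizing such a term encourages latent codes to change slowly over time, which is the intuition behind "slow features" for data-efficient downstream regression.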
arXiv Detail & Related papers (2020-12-11T12:19:45Z) - Finding Action Tubes with a Sparse-to-Dense Framework [62.60742627484788]
We propose a framework that generates action tube proposals from video streams with a single forward pass in a sparse-to-dense manner.
We evaluate the efficacy of our model on the UCF101-24, JHMDB-21 and UCFSports benchmark datasets.
arXiv Detail & Related papers (2020-08-30T15:38:44Z) - HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing [76.38975568873765]
We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible natural language processing.
We compare pretrained models' energy efficiency from the perspectives of time and cost.
arXiv Detail & Related papers (2020-02-14T01:04:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.