DACBench: A Benchmark Library for Dynamic Algorithm Configuration
- URL: http://arxiv.org/abs/2105.08541v1
- Date: Tue, 18 May 2021 14:16:51 GMT
- Title: DACBench: A Benchmark Library for Dynamic Algorithm Configuration
- Authors: Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven
Adriaensen, Frank Hutter, Marius Lindauer
- Abstract summary: We propose DACBench, a benchmark library that seeks to collect and standardize existing DAC benchmarks from different AI domains.
To show the potential, broad applicability and challenges of DAC, we explore how a set of six initial benchmarks compare in several dimensions of difficulty.
- Score: 30.217571636151295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Algorithm Configuration (DAC) aims to dynamically control a target
algorithm's hyperparameters in order to improve its performance. Several
theoretical and empirical results have demonstrated the benefits of dynamically
controlling hyperparameters in domains like evolutionary computation, AI
Planning or deep learning. Replicating these results, as well as studying new
methods for DAC, however, is difficult since existing benchmarks are often
specialized and do not share common interfaces. To facilitate
benchmarking and thus research on DAC, we propose DACBench, a benchmark library
that seeks to collect and standardize existing DAC benchmarks from different AI
domains, as well as provide a template for new ones. For the design of
DACBench, we focused on important desiderata, such as (i) flexibility, (ii)
reproducibility, (iii) extensibility and (iv) automatic documentation and
visualization. To show the potential, broad applicability and challenges of
DAC, we explore how a set of six initial benchmarks compare in several
dimensions of difficulty.
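DACBench exposes each benchmark through a gym-style environment, so a DAC policy interacts with the target algorithm via reset/step calls. Below is a minimal sketch of such a control loop; the module and class names (`dacbench.benchmarks.SigmoidBenchmark`, `get_environment()`) follow the library's documented usage as we understand it and should be treated as assumptions rather than a verified API reference.

```python
# Minimal sketch of a DAC control loop on a DACBench benchmark.
# Assumed API: dacbench.benchmarks.SigmoidBenchmark and get_environment(),
# which returns an OpenAI-gym-style environment (reset/step, action_space).
from dacbench.benchmarks import SigmoidBenchmark

benchmark = SigmoidBenchmark()        # one of the artificial benchmarks
env = benchmark.get_environment()     # gym-style environment wrapper

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # placeholder for a learned DAC policy
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward:.3f}")
```

Under these assumptions, swapping `SigmoidBenchmark` for another benchmark class is all that is needed to move between domains, which is the kind of interface standardization the abstract describes.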
Related papers
- Binary Code Similarity Detection via Graph Contrastive Learning on Intermediate Representations [52.34030226129628]
Binary Code Similarity Detection (BCSD) plays a crucial role in numerous fields, including vulnerability detection, malware analysis, and code reuse identification.
In this paper, we propose IRBinDiff, which mitigates compilation differences by leveraging LLVM-IR with higher-level semantic abstraction.
Our extensive experiments, conducted under varied compilation settings, demonstrate that IRBinDiff outperforms other leading BCSD methods in both One-to-one comparison and One-to-many search scenarios.
arXiv Detail & Related papers (2024-10-24T09:09:20Z) - Instance Selection for Dynamic Algorithm Configuration with Reinforcement Learning: Improving Generalization [16.49696895887536]
Dynamic Algorithm Configuration (DAC) addresses the challenge of dynamically setting the hyperparameters of an algorithm for a diverse set of instances.
Agents trained with Deep Reinforcement Learning (RL) offer a pathway to solve such settings.
We take a step towards mitigating poor generalization by selecting a representative subset of training instances to overcome overrepresentation and then retraining the agent on this subset.
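The summary above does not spell out how the representative subset is chosen. As a purely illustrative sketch (not the authors' method), one could cluster instance feature vectors and keep the instance closest to each cluster center; all names below are hypothetical.

```python
# Illustrative only: pick a representative subset of training instances by
# k-means clustering on instance features, keeping the member closest to
# each centroid. This is an assumption, not the paper's actual procedure.
import numpy as np
from sklearn.cluster import KMeans

def select_representative_instances(features: np.ndarray, k: int) -> list:
    """Return indices of k instances, one per feature cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))
    return selected

# The RL agent would then be retrained only on the selected instances to
# reduce overrepresentation of similar instances in the training set.
```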
arXiv Detail & Related papers (2024-07-18T13:44:43Z) - Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models [63.36637269634553]
We present a novel method of further improving performance by requiring models to compare multiple reasoning chains.
We find that instruction tuning on DCoT datasets boosts the performance of even smaller, and therefore more accessible, language models.
arXiv Detail & Related papers (2024-07-03T15:01:18Z) - DACO: Towards Application-Driven and Comprehensive Data Analysis via Code Generation [83.30006900263744]
Data analysis is a crucial analytical process to generate in-depth studies and conclusive insights.
We propose to automatically generate high-quality answer annotations leveraging the code-generation capabilities of LLMs.
Human annotators judge the answers of our DACO-RL algorithm to be more helpful than those of the SFT model in 57.72% of cases.
arXiv Detail & Related papers (2024-03-04T22:47:58Z) - Efficient Architecture Search via Bi-level Data Pruning [70.29970746807882]
This work pioneers an exploration into the critical role of dataset characteristics for DARTS bi-level optimization.
We introduce a new progressive data pruning strategy that utilizes supernet prediction dynamics as the metric.
Comprehensive evaluations on the NAS-Bench-201 search space, DARTS search space, and MobileNet-like search space validate that BDP reduces search costs by over 50%.
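The summary only names "supernet prediction dynamics" as the pruning metric. A loose, hypothetical sketch of scoring examples by how much the supernet's predictions change between checkpoints is given below; the direction of the ranking and the keep ratio are assumptions, not BDP's actual rule.

```python
# Hypothetical sketch: score each training example by the change in the
# supernet's predicted class probabilities between two checkpoints, then
# keep a fixed fraction of the most dynamic examples. Not BDP's exact rule.
import numpy as np

def prune_by_prediction_dynamics(probs_prev: np.ndarray,
                                 probs_curr: np.ndarray,
                                 keep_ratio: float = 0.5) -> np.ndarray:
    """probs_* have shape (n_examples, n_classes); returns kept indices."""
    dynamics = np.linalg.norm(probs_curr - probs_prev, axis=1)  # per-example change
    n_keep = max(1, int(keep_ratio * len(dynamics)))
    return np.argsort(-dynamics)[:n_keep]  # indices of most dynamic examples
```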
arXiv Detail & Related papers (2023-12-21T02:48:44Z) - Enhancing Few-shot NER with Prompt Ordering based Data Augmentation [59.69108119752584]
We propose a Prompt Ordering based Data Augmentation (PODA) method to improve the training of unified autoregressive generation frameworks.
Experimental results on three public NER datasets and further analyses demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-05-19T16:25:43Z) - Using Automated Algorithm Configuration for Parameter Control [0.7742297876120562]
Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies to control parameters of algorithms in a data-driven fashion.
We propose a new DAC benchmark: controlling the key parameter $\lambda$ of the $(1+(\lambda,\lambda))$ Genetic Algorithm for solving OneMax problems.
On sufficiently large problem sizes, our approach consistently outperforms the benchmark's default parameter control policy, which is derived from previous theoretical work.
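To make the benchmark concrete, the sketch below runs a simplified $(1+(\lambda,\lambda))$ GA on OneMax and sets $\lambda$ dynamically with the fitness-dependent rule $\lambda \approx \sqrt{n/(n-f(x))}$ known from the theory literature; this is a toy illustration, not the benchmark's reference implementation or its learned policy.

```python
# Toy (1+(lambda,lambda)) GA on OneMax with a dynamic lambda set by the
# fitness-dependent rule lambda ~ sqrt(n / (n - f(x))). Simplified sketch,
# not the benchmark's reference implementation.
import math
import random

def onemax(x):
    return sum(x)

def one_plus_ll_ga(n=100, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while onemax(x) < n:
        fx = onemax(x)
        lam = max(1, round(math.sqrt(n / (n - fx))))   # dynamic parameter control
        p, c = lam / n, 1.0 / lam
        # Mutation phase: flip ell random bits in each of lam offspring.
        ell = sum(rng.random() < p for _ in range(n))
        best_mut = None
        for _ in range(lam):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] ^= 1
            evals += 1
            if best_mut is None or onemax(y) > onemax(best_mut):
                best_mut = y
        # Crossover phase: biased uniform crossover between x and best mutant.
        best_cross = x
        for _ in range(lam):
            z = [best_mut[i] if rng.random() < c else x[i] for i in range(n)]
            evals += 1
            if onemax(z) >= onemax(best_cross):
                best_cross = z
        x = best_cross                                 # elitist acceptance
    return evals

print(one_plus_ll_ga())                                # evaluations to optimum
```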
arXiv Detail & Related papers (2023-02-23T20:57:47Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
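The summary does not expose HyperImpute's interface, so the sketch below only illustrates the underlying idea of iterative, column-wise imputation: repeatedly fit a per-column model on observed values and re-predict the missing ones. A single fixed regressor stands in for HyperImpute's automatic per-column model selection; all names are assumptions.

```python
# Illustrative iterative (column-wise) imputation sketch. HyperImpute also
# auto-selects the model per column; here one regressor stands in for that.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def iterative_impute(X: np.ndarray, n_rounds: int = 5) -> np.ndarray:
    X = X.astype(float).copy()
    mask = np.isnan(X)                                  # remember missing cells
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])     # simple initial fill
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            other = np.delete(X, j, axis=1)             # all other columns
            model = RandomForestRegressor(n_estimators=50, random_state=0)
            model.fit(other[~miss], X[~miss, j])        # train on observed rows
            X[miss, j] = model.predict(other[miss])     # re-impute missing rows
    return X
```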
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Automated Dynamic Algorithm Configuration [39.39845379026921]
The performance of an algorithm often critically depends on its parameter configuration.
It has been shown that some algorithm parameters are best adjusted dynamically during execution.
A promising alternative is to automatically learn such dynamic parameter adaptation policies from data.
arXiv Detail & Related papers (2022-05-27T10:30:25Z) - HAWKS: Evolving Challenging Benchmark Sets for Cluster Analysis [2.5329716878122404]
Comprehensive benchmarking of clustering algorithms is difficult.
There is no consensus regarding the best practice for rigorous benchmarking.
We demonstrate the important role evolutionary algorithms play in supporting the flexible generation of such benchmarks.
arXiv Detail & Related papers (2021-02-13T15:01:34Z) - Towards Large Scale Automated Algorithm Design by Integrating Modular
Benchmarking Frameworks [0.9281671380673306]
We present a first proof-of-concept use-case that demonstrates the efficiency of combining the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler.
Key advantages of our pipeline are fast evaluation times, the possibility to generate rich data sets, and a standardized interface that can be used to benchmark very broad classes of sampling-based optimization algorithms.
arXiv Detail & Related papers (2021-02-12T10:47:00Z)