Benchopt: Reproducible, efficient and collaborative optimization benchmarks
- URL: http://arxiv.org/abs/2206.13424v2
- Date: Tue, 28 Jun 2022 09:02:57 GMT
- Title: Benchopt: Reproducible, efficient and collaborative optimization benchmarks
- Authors: Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin,
Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré la
Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson,
En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain
Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter
- Abstract summary: Benchopt is a framework to automate, reproduce and publish optimization benchmarks in machine learning.
Benchopt simplifies benchmarking for the community by providing an off-the-shelf tool for running, sharing and extending experiments.
- Score: 67.29240500171532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerical validation is at the core of machine learning research, as it
allows researchers to assess the actual impact of new methods and to confirm the
agreement between theory and practice. Yet, the rapid development of the field poses
several challenges: researchers are confronted with a profusion of methods to
compare, limited transparency and consensus on best practices, as well as
tedious re-implementation work. As a result, validation is often very partial,
which can lead to wrong conclusions that slow down the progress of research. We
propose Benchopt, a collaborative framework to automate, reproduce and publish
optimization benchmarks in machine learning across programming languages and
hardware architectures. Benchopt simplifies benchmarking for the community by
providing an off-the-shelf tool for running, sharing and extending experiments.
To demonstrate its broad usability, we showcase benchmarks on three standard
learning tasks: $\ell_2$-regularized logistic regression, Lasso, and ResNet18
training for image classification. These benchmarks highlight key practical
findings that give a more nuanced view of the state-of-the-art for these
problems, showing that for practical evaluation, the devil is in the details.
We hope that Benchopt will foster collaborative work in the community, thereby
improving the reproducibility of research findings.
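To make this concrete, below is a minimal sketch of how a solver plugs into a Benchopt benchmark, using proximal gradient descent (ISTA) for the Lasso task mentioned in the abstract. The class layout follows Benchopt's documented Solver interface, but method names and return conventions have changed across Benchopt versions, so treat this as an illustration rather than a verbatim excerpt from the paper's benchmarks.

    import numpy as np
    from benchopt import BaseSolver  # requires the benchopt package

    class Solver(BaseSolver):
        """ISTA for the Lasso: min_w 0.5 * ||y - Xw||^2 + lmbd * ||w||_1."""
        name = "ISTA"

        def set_objective(self, X, y, lmbd):
            # Benchopt hands each solver the data defined by the
            # benchmark's Objective class.
            self.X, self.y, self.lmbd = X, y, lmbd

        def run(self, n_iter):
            # Benchopt calls run() with a growing iteration budget and
            # records the objective value after each call, which is how
            # it builds convergence curves.
            X, y, lmbd = self.X, self.y, self.lmbd
            step = 1.0 / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                z = w - step * (X.T @ (X @ w - y))  # gradient step on the smooth part
                w = np.sign(z) * np.maximum(np.abs(z) - step * lmbd, 0.0)  # soft-threshold
            self.w = w

        def get_result(self):
            # The returned iterate is passed back to the Objective,
            # which computes the tracked metric.
            return self.w

Placed in a benchmark's solvers/ directory next to an Objective and a Dataset definition, such a solver is picked up automatically; running `benchopt run ./my_benchmark` (with a hypothetical benchmark folder name) then executes every solver on every dataset and produces the comparison plots described in the abstract.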
Related papers
- RepoMasterEval: Evaluating Code Completion via Real-World Repositories [12.176098357240095]
RepoMasterEval is a novel benchmark for evaluating code completion models constructed from real-world Python and TypeScript repositories.
To improve the test accuracy of model-generated code, we employ mutation testing to measure the effectiveness of the test cases.
Our empirical evaluation on 6 state-of-the-art models shows that test augmentation is critical in improving the accuracy of the benchmark.
arXiv Detail & Related papers (2024-08-07T03:06:57Z)
- Position: Benchmarking is Limited in Reinforcement Learning Research [33.596940437995904]
This work investigates the sources of increased computation costs in rigorous experiment designs.
We argue for using an additional experimentation paradigm to overcome the limitations of benchmarking.
arXiv Detail & Related papers (2024-06-23T23:36:26Z)
- When is an Embedding Model More Promising than Another? [33.540506562970776]
Embedders play a central role in machine learning, projecting any object into numerical representations that can be leveraged to perform various downstream tasks.
The evaluation of embedding models typically depends on domain-specific empirical approaches.
We present a unified approach to evaluate embedders, drawing upon the concepts of sufficiency and informativeness.
arXiv Detail & Related papers (2024-06-11T18:13:46Z)
- Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z)
- Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models [95.96734086126469]
Large language models (LLMs) can serve as assistants that help users accomplish their jobs, and they also support the development of advanced applications.
For the wide application of LLMs, inference efficiency is an essential concern, which has been widely studied in existing work.
We perform a detailed coarse-to-fine analysis of the inference performance of various code libraries.
arXiv Detail & Related papers (2024-04-17T15:57:50Z)
- Re-Benchmarking Pool-Based Active Learning for Binary Classification [27.034593234956713]
Active learning is a paradigm that significantly enhances the performance of machine learning models when acquiring labeled data.
While several benchmarks exist for evaluating active learning strategies, their findings exhibit some misalignment.
This discrepancy motivates us to develop a transparent and reproducible benchmark for the community.
arXiv Detail & Related papers (2023-06-15T08:47:50Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks, from OCR to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- Planning for Sample Efficient Imitation Learning [52.44953015011569]
Current imitation algorithms struggle to achieve high performance and high in-environment sample efficiency simultaneously.
We propose EfficientImitate, a planning-based imitation learning method that can achieve high in-environment sample efficiency and performance simultaneously.
Experimental results show that EI achieves state-of-the-art results in performance and sample efficiency.
arXiv Detail & Related papers (2022-10-18T05:19:26Z)
- Building an Efficient and Effective Retrieval-based Dialogue System via Mutual Learning [27.04857039060308]
We propose to combine the best of both worlds to build a retrieval system.
We employ a fast bi-encoder to replace the traditional feature-based pre-retrieval model.
We train the pre-retrieval model and the re-ranking model at the same time via mutual learning.
arXiv Detail & Related papers (2021-10-01T01:32:33Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.