PyExperimenter: Easily distribute experiments and track results
- URL: http://arxiv.org/abs/2301.06348v2
- Date: Fri, 21 Apr 2023 12:21:47 GMT
- Title: PyExperimenter: Easily distribute experiments and track results
- Authors: Tanja Tornede, Alexander Tornede, Lukas Fehring, Lukas Gehring, Helena Graf, Jonas Hanselle, Felix Mohr, Marcel Wever
- Abstract summary: PyExperimenter is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms. It is intended for researchers in the field of artificial intelligence, but its use is not limited to that field.
- Score: 63.871474825689134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PyExperimenter is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms; in particular, it is designed to significantly reduce the manual effort involved. It is intended for researchers in the field of artificial intelligence, but its use is not limited to that field.
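For orientation, below is a minimal sketch of the documented PyExperimenter workflow. The config file name, the keyfields ('dataset', 'seed'), the result field ('accuracy'), and the train_and_evaluate stub are illustrative assumptions, and keyword argument names may differ between versions.

```python
# Sketch of the documented PyExperimenter workflow; config keys, field names,
# and the train_and_evaluate stub are hypothetical placeholders.
from py_experimenter.experimenter import PyExperimenter
from py_experimenter.result_processor import ResultProcessor

def train_and_evaluate(dataset: str, seed: int) -> float:
    return 0.9  # placeholder for the user's actual experiment logic

def run_experiment(parameters: dict, result_processor: ResultProcessor, custom_config: dict):
    # 'parameters' holds one row of the experiment grid defined in the config.
    accuracy = train_and_evaluate(parameters['dataset'], parameters['seed'])
    result_processor.process_results({'accuracy': accuracy})  # written back to the results table

experimenter = PyExperimenter(experiment_configuration_file_path='experiment_config.yml')
experimenter.fill_table_from_config()                     # one database row per parameter combination
experimenter.execute(run_experiment, max_experiments=-1)  # claim and run all open rows
```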
Related papers
- Efficient Biological Data Acquisition through Inference Set Design [3.9633147697178996]
In this work, we aim to select the smallest set of candidates in order to achieve some desired level of accuracy for the system as a whole.
We call this mechanism inference set design, and propose the use of an uncertainty-based active learning solution to prune out challenging examples.
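The mechanism can be sketched as follows: rank candidates by model uncertainty and test the most uncertain ones until the expected accuracy over the whole set reaches a target. The data, the calibration assumption, and the threshold below are hypothetical; this is not the paper's exact procedure.

```python
# Minimal sketch of uncertainty-based pruning with a whole-set accuracy target
# (synthetic probabilities; assumes the model is calibrated).
import numpy as np

rng = np.random.default_rng(0)
probs = rng.uniform(0, 1, size=1000)             # model confidence per candidate

uncertainty = 1 - np.abs(2 * probs - 1)          # high near p = 0.5, low near 0 or 1
order = np.argsort(-uncertainty)                 # most uncertain first

target_accuracy = 0.99
expected_correct = np.maximum(probs, 1 - probs)  # chance the model's label is right
tested = np.zeros(len(probs), dtype=bool)

# Test candidates until the expected accuracy over the whole set
# (tested items counted as correct) reaches the target.
for i in order:
    acc = (tested.sum() + expected_correct[~tested].sum()) / len(probs)
    if acc >= target_accuracy:
        break
    tested[i] = True

print(f"tested {tested.sum()} of {len(probs)} candidates")
```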
arXiv Detail & Related papers (2024-10-25T15:34:03Z)
- MLXP: A Framework for Conducting Replicable Experiments in Python [63.37350735954699]
We propose MLXP, an open-source, simple, and lightweight experiment management tool based on Python.
It streamlines the experimental process with minimal practitioner overhead while ensuring a high level of reproducibility.
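The pattern such a tool streamlines looks roughly like the sketch below: persist the configuration and metrics of every run in a fresh directory for later comparison. This is a generic illustration of the idea, not MLXP's actual API.

```python
# Generic sketch of config-and-metrics run logging (hypothetical directory
# layout and config format; NOT MLXP's actual API).
import json, pathlib, time

def run_logged(config: dict, experiment_fn):
    run_dir = pathlib.Path("runs") / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True)
    (run_dir / "config.json").write_text(json.dumps(config, indent=2))  # for reproducibility
    metrics = experiment_fn(config)
    (run_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
    return run_dir

run_dir = run_logged({"lr": 0.01, "epochs": 3}, lambda cfg: {"loss": 0.42})
print("logged to", run_dir)
```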
arXiv Detail & Related papers (2024-02-21T14:22:20Z)
- Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper, we take initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
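As background for the estimation problem, the sketch below runs textbook two-stage least squares (2SLS) on synthetic data with a single binary instrument; the paper's adaptive design of the data collection policy via influence functions is not reproduced here.

```python
# Standard 2SLS on synthetic data: the instrument z shifts treatment t but is
# independent of the confounder u, so the IV estimate recovers the true effect.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.binomial(1, 0.5, n)                 # instrument (e.g. an encouragement)
u = rng.normal(size=n)                      # unobserved confounder
t = (0.7 * z + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)  # treatment
y = 2.0 * t + 1.0 * u + rng.normal(size=n)  # outcome; true effect is 2.0

# Stage 1: predict treatment from the instrument.
t_hat = np.poly1d(np.polyfit(z, t, 1))(z)
# Stage 2: regress the outcome on the predicted treatment.
effect = np.polyfit(t_hat, y, 1)[0]
print(f"2SLS estimate: {effect:.2f} (naive OLS: {np.polyfit(t, y, 1)[0]:.2f})")
```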
arXiv Detail & Related papers (2023-12-05T02:38:04Z)
- Task-specific experimental design for treatment effect estimation [59.879567967089145]
Large randomised controlled trials (RCTs) are the standard for causal inference.
Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought.
We develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications.
arXiv Detail & Related papers (2023-06-08T18:10:37Z)
- Bayesian Q-learning With Imperfect Expert Demonstrations [56.55609745121237]
We propose a novel algorithm to speed up Q-learning with the help of a limited amount of imperfect expert demonstrations.
We evaluate our approach on a sparse-reward chain environment and six more complicated Atari games with delayed rewards.
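A much simpler stand-in for the idea is sketched below: warm-start tabular Q-learning on a sparse-reward chain with a small bonus on demonstrated state-action pairs. The environment, the bonus, and the hyperparameters are illustrative; this is not the paper's Bayesian algorithm.

```python
# Tabular Q-learning on a sparse-reward chain, warm-started from imperfect
# demonstrations via a small optimistic bonus (illustrative stand-in only).
import numpy as np

N, GOAL_R, EPISODES = 10, 1.0, 500
rng = np.random.default_rng(0)
Q = np.zeros((N, 2))                      # actions: 0 = left, 1 = right

demo = [(s, 1) if s != 3 else (s, 0) for s in range(N - 1)]  # one wrong demonstration
for s, a in demo:
    Q[s, a] += 0.1                        # bonus on demonstrated state-action pairs

alpha, gamma, eps = 0.1, 0.95, 0.1
for _ in range(EPISODES):
    s = 0
    while s < N - 1:                      # state N-1 is the rewarded terminal state
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
        r = GOAL_R if s2 == N - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
print("greedy actions:", Q.argmax(axis=1))  # learns 'right' despite the bad demo
```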
arXiv Detail & Related papers (2022-10-01T17:38:19Z)
- Active Learning-Based Optimization of Scientific Experimental Design [1.9705094859539976]
Active learning (AL) is a machine learning approach that can achieve greater accuracy with fewer labeled training instances.
This article performs a retrospective study on a drug response dataset using the proposed AL scheme.
It shows that scientific experimental design, instead of being manually set, can be optimized by AL.
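The underlying loop is the standard pool-based AL cycle sketched below, with synthetic data standing in for the drug-response dataset; the authors' specific AL scheme is not reproduced.

```python
# Minimal pool-based active learning loop: train, score uncertainty on the
# unlabeled pool, query the most uncertain point, repeat (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0]) > 0).astype(int)

# Seed set with both classes represented, so the first fit is well posed.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    p = model.predict_proba(X[pool])[:, 1]
    pick = int(np.argmin(np.abs(p - 0.5)))   # most uncertain pool point
    labeled.append(pool.pop(pick))           # "run the experiment": query its label

print("accuracy:", model.score(X, y), "with", len(labeled), "labels")
```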
arXiv Detail & Related papers (2021-12-29T20:02:35Z)
- Efficient and accurate group testing via Belief Propagation: an empirical study [5.706360286474043]
The group testing problem asks for efficient pooling schemes and algorithms.
The goal is to accurately identify the infected samples while conducting the least possible number of tests.
We suggest a new test design that significantly increases the accuracy of the results.
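For intuition about why pooling saves tests, the sketch below runs classic two-stage Dorfman pooling on synthetic samples; the paper's Belief Propagation-based design is more refined and is not shown here.

```python
# Two-stage Dorfman pooling: test each pool once, then retest the members of
# positive pools individually (perfect tests assumed; synthetic samples).
import numpy as np

rng = np.random.default_rng(0)
n, prevalence, pool_size = 1000, 0.02, 10
infected = rng.random(n) < prevalence

tests = 0
found = np.zeros(n, dtype=bool)
for start in range(0, n, pool_size):
    pool = slice(start, start + pool_size)
    tests += 1                              # stage 1: test the whole pool
    if infected[pool].any():                # positive pool: retest individually
        tests += pool_size
        found[pool] = infected[pool]

assert (found == infected).all()            # all infected samples identified
print(f"{tests} tests for {n} samples ({tests / n:.0%} of individual testing)")
```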
arXiv Detail & Related papers (2021-05-13T10:52:46Z)
- Predicting Performance for Natural Language Processing Tasks [128.34208911925424]
We build regression models to predict the evaluation score of an NLP experiment given the experimental settings as input.
Experimenting on 9 different NLP tasks, we find that our predictors can produce meaningful predictions over unseen languages and different modeling architectures.
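In spirit, such a predictor is a regression from featurized experimental settings to an evaluation score, as in the sketch below; the features and data here are synthetic stand-ins, not the authors' actual setup.

```python
# Regressing an experiment's score on its settings (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical settings: log data size, log model size, transfer-learning flag.
X = np.column_stack([rng.uniform(3, 7, 200),
                     rng.uniform(1, 3, 200),
                     rng.integers(0, 2, 200)])
score = 10 * X[:, 0] + 5 * X[:, 1] + 8 * X[:, 2] + rng.normal(0, 3, 200)

model = GradientBoostingRegressor().fit(X[:150], score[:150])
pred = model.predict(X[150:])               # predict scores for unseen settings
print("MAE on held-out settings:", np.abs(pred - score[150:]).mean().round(2))
```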
arXiv Detail & Related papers (2020-05-02T16:02:18Z)