Empirica: a virtual lab for high-throughput macro-level experiments
- URL: http://arxiv.org/abs/2006.11398v2
- Date: Wed, 30 Dec 2020 15:57:28 GMT
- Title: Empirica: a virtual lab for high-throughput macro-level experiments
- Authors: Abdullah Almaatouq, Joshua Becker, James P. Houghton, Nicolas Paton,
Duncan J. Watts, Mark E. Whiting
- Abstract summary: Empirica is a modular virtual lab that offers a solution to the usability-functionality trade-off.
Empirica's architecture is designed to allow for parameterizable experimental designs, reusable protocols, and rapid development.
- Score: 4.077787659104315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual labs allow researchers to design high-throughput and macro-level
experiments that are not feasible in traditional in-person physical lab
settings. Despite the increasing popularity of online research, researchers
still face many technical and logistical barriers when designing and deploying
virtual lab experiments. While several platforms exist to facilitate the
development of virtual lab experiments, they typically present researchers with
a stark trade-off between usability and functionality. We introduce Empirica: a
modular virtual lab that offers a solution to the usability-functionality
trade-off by employing a "flexible defaults" design strategy. This strategy
enables us to maintain complete "build anything" flexibility while offering a
development platform that is accessible to novice programmers. Empirica's
architecture is designed to allow for parameterizable experimental designs,
reusable protocols, and rapid development. These features will increase the
accessibility of virtual lab experiments, remove barriers to innovation in
experiment design, and enable rapid progress in the understanding of
distributed human computation.
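To make the "flexible defaults" strategy concrete, the sketch below shows the general pattern in Python (Empirica itself is built in JavaScript, so every name here is illustrative rather than Empirica's actual API): every setting ships with a sensible default, any setting can be overridden, and crossing factor levels yields a parameterizable factorial design.

```python
from itertools import product

# Illustrative platform defaults (hypothetical values, not Empirica's
# real configuration schema).
DEFAULTS = {
    "players_per_game": 4,
    "round_duration_s": 60,
    "num_rounds": 10,
    "chat_enabled": True,
}

def make_treatments(factors, overrides=None):
    """Cross all factor levels into treatments; any setting not
    explicitly overridden falls back to the platform default."""
    base = {**DEFAULTS, **(overrides or {})}
    names, levels = zip(*factors.items())
    return [{**base, **dict(zip(names, combo))}
            for combo in product(*levels)]

# A 2x2 design: network structure x incentive scheme.
treatments = make_treatments(
    factors={"network": ["full", "ring"], "incentive": ["flat", "tournament"]},
    overrides={"num_rounds": 20},  # researcher overrides one default
)

for t in treatments:
    print(t["network"], t["incentive"], t["num_rounds"])
```

The point of the pattern is that a novice touches only the factors of interest while the full configuration surface remains reachable, which is the usability-functionality compromise the abstract describes.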
Related papers
- Honegumi: An Interface for Accelerating the Adoption of Bayesian Optimization in the Experimental Sciences [0.0]
We introduce Honegumi, a user-friendly, interactive tool designed to simplify the process of creating advanced Bayesian optimization scripts.
Honegumi offers a dynamic selection grid that allows users to configure key parameters of their optimization tasks, generating ready-to-use, unit-tested Python scripts.
Accompanying the interface is a comprehensive suite of tutorials that provide both conceptual and practical guidance.
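As a library-free illustration of what a generated optimization script does, here is a minimal Bayesian-optimization loop (a sketch under simplifying assumptions: a 1-D toy problem, an RBF Gaussian-process surrogate, and an upper-confidence-bound acquisition rule; none of this is Honegumi's actual output, which targets real BO libraries).

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def objective(x):
    # Toy black-box function to maximize.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)          # initial design
y = objective(X)
grid = np.linspace(0, 1, 200)     # candidate points

for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * sd           # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmax(y)], "best y:", y.max())
```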
arXiv Detail & Related papers (2025-02-04T23:53:59Z)
- VISION: A Modular AI Assistant for Natural Human-Instrument Interaction at Scientific User Facilities [0.19736111241221438]
Generative AI presents an opportunity to bridge the knowledge gap between facility users and complex scientific instruments.
We present a modular architecture for the Virtual Scientific Companion (VISION).
With VISION, we performed LLM-based operation on the beamline workstation with low latency and demonstrated the first voice-controlled experiment at an X-ray scattering beamline.
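VISION's actual architecture is not reproduced here; the following is a hypothetical sketch of the general pattern such an assistant follows: transcribe speech, have a language model map the utterance to a structured command, validate it, and dispatch it to the instrument control layer. Every function and command name below is invented for illustration.

```python
# Hypothetical pipeline sketch: all names below are invented and are
# not VISION's actual components or API.
import json

def transcribe(audio_bytes: bytes) -> str:
    # Stand-in for a speech-to-text model.
    return "set the sample temperature to 300 kelvin"

def llm_to_command(utterance: str) -> dict:
    # Stand-in for an LLM that emits a structured command as JSON;
    # a real system would prompt a model and validate its output.
    return json.loads('{"action": "set_temperature", "value": 300, "unit": "K"}')

DISPATCH = {
    # Maps validated actions to (hypothetical) instrument calls.
    "set_temperature": lambda cmd: print(f"-> instrument: T = {cmd['value']} {cmd['unit']}"),
}

def handle(audio_bytes: bytes) -> None:
    cmd = llm_to_command(transcribe(audio_bytes))
    if cmd["action"] not in DISPATCH:   # reject anything unrecognized
        raise ValueError(f"unsupported action: {cmd['action']}")
    DISPATCH[cmd["action"]](cmd)

handle(b"")  # prints: -> instrument: T = 300 K
```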
arXiv Detail & Related papers (2024-12-24T04:37:07Z)
- Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System [62.832818186789545]
Virtual Scientists (VirSci) is a multi-agent system designed to mimic the teamwork inherent in scientific research.
VirSci organizes a team of agents to collaboratively generate, evaluate, and refine research ideas.
We show that this multi-agent approach outperforms the state-of-the-art method in producing novel scientific ideas.
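The generate-evaluate-refine loop the summary describes can be sketched as follows (stub agents stand in for the LLM-backed agents VirSci actually uses, so the loop is runnable; all names are illustrative).

```python
import random

# Stub "agents": in VirSci these would be LLM-backed; here each is a
# simple function so the generate-evaluate-refine loop runs as-is.
def generate(topic, rng):
    return f"{topic} via approach #{rng.randint(1, 100)}"

def evaluate(idea, rng):
    return rng.random()  # stand-in for a reviewer agent's score

def refine(idea):
    return idea + " (refined)"

def team_ideation(topic, n_agents=3, n_rounds=2, seed=0):
    rng = random.Random(seed)
    ideas = [generate(topic, rng) for _ in range(n_agents)]
    for _ in range(n_rounds):
        scored = sorted(ideas, key=lambda i: evaluate(i, rng), reverse=True)
        ideas = [refine(i) for i in scored[: max(1, n_agents // 2)]]  # keep the best
    return ideas

print(team_ideation("distributed human computation"))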
arXiv Detail & Related papers (2024-10-12T07:16:22Z)
- DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents [49.74065769505137]
We introduce DISCOVERYWORLD, the first virtual environment for developing and benchmarking an agent's ability to perform complete cycles of novel scientific discovery.
It includes 120 different challenge tasks spanning eight topics, each with three levels of difficulty and several parametric variations.
We find that strong baseline agents that perform well in prior published environments struggle on most DISCOVERYWORLD tasks.
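The task-grid arithmetic can be made explicit; note that five parametric variations per topic-difficulty cell is an assumption consistent with the reported total of 120, and the topic and difficulty labels below are placeholders.

```python
from itertools import product

# 8 topics x 3 difficulties x 5 parametric variations = 120 tasks.
# Five variations per cell is an assumption consistent with the total;
# topic and difficulty names are placeholders.
topics = [f"topic_{i}" for i in range(1, 9)]
difficulties = ["easy", "normal", "challenge"]
variations = range(1, 6)

tasks = [f"{t}/{d}/v{v}" for t, d, v in product(topics, difficulties, variations)]
print(len(tasks))  # 120
```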
arXiv Detail & Related papers (2024-06-10T20:08:44Z)
- MLXP: A Framework for Conducting Replicable Experiments in Python [63.37350735954699]
We propose MLXP, an open-source, simple, and lightweight experiment management tool based on Python.
It streamlines the experimental process with minimal practitioner overhead while ensuring a high level of reproducibility.
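MLXP's own decorator-based API is not reproduced here; the following is a generic sketch of the pattern such experiment managers implement: each run receives a config, gets an isolated output directory, and persists both its exact configuration and its metrics alongside the results.

```python
import json, pathlib, time

def run_logged(config: dict, fn, root="./logs"):
    """Generic experiment-logging pattern (illustrative, not MLXP's
    API): each run gets a directory holding its config and metrics."""
    run_dir = pathlib.Path(root) / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "config.json").write_text(json.dumps(config, indent=2))
    metrics = fn(config)
    (run_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
    return run_dir

def experiment(cfg):
    # Stand-in training loop; returns metrics to be persisted.
    return {"loss": 1.0 / cfg["lr"], "epochs": cfg["epochs"]}

print(run_logged({"lr": 10.0, "epochs": 3}, experiment))
```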
arXiv Detail & Related papers (2024-02-21T14:22:20Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators provide a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
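A canonical instance is the Fourier neural operator; the numpy sketch below runs a single randomly initialized layer on a function sampled on a grid (forward pass only, a simplified illustration rather than any library's implementation): a spectral convolution on the low Fourier modes plus a pointwise linear term, followed by a nonlinearity.

```python
import numpy as np

def fourier_layer(v, W, R, sigma=np.tanh):
    """One Fourier-neural-operator-style layer (forward pass only).
    v: function values on a uniform grid, shape (n,).
    W: pointwise linear weight (a scalar here).
    R: learned multipliers for the lowest len(R) Fourier modes."""
    v_hat = np.fft.rfft(v)
    v_hat[: len(R)] *= R               # spectral convolution on low modes
    v_hat[len(R):] = 0                 # truncate high frequencies
    conv = np.fft.irfft(v_hat, n=len(v))
    return sigma(W * v + conv)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
v = np.sin(x)                          # input function sampled on the grid
R = rng.normal(size=8) + 1j * rng.normal(size=8)
out = fourier_layer(v, W=0.5, R=R)
print(out.shape)  # (128,) - the layer maps functions to functions
```

Because the layer acts in function space, the same learned weights can be applied at other grid resolutions, which is what lets neural operators substitute for conventional solvers.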
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- SIERRA: A Modular Framework for Research Automation [5.220940151628734]
We present SIERRA, a novel framework for accelerating research development and improving results.
SIERRA makes it easy to quickly specify the independent variable(s) for an experiment, generate experimental inputs, automatically run the experiment, and process the results to generate deliverables such as graphs and videos.
It employs a deeply modular approach that allows easy customization and extension of automation for the needs of individual researchers.
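The pipeline SIERRA automates can be sketched end to end in a few lines (illustrative names, not SIERRA's API): declare the independent variable, generate one experiment input per level, run each configuration several times, and summarize the results into a deliverable.

```python
import statistics

# Generic sketch of the sweep pipeline (illustrative, not SIERRA's API).
def generate_inputs(swarm_sizes):
    return [{"swarm_size": n, "ticks": 1000} for n in swarm_sizes]

def run_experiment(cfg, n_trials=5):
    # Stand-in for launching the real simulator n_trials times.
    return [cfg["swarm_size"] * 0.9 + trial * 0.01 for trial in range(n_trials)]

results = {}
for cfg in generate_inputs(swarm_sizes=[10, 50, 100]):   # independent variable
    trials = run_experiment(cfg)
    results[cfg["swarm_size"]] = (statistics.mean(trials), statistics.stdev(trials))

for size, (mean, sd) in results.items():   # "deliverable": a summary table
    print(f"swarm_size={size}: {mean:.2f} +/- {sd:.2f}")
```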
arXiv Detail & Related papers (2022-03-03T23:45:46Z)
- Experiments as Code: A Concept for Reproducible, Auditable, Debuggable, Reusable, & Scalable Experiments [7.557948558412152]
A common concern in experimental research is the auditability and reproducibility of experiments.
We propose the "Experiments as Code" paradigm, where the whole experiment is not only documented but additionally the automation code is provided.
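The paradigm in miniature (an illustrative sketch, not the paper's tooling): the script is the experiment, so anyone can re-run, audit, and debug it, because everything needed, including the randomness, is declared up front.

```python
import hashlib, json, random

# "Experiments as Code" in miniature: the configuration, the procedure,
# and the audit record all live in one runnable script.
CONFIG = {"seed": 42, "n_participants": 100, "effect": 0.3}

def run(config):
    rng = random.Random(config["seed"])                     # pinned randomness
    control = [rng.gauss(0, 1) for _ in range(config["n_participants"])]
    treated = [rng.gauss(config["effect"], 1) for _ in range(config["n_participants"])]
    return sum(treated) / len(treated) - sum(control) / len(control)

record = {"config": CONFIG, "estimate": run(CONFIG)}
record["digest"] = hashlib.sha256(
    json.dumps(record, sort_keys=True).encode()).hexdigest()
print(record)  # identical on every machine: auditable by construction
```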
arXiv Detail & Related papers (2022-02-24T12:15:00Z)
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
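What makes the gradient-based methods possible is that the simulator itself is differentiable. A toy illustration (a 1-D point mass with an analytic gradient, nothing like PlasticineLab's soft-body physics): because each simulation step is differentiable, the final-state loss can be backpropagated to the control and minimized by gradient descent.

```python
# Gradient-based control through a differentiable simulator (toy 1-D
# point mass, not PlasticineLab's soft-body engine).
def simulate(a, steps=20, dt=0.1):
    x, v = 0.0, 0.0
    for _ in range(steps):      # each step is differentiable in a
        v += a * dt
        x += v * dt
    return x

target = 1.0
a = 0.0
for _ in range(100):
    # x is linear in a here, so dx/da = simulate(1.0) exactly.
    grad = 2 * (simulate(a) - target) * simulate(1.0)
    a -= 0.05 * grad            # gradient descent on the control
print(a, simulate(a))           # force that places the mass at the target
```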
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
- Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents [61.36681529571202]
We describe a new concept for reproducible robotics research that integrates development and benchmarking.
One of the central components of this setup is the Duckietown Autolab, a standardized setup that is itself relatively low-cost and reproducible.
We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs.
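The repeatability claim boils down to comparing the spread of a benchmark metric within each lab against the spread of lab means; a sketch with synthetic numbers (invented for illustration, not the paper's measurements):

```python
import statistics

# Repeatability sketch: within-lab vs. between-lab variance.
# The trial values are synthetic, not the paper's data.
trials = {
    "lab_A": [0.92, 0.94, 0.93, 0.95],
    "lab_B": [0.91, 0.93, 0.92, 0.94],
    "lab_C": [0.93, 0.92, 0.94, 0.93],
}

within = statistics.mean(statistics.pvariance(v) for v in trials.values())
between = statistics.pvariance([statistics.mean(v) for v in trials.values()])
print(f"within-lab variance:  {within:.5f}")
print(f"between-lab variance: {between:.5f}")  # low -> reproducible across labs
```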
arXiv Detail & Related papers (2020-09-09T15:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.