An open-source Modular Online Psychophysics Platform (MOPP)
- URL: http://arxiv.org/abs/2505.23137v1
- Date: Thu, 29 May 2025 06:24:36 GMT
- Title: An open-source Modular Online Psychophysics Platform (MOPP)
- Authors: Yuval Samoilov-Kats, Matan Noach, Noam Beer, Yuval Efrati, Adam Zaidel
- Abstract summary: We introduce an open-source Modular Online Psychophysics Platform (MOPP) to address these challenges. Through the simple web-based interface of MOPP, researchers can build modular experiments, share them with others, and copy or modify tasks from each other's environments. MOPP provides built-in features to calibrate for viewing distance and to measure visual acuity.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, there has been a growing need and opportunity to use online platforms for psychophysics research. Online experiments make it possible to evaluate large and diverse populations remotely and quickly, complementing laboratory-based research. However, developing and running online psychophysics experiments poses several challenges: i) a high barrier-to-entry for researchers, who often need to learn complex code-based platforms, ii) an uncontrolled experimental environment, and iii) questionable credibility of the participants. Here, we introduce an open-source Modular Online Psychophysics Platform (MOPP) to address these challenges. Through the simple web-based interface of MOPP, researchers can build modular experiments, share them with others, and copy or modify tasks from each other's environments. MOPP provides built-in features to calibrate for viewing distance and to measure visual acuity. It also includes email-based and IP-based authentication, and reCAPTCHA verification. We developed five example psychophysics tasks that come preloaded in the environment, and ran a pilot experiment hosted on the AWS (Amazon Web Services) cloud. Pilot data collected for these tasks yielded similar results to those reported in laboratory settings. MOPP can thus help researchers collect large psychophysics datasets online, with reduced turnaround time, and in a standardized manner.
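For context, browser-based psychophysics tools commonly implement the viewing-distance calibration mentioned in the abstract in two steps: a card-matching step to estimate the screen's pixel density, followed by a blind-spot test to estimate viewing distance. The TypeScript sketch below illustrates that standard approach under stated assumptions; the function names and constants are illustrative, not MOPP's actual API.

```typescript
// Minimal sketch of the common card-matching + blind-spot calibration
// used by web psychophysics tools. Assumed values: an ISO/IEC 7810 ID-1
// card (85.6 mm wide) and a blind spot ~13.5 deg temporal of fixation.

const CARD_WIDTH_MM = 85.6;   // physical width of a standard bank card
const BLIND_SPOT_DEG = 13.5;  // typical eccentricity of the blind spot

/** Step 1: the participant resizes an on-screen box until it matches a
 *  physical card held against the screen, yielding pixels per mm. */
function pixelsPerMm(matchedBoxWidthPx: number): number {
  return matchedBoxWidthPx / CARD_WIDTH_MM;
}

/** Step 2: with one eye covered, the participant fixates a point while a
 *  dot drifts outward; the fixation-to-dot distance (in px) at the moment
 *  the dot vanishes into the blind spot gives the viewing distance. */
function viewingDistanceMm(fixationToDotPx: number, pxPerMm: number): number {
  const offsetMm = fixationToDotPx / pxPerMm;
  return offsetMm / Math.tan((BLIND_SPOT_DEG * Math.PI) / 180);
}

/** Convert a stimulus size in degrees of visual angle to pixels, so
 *  stimuli render at a consistent retinal size across participants. */
function degToPx(deg: number, distanceMm: number, pxPerMm: number): number {
  return 2 * distanceMm * Math.tan((deg * Math.PI) / 360) * pxPerMm;
}

// Example: box matched at 300 px; dot vanished 420 px from fixation.
const pxMm = pixelsPerMm(300);                // ~3.5 px/mm
const distMm = viewingDistanceMm(420, pxMm);  // ~500 mm viewing distance
console.log(degToPx(1, distMm, pxMm));        // px needed for a 1 deg stimulus
```

Once pixel density and viewing distance are known, stimuli specified in degrees of visual angle render at a consistent retinal size across participants' heterogeneous monitors, which is what makes remotely collected psychophysics data comparable to laboratory results.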
Related papers
- From Reproduction to Replication: Evaluating Research Agents with Progressive Code Masking [48.90371827091671]
AutoExperiment is a benchmark that evaluates AI agents' ability to implement and run machine learning experiments. We evaluate state-of-the-art agents and find that performance degrades rapidly as the amount of masked code ($n$) increases. Our findings highlight critical challenges in long-horizon code generation, context retrieval, and autonomous experiment execution.
arXiv Detail & Related papers (2025-06-24T15:39:20Z) - ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows [82.07367406991678]
Large Language Models (LLMs) have extended their impact beyond Natural Language Processing. Among these, computer-using agents are capable of interacting with operating systems as humans do. We introduce ScienceBoard, which encompasses a realistic, multi-domain environment featuring dynamic and visually rich scientific software.
arXiv Detail & Related papers (2025-05-26T12:27:27Z) - VISION: A Modular AI Assistant for Natural Human-Instrument Interaction at Scientific User Facilities [0.19736111241221438]
Generative AI presents an opportunity to bridge this knowledge gap. We present a modular architecture for the Virtual Scientific Companion (VISION). With VISION, we performed LLM-based operation of the beamline workstation with low latency and demonstrated the first voice-controlled experiment at an X-ray scattering beamline.
arXiv Detail & Related papers (2024-12-24T04:37:07Z) - Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System [62.832818186789545]
Virtual Scientists (VirSci) is a multi-agent system designed to mimic the teamwork inherent in scientific research. VirSci organizes a team of agents to collaboratively generate, evaluate, and refine research ideas. We show that this multi-agent approach outperforms the state-of-the-art method in producing novel scientific ideas.
arXiv Detail & Related papers (2024-10-12T07:16:22Z) - DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents [49.74065769505137]
We introduce DISCOVERYWORLD, the first virtual environment for developing and benchmarking an agent's ability to perform complete cycles of novel scientific discovery.
It includes 120 different challenge tasks spanning eight topics each with three levels of difficulty and several parametric variations.
We find that strong baseline agents, that perform well in prior published environments, struggle on most DISCOVERYWORLD tasks.
arXiv Detail & Related papers (2024-06-10T20:08:44Z) - MLXP: A Framework for Conducting Replicable Experiments in Python [63.37350735954699]
We propose MLXP, an open-source, simple, and lightweight experiment management tool based on Python.
It streamlines the experimental process with minimal practitioner overhead while ensuring a high level of reproducibility.
arXiv Detail & Related papers (2024-02-21T14:22:20Z) - GFlowNets for AI-Driven Scientific Discovery [74.27219800878304]
We present a new probabilistic machine learning framework called GFlowNets.
GFlowNets can be applied in the modeling, hypotheses generation and experimental design stages of the experimental science loop.
We argue that GFlowNets can become a valuable tool for AI-driven scientific discovery.
arXiv Detail & Related papers (2023-02-01T17:29:43Z) - SIERRA: A Modular Framework for Research Automation and Reproducibility [6.1678491628787455]
We present SIERRA, a novel framework for accelerating research development and improving results.
SIERRA accelerates research by automating the process of generating executable experiments from queries over independent variables.
It employs a modular architecture enabling easy customization and extension for the needs of individual researchers.
arXiv Detail & Related papers (2022-08-16T15:36:34Z) - SIERRA: A Modular Framework for Research Automation [5.220940151628734]
We present SIERRA, a novel framework for accelerating research developments and improving results.
SIERRA makes it easy to quickly specify the independent variable(s) for an experiment, generate experimental inputs, automatically run the experiment, and process the results to generate deliverables such as graphs and videos.
It employs a deeply modular approach that allows easy customization and extension of automation for the needs of individual researchers.
arXiv Detail & Related papers (2022-03-03T23:45:46Z) - A user-centered approach to designing an experimental laboratory data platform [0.0]
We take a user-centered approach to understand what essential elements of design and functionality researchers want in an experimental data platform.
We find that having the capability to contextualize rich, complex experimental datasets is the primary user requirement.
arXiv Detail & Related papers (2020-07-28T19:26:28Z) - Empirica: a virtual lab for high-throughput macro-level experiments [4.077787659104315]
Empirica is a modular virtual lab that offers a solution to the usability-functionality trade-off.
Empirica's architecture is designed to allow for parameterizable experimental designs, reusable protocols, and rapid development.
arXiv Detail & Related papers (2020-06-19T21:28:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.