OpenSync: An opensource platform for synchronizing multiple measures in
neuroscience experiments
- URL: http://arxiv.org/abs/2107.14367v1
- Date: Thu, 29 Jul 2021 23:09:55 GMT
- Authors: Moein Razavi, Vahid Janfaza, Takashi Yamauchi, Anton Leontyev, Shanle
Longmire-Monford, Joseph Orr
- Abstract summary: This paper introduces an open-source platform named OpenSync that synchronizes multiple measures in neuroscience experiments.
The platform automatically integrates, synchronizes, and records physiological measures (e.g., electroencephalogram (EEG), galvanic skin response (GSR), eye tracking, and body motion), user input (e.g., from mouse, keyboard, or joystick), and task-related information (stimulus markers).
Our experimental results show that OpenSync synchronizes multiple measures with microsecond resolution.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: The human mind is multimodal, yet most behavioral studies
rely on century-old measures such as task accuracy and latency. To better
understand human behavior and brain function, we should introduce other
measures and analyze behavior from multiple angles. However, designing and
implementing experiments that record multiple measures is technically complex
and costly. Addressing this issue requires a platform that can synchronize
multiple measures of human behavior. Method: This paper introduces an
open-source platform named OpenSync, which can be used to synchronize multiple
measures in neuroscience experiments. The platform automatically integrates,
synchronizes, and records physiological measures (e.g., electroencephalogram
(EEG), galvanic skin response (GSR), eye tracking, and body motion), user input
(e.g., from mouse, keyboard, or joystick), and task-related information
(stimulus markers). We explain the structure and details of OpenSync and
provide two case studies, in PsychoPy and Unity. Comparison with existing
tools: Unlike proprietary systems (e.g., iMotions), OpenSync is free and can be
used inside any open-source experiment-design software (e.g., PsychoPy,
OpenSesame, or Unity; see https://pypi.org/project/OpenSync/ and
https://github.com/moeinrazavi/OpenSync_Unity). Results: Our experimental
results show that OpenSync synchronizes multiple measures with microsecond
resolution.
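Conceptually, platforms like OpenSync work by stamping every sample and event against one shared high-resolution clock and aligning the streams afterward. The sketch below illustrates that idea with Python's standard library only; the `Stream` class and `nearest_sample` helper are illustrative stand-ins, not OpenSync's actual API.

```python
import time

def timestamp():
    """Shared high-resolution monotonic clock (sub-microsecond on most platforms)."""
    return time.perf_counter()

class Stream:
    """A toy recording stream: each pushed sample is stamped with the shared clock."""
    def __init__(self, name):
        self.name = name
        self.samples = []  # list of (timestamp, value) pairs

    def push(self, value):
        self.samples.append((timestamp(), value))

def nearest_sample(stream, t):
    """Align an event at time t to the closest sample in a stream."""
    return min(stream.samples, key=lambda s: abs(s[0] - t))

# Simulate two measures recorded against the same clock.
eeg = Stream("EEG")
markers = Stream("stimulus_markers")

for i in range(5):
    eeg.push(0.1 * i)            # fake EEG sample
markers.push("stimulus_onset")   # task event stamped on the same clock

t_marker, _ = markers.samples[0]
t_eeg, value = nearest_sample(eeg, t_marker)
print(f"marker at {t_marker:.6f}s aligns with EEG sample {value} at {t_eeg:.6f}s")
```

Because both streams share one clock, alignment reduces to a timestamp lookup; the achievable resolution is bounded by the clock's granularity and each device driver's stamping latency.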
Related papers
- OpenOmni: A Collaborative Open Source Tool for Building Future-Ready Multimodal Conversational Agents [11.928422245125985]
OpenOmni is an open-source, end-to-end pipeline benchmarking tool.
It integrates advanced technologies such as Speech-to-Text, Emotion Detection, Retrieval-Augmented Generation, and Large Language Models.
It supports local and cloud deployment, ensuring data privacy and supporting latency and accuracy benchmarking.
arXiv Detail & Related papers (2024-08-06T09:02:53Z)
- Synchformer: Efficient Synchronization from Sparse Cues [100.89656994681934]
Our contributions include a novel audio-visual synchronization model and a training scheme that decouples feature extraction from synchronization modelling.
This approach achieves state-of-the-art performance in both dense and sparse settings.
We also extend synchronization model training to AudioSet, a million-scale 'in-the-wild' dataset; investigate evidence attribution techniques for interpretability; and explore a new capability for synchronization models: audio-visual synchronizability.
arXiv Detail & Related papers (2024-01-29T18:59:55Z)
- GestSync: Determining who is speaking without a talking head [67.75387744442727]
We introduce Gesture-Sync: determining if a person's gestures are correlated with their speech or not.
In comparison to Lip-Sync, Gesture-Sync is far more challenging as there is a far looser relationship between the voice and body movement.
We show that the model can be trained using self-supervised learning alone, and evaluate its performance on the LRS3 dataset.
arXiv Detail & Related papers (2023-10-08T22:48:30Z)
- Sync+Sync: A Covert Channel Built on fsync with Storage [2.800768893804362]
We build a covert channel named Sync+Sync for persistent storage.
Sync+Sync delivers a transmission bandwidth of 20,000 bits per second at an error rate of about 0.40% with an ordinary solid-state drive.
We launch side-channel attacks with Sync+Sync and manage to precisely detect operations of a victim database.
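The channel rests on a simple primitive: the latency of `fsync` on a shared storage device is observable and is inflated by a co-located writer's activity. A minimal sketch of measuring that primitive (standard library only; this illustrates the timing signal, not the paper's encoding protocol):

```python
import os
import tempfile
import time

def fsync_latency(path, data=b"x"):
    """Time a single write + fsync to `path`.

    Contention on the same device inflates this latency, which is the
    timing signal a Sync+Sync-style channel modulates and observes.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        t0 = time.perf_counter()
        os.fsync(fd)                      # force the write to stable storage
        return time.perf_counter() - t0
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    probe = os.path.join(d, "probe")
    samples = [fsync_latency(probe) for _ in range(10)]
    print(f"median fsync latency: {sorted(samples)[5] * 1e6:.1f} us")
```

A sender would alternate between issuing and withholding heavy fsync traffic to encode bits, while the receiver thresholds latencies like these to decode them.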
arXiv Detail & Related papers (2023-09-14T12:22:29Z)
- StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator [85.40502725367506]
We propose StyleSync, an effective framework that enables high-fidelity lip synchronization.
Specifically, we design a mask-guided spatial information encoding module that preserves the details of the given face.
Our design also enables personalized lip-sync by introducing style space and generator refinement on only limited frames.
arXiv Detail & Related papers (2023-05-09T13:38:13Z)
- Synthcity: facilitating innovative use cases of synthetic data in different data modalities [86.52703093858631]
Synthcity is an open-source software package supporting innovative use cases of synthetic data in ML fairness, privacy, and augmentation.
Synthcity provides practitioners with a single access point to cutting-edge research and tools in synthetic data.
arXiv Detail & Related papers (2023-01-18T14:49:54Z)
- Dyadic Movement Synchrony Estimation Under Privacy-preserving Conditions [7.053333608725945]
This paper proposes an ensemble method for movement synchrony estimation under privacy-preserving conditions.
Our method relies entirely on publicly shareable, identity-agnostic secondary data, such as skeleton data and optical flow.
We validate our method on two datasets: (1) PT13 dataset collected from autism therapy interventions and (2) TASD-2 dataset collected from synchronized diving competitions.
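As a rough illustration of synchrony estimation from identity-agnostic signals, a zero-lag Pearson correlation between two per-frame motion series can serve as a naive synchrony score. The sketch below is not the paper's ensemble method, and the two speed series are fabricated for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation: a simple synchrony proxy for two equal-length
    movement time series (e.g., per-frame joint speeds from skeleton data)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two partners' per-frame wrist speeds: the second co-varies with the first.
a = [0.0, 0.2, 0.5, 0.9, 0.5, 0.2, 0.0, 0.1]
b = [0.1, 0.1, 0.4, 0.8, 0.6, 0.3, 0.1, 0.0]
print(f"synchrony score: {pearson(a, b):.3f}")  # a score near 0.94
```

Real estimators additionally handle temporal lags and combine multiple secondary modalities, which is where ensemble methods like the paper's come in.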
arXiv Detail & Related papers (2022-08-01T18:59:05Z)
- Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z)
- Echo-SyncNet: Self-supervised Cardiac View Synchronization in Echocardiography [11.407910072022018]
We propose Echo-SyncNet, a self-supervised learning framework to synchronize various cross-sectional 2D echo series without any external input.
We show promising results for synchronizing Apical 2 chamber and Apical 4 chamber cardiac views.
We also show the usefulness of the learned representations in a one-shot learning scenario of cardiac detection.
arXiv Detail & Related papers (2021-02-03T20:48:16Z)
- Single-Frame based Deep View Synchronization for Unsynchronized Multi-Camera Surveillance [56.964614522968226]
Multi-camera surveillance has been an active research topic for understanding and modeling scenes.
It is usually assumed that the cameras are all temporally synchronized when designing models for these multi-camera based tasks.
Our view synchronization models are applied to different DNN-based multi-camera vision tasks under the unsynchronized setting.
arXiv Detail & Related papers (2020-07-08T04:39:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.