The State of Pilot Study Reporting in Crowdsourcing: A Reflection on
Best Practices and Guidelines
- URL: http://arxiv.org/abs/2312.08090v1
- Date: Wed, 13 Dec 2023 12:13:40 GMT
- Authors: Jonas Oppenlaender, Tahir Abbas, Ujwal Gadiraju
- Abstract summary: A lack of details surrounding pilot studies in crowdsourcing research hinders the replication of studies and the reproduction of findings.
We conducted a systematic literature review on the current state of pilot study reporting at the intersection of crowdsourcing and HCI research.
We formulate a set of best practice guidelines for reporting crowd pilot studies in crowdsourcing research.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pilot studies are an essential cornerstone of the design of crowdsourcing
campaigns, yet they are often only mentioned in passing in the scholarly
literature. A lack of details surrounding pilot studies in crowdsourcing
research hinders the replication of studies and the reproduction of findings,
stalling potential scientific advances. We conducted a systematic literature
review on the current state of pilot study reporting at the intersection of
crowdsourcing and HCI research. Our review of ten years of literature included
171 articles published in the proceedings of the Conference on Human
Computation and Crowdsourcing (AAAI HCOMP) and the ACM Digital Library. We
found that pilot studies in crowdsourcing research (i.e., crowd pilot studies)
are often under-reported in the literature. Important details, such as the
number of workers and rewards to workers, are often not reported. On the basis
of our findings, we reflect on the current state of practice and formulate a
set of best practice guidelines for reporting crowd pilot studies in
crowdsourcing research. We also provide implications for the design of
crowdsourcing platforms and make practical suggestions for supporting crowd
pilot study reporting.
Related papers
- What Can Natural Language Processing Do for Peer Review? (arXiv, 2024-05-10)
  In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
  Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
  We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models (arXiv, 2024-04-11)
  ResearchAgent is a large language model-powered research idea writing agent.
  It generates problems, methods, and experiment designs while iteratively refining them based on scientific literature.
  We experimentally validate ResearchAgent on scientific publications across multiple disciplines.
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence (arXiv, 2024-02-20)
  This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
  Newly emerging AI-generated literature reviews are also appraised.
  This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
- Insights Towards Better Case Study Reporting in Software Engineering (arXiv, 2024-02-13)
  This paper aims to share insights that can enhance the quality and impact of case study reporting.
  We emphasize the need to follow established guidelines, classify case studies accurately, and provide detailed context descriptions.
  We aim to encourage researchers to adopt more rigorous and communicative strategies, ensuring that case studies are methodologically sound.
- Position: AI/ML Influencers Have a Place in the Academic Process (arXiv, 2024-01-24)
  We investigate the role of social media influencers in enhancing the visibility of machine learning research.
  We compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
  Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
- How WEIRD is Usable Privacy and Security Research? (Extended Version) (arXiv, 2023-05-08)
  We conducted a literature review to understand the extent to which participant samples in UPS papers were from WEIRD countries.
  Geographic and linguistic barriers in study and recruitment methods may cause researchers to conduct user studies locally.
- Navigating the reporting guideline environment for computational pathology: A review (arXiv, 2023-01-03)
  The aim of this work is to highlight resources and reporting guidelines available to researchers working in computational pathology.
  Items were compiled into a summary for easy identification of useful resources and guidance.
  Over 70 published resources applicable to pathology AI research were identified.
- NLPeer: A Unified Resource for the Computational Study of Peer Review (arXiv, 2022-11-12)
  We introduce NLPeer, the first ethically sourced multidomain corpus of more than 5k papers and 11k review reports from five different venues.
  We augment previous peer review datasets with parsed and structured paper representations, rich metadata, and versioning information.
  Our work paves the way towards a systematic, multi-faceted, evidence-based study of peer review in NLP and beyond.
- Yes-Yes-Yes: Donation-based Peer Reviewing Data Collection for ACL Rolling Review and Beyond (arXiv, 2022-01-27)
  We present an in-depth discussion of peer reviewing data, outline the ethical and legal desiderata for peer reviewing data collection, and propose the first continuous, donation-based data collection workflow.
  We report on the ongoing implementation of this workflow at the ACL Rolling Review and deliver the first insights obtained with the newly collected data.
- A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature (arXiv, 2021-07-02)
  The goal of this study is to characterize the reporting of empiricism in Computing Education Research (CER) literature.
  We conducted an SLR of 427 papers published in 2014 and 2015 in five CER venues.
  Over 80% of the papers included some form of empirical evaluation.
- Secondary Studies in the Academic Context: A Systematic Mapping and Survey (arXiv, 2020-07-10)
  The main goal of this study is to provide an overview of the use of secondary studies in an academic context.
  We conducted a systematic mapping to identify available and relevant studies on the use of secondary studies as a research methodology for conducting SE research projects.
  We also surveyed 64 SE researchers to identify their perceptions of the value of performing secondary studies to support their research projects.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.