Studying the Effects of Collaboration in Interactive Theme Discovery Systems
- URL: http://arxiv.org/abs/2408.09030v1
- Date: Fri, 16 Aug 2024 21:57:23 GMT
- Title: Studying the Effects of Collaboration in Interactive Theme Discovery Systems
- Authors: Alvin Po-Chun Chen, Dananjay Srinivas, Alexandra Barry, Maksim Seniw, Maria Leonor Pacheco
- Abstract summary: We study the impact of synchronous vs. asynchronous collaboration using two different NLP-assisted qualitative research tools.
We present a comprehensive analysis of significant differences in the consistency, cohesiveness, and correctness of their outputs.
- Score: 44.14387961657699
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: NLP-assisted solutions have gained considerable traction to support qualitative data analysis. However, there does not exist a unified evaluation framework that can account for the many different settings in which qualitative researchers may employ them. In this paper, we take a first step in this direction by proposing an evaluation framework to study the way in which different tools may result in different outcomes depending on the collaboration strategy employed. Specifically, we study the impact of synchronous vs. asynchronous collaboration using two different NLP-assisted qualitative research tools and present a comprehensive analysis of significant differences in the consistency, cohesiveness, and correctness of their outputs.
Related papers
- Assessing Teamwork Dynamics in Software Development Projects [2.823770863747379]
This study investigates teamwork dynamics in student software development projects through a mixed-method approach.
We analyzed individual contributions across six project phases, comparing self-reported and actual contributions to measure discrepancies.
Findings reveal that teams with minimal contribution discrepancies achieved higher project grades and exam pass rates.
arXiv Detail & Related papers (2025-01-21T08:23:46Z)
- Evaluating Ensemble Methods for News Recommender Systems [50.90330146667386]
This paper demonstrates how ensemble methods can combine many diverse state-of-the-art algorithms to achieve superior results on the Microsoft News dataset (MIND).
Our findings demonstrate that a combination of NRS algorithms can outperform individual algorithms, provided that the base learners are sufficiently diverse.
arXiv Detail & Related papers (2024-06-23T13:40:50Z)
- Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z)
- From Variability to Stability: Advancing RecSys Benchmarking Practices [3.3331198926331784]
This paper introduces a novel benchmarking methodology to facilitate a fair and robust comparison of RecSys algorithms.
By utilizing a diverse set of 30 open datasets, including two introduced in this work, we critically examine the influence of dataset characteristics on algorithm performance.
arXiv Detail & Related papers (2024-02-15T07:35:52Z)
- On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach, propelled by recent advancements in large language models (LLMs).
However, both paradigms are prone to the critical problem of overconfidence (i.e., miscalibration).
arXiv Detail & Related papers (2023-12-21T11:55:10Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics constitute a distributed approach for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Interactive Dimensionality Reduction for Comparative Analysis [28.52130400665133]
We introduce an interactive DR framework where we integrate our new DR method, called ULCA, with an interactive visual interface.
ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks.
We develop an optimization algorithm that enables analysts to interactively refine ULCA results.
arXiv Detail & Related papers (2021-06-29T15:05:36Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Systematic Ensemble Model Selection Approach for Educational Data Mining [8.26773636337474]
This work explores and analyzes two different datasets at two separate stages of course delivery.
It proposes a systematic approach based on Gini index and p-value to select a suitable ensemble learner from a combination of six potential machine learning algorithms.
Experimental results show that the proposed ensemble models achieve high accuracy and low false positive rate at all stages for both datasets.
arXiv Detail & Related papers (2020-05-13T22:25:58Z)
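The last entry selects an ensemble learner using a Gini-index-based criterion. As a minimal illustration of the underlying quantity (the function and the comparison below are illustrative assumptions, not the paper's actual selection procedure), Gini impurity measures how mixed a set of class labels is:

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a label list: 1 - sum(p_k^2) over class
    proportions p_k. 0.0 means a pure set; higher values mean
    more class mixing."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

# Illustrative use: a pure set vs. an evenly mixed set.
print(gini_impurity(["pass", "pass", "pass"]))          # → 0.0
print(gini_impurity(["pass", "fail", "pass", "fail"]))  # → 0.5
```

A selection procedure like the one summarized above would compare candidate learners on such a criterion (lower impurity in the induced groupings is better), combined with significance testing via p-values.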
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.