Closing the Evaluation Gap: Developing a Behavior-Oriented Framework for Assessing Virtual Teamwork Competency
- URL: http://arxiv.org/abs/2504.14531v1
- Date: Sun, 20 Apr 2025 08:12:27 GMT
- Title: Closing the Evaluation Gap: Developing a Behavior-Oriented Framework for Assessing Virtual Teamwork Competency
- Authors: Wenjie Hu, Cecilia Ka Yuk Chan
- Abstract summary: This study develops a behavior-oriented framework for assessing virtual teamwork competencies among engineering students. Using focus group interviews combined with the Critical Incident Technique, the study identified three key dimensions. The resulting framework provides a foundation for more effective assessment practices.
- Score: 6.169364905804677
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The growing reliance on remote work and digital collaboration has made virtual teamwork competencies essential for professional and academic success. However, the evaluation of such competencies remains a significant challenge. Existing assessment methods, predominantly based on self-reports and peer evaluations, often focus on short-term results or subjective perceptions rather than systematically examining observable teamwork behaviors. These limitations hinder the identification of specific areas for improvement and fail to support meaningful progress in skill development. Informed by group dynamic theory, this study developed a behavior-oriented framework for assessing virtual teamwork competencies among engineering students. Using focus group interviews combined with the Critical Incident Technique, the study identified three key dimensions - Group Task Dimension, Individual Task Dimension and Social Dimension - along with their behavioral indicators and student-perceived relationships between these components. The resulting framework provides a foundation for more effective assessment practices and supports the development of virtual teamwork competency essential for success in increasingly digital and globalized professional environments.
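As a rough illustration of how such a framework could be operationalized, the sketch below encodes the three dimensions as a machine-readable scoring rubric. The dimension names come from the paper, but the behavioral indicators are hypothetical placeholders, since the abstract does not enumerate the paper's full indicator list.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One observable teamwork behavior, scored on a simple frequency scale."""
    name: str
    score: int = 0  # e.g., 0 = not observed ... 3 = consistently observed

@dataclass
class Dimension:
    name: str
    indicators: list[Indicator] = field(default_factory=list)

    def mean_score(self) -> float:
        return sum(i.score for i in self.indicators) / len(self.indicators)

# The three dimension names come from the paper; the indicators are
# illustrative placeholders only.
rubric = [
    Dimension("Group Task Dimension",
              [Indicator("agrees on shared goals"), Indicator("tracks deliverables")]),
    Dimension("Individual Task Dimension",
              [Indicator("completes assigned work on time")]),
    Dimension("Social Dimension",
              [Indicator("responds constructively to feedback")]),
]

for dim in rubric:
    print(dim.name, dim.mean_score())
```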
Related papers
- Towards an intelligent assessment system for evaluating the development of algorithmic thinking skills: An exploratory study in Swiss compulsory schools [0.0]
This study aims to develop a comprehensive framework for large-scale assessment of CT skills, particularly focusing on AT, the ability to design algorithms. We first developed a competence model capturing the situated and developmental nature of CT, guiding the design of activities tailored to cognitive abilities, age, and context. We developed an activity for large-scale assessment of AT skills, offered in two variants: one based on non-digital artefacts (unplugged) and manual expert assessment, and the other based on digital artefacts (virtual) and automatic assessment.
arXiv Detail & Related papers (2025-03-27T13:34:36Z)
- Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [51.452664740963066]
Collaborative Gym is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments. We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions. Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
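A minimal sketch of what asynchronous, tripartite interaction can look like; this is illustrative only, and none of the names below are Co-Gym's actual API.

```python
import asyncio

# Hypothetical sketch: an agent and a human act concurrently, on their own
# clocks, against a shared task environment (the tripartite setup).
class TaskEnv:
    def __init__(self) -> None:
        self.log: list[tuple[str, str]] = []

    def step(self, actor: str, action: str) -> None:
        self.log.append((actor, action))

async def agent_loop(env: TaskEnv) -> None:
    for i in range(3):
        await asyncio.sleep(0.10)   # the agent works at its own pace
        env.step("agent", f"draft section {i}")

async def human_loop(env: TaskEnv) -> None:
    for comment in ("fix intro", "add citation"):
        await asyncio.sleep(0.15)   # the human interleaves feedback
        env.step("human", comment)

async def main() -> None:
    env = TaskEnv()
    await asyncio.gather(agent_loop(env), human_loop(env))
    print(env.log)                  # interleaved agent/human actions

asyncio.run(main())
```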
arXiv Detail & Related papers (2024-12-20T09:21:15Z)
- Code Collaborate: Dissecting Team Dynamics in First-Semester Programming Students [3.0294711465150006]
The study highlights the collaboration trends that emerge as first-semester students develop a 2D game project.
Results indicate that students often slightly overestimate their contributions, with more engaged individuals more likely to acknowledge mistakes.
Team performance shows no significant variation based on nationality or gender composition, though teams that disbanded frequently consisted of lone wolves.
arXiv Detail & Related papers (2024-10-28T11:42:05Z)
- Evaluating Human-AI Collaboration: A Review and Methodological Framework [4.41358655687435]
The use of artificial intelligence (AI) in working environments with individuals, known as Human-AI Collaboration (HAIC), has become essential. However, evaluating HAIC's effectiveness remains challenging due to the complex interaction of the components involved. This paper provides a detailed analysis of existing HAIC evaluation approaches and develops a fresh paradigm for more effectively evaluating these systems.
arXiv Detail & Related papers (2024-07-09T12:52:22Z)
- Investigating the Role of Instruction Variety and Task Difficulty in Robotic Manipulation Tasks [50.75902473813379]
This work introduces a comprehensive evaluation framework that systematically examines the role of instructions and inputs in the generalisation abilities of multimodal models for robotic manipulation.
The proposed framework uncovers the resilience of multimodal models to extreme instruction perturbations and their vulnerability to observational changes.
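For intuition, two generic instruction perturbations are sketched below; the paper's actual perturbation set is not specified in this summary, so these are assumptions.

```python
import random

def shuffle_words(instruction: str, rng: random.Random) -> str:
    """An extreme perturbation: destroy word order entirely."""
    words = instruction.split()
    rng.shuffle(words)
    return " ".join(words)

def drop_words(instruction: str, rng: random.Random, p: float = 0.3) -> str:
    """A milder perturbation: randomly delete words with probability p."""
    kept = [w for w in instruction.split() if rng.random() > p]
    return " ".join(kept) or instruction   # never return an empty instruction

rng = random.Random(0)
base = "pick up the red block and place it in the green bowl"
print(shuffle_words(base, rng))
print(drop_words(base, rng))
```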
arXiv Detail & Related papers (2024-07-04T14:36:49Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, and the typical interaction setting is a meeting of 3-4 persons sensed with microphones and cameras.
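To make the reported "speaking activity + SVM" recipe concrete, here is a minimal scikit-learn sketch on synthetic per-participant features; the features and labels are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-ins for speaking-activity features in a 3-4 person meeting:
# total speaking time, number of turns, overlap ratio. The binary label
# (e.g., "emergent leader") is invented for the sketch.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```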
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have proven to generate a task-agnostic signal to properly allocate the training time amongst goals.
While the majority of works in the field of intrinsically motivated open-ended learning focus on scenarios where goals are independent of each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences of tasks.
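A toy rendering of the intrinsic-motivation idea (not H-GRAIL itself): allocate training time to whichever goal's competence is improving fastest, with one goal blocked until an invented prerequisite is mastered.

```python
import random

competence = {"open_box": 0.0, "grasp_item": 0.0, "stack_items": 0.0}
progress = {g: 0.0 for g in competence}   # smoothed competence gain per goal

def train_on(goal: str) -> None:
    # Pretend training: an interdependent goal improves only once its
    # (invented) prerequisite is sufficiently mastered.
    old = competence[goal]
    blocked = goal == "grasp_item" and competence["open_box"] < 0.5
    gain = 0.0 if blocked else 0.1 * (1.0 - old)
    competence[goal] = old + gain
    progress[goal] = 0.5 * progress[goal] + 0.5 * gain

rng = random.Random(0)
for _ in range(50):
    # epsilon-greedy selection on competence progress, the intrinsic signal
    goal = (rng.choice(list(competence)) if rng.random() < 0.2
            else max(progress, key=progress.get))
    train_on(goal)

print(competence)
```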
arXiv Detail & Related papers (2022-05-16T10:43:01Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state estimation, control, planning, and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
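As a toy example of the state-estimation case, the sketch below does gradient descent on a scalar free energy (precision-weighted prediction error) under a linear Gaussian generative model; it is illustrative, not an implementation from the survey.

```python
# Free energy for a scalar hidden state mu with observation model g(mu) = mu:
#   F = pi_o * (o - mu)**2 / 2 + pi_mu * (mu - prior)**2 / 2
o, prior = 2.0, 0.0      # observation and prior belief
pi_o, pi_mu = 1.0, 0.5   # precisions (inverse variances)
mu, lr = 0.0, 0.1

for _ in range(200):
    dF_dmu = -pi_o * (o - mu) + pi_mu * (mu - prior)
    mu -= lr * dF_dmu    # descend the free-energy gradient

# The fixed point matches the exact posterior mean for this Gaussian model.
print(mu, (pi_o * o + pi_mu * prior) / (pi_o + pi_mu))
```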
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
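One standard quantification method is adjusted classify-and-count (ACC), sketched below on synthetic data: a noisy proxy classifier for the unobserved sensitive attribute is corrected using its known true- and false-positive rates. Whether the paper uses ACC specifically is not stated in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)
true_prev = 0.30             # true (unobserved) group prevalence
tpr, fpr = 0.85, 0.10        # proxy classifier rates from validation data

y = rng.random(10_000) < true_prev                 # hidden group membership
pred = np.where(y, rng.random(10_000) < tpr,       # noisy proxy predictions
                   rng.random(10_000) < fpr)

cc = pred.mean()                                   # naive classify-and-count
acc = (cc - fpr) / (tpr - fpr)                     # ACC prevalence correction
print(f"naive: {cc:.3f}  ACC: {acc:.3f}  true: {true_prev}")
```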
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Towards Explainable Student Group Collaboration Assessment Models Using Temporal Representations of Individual Student Roles [12.945344702592557]
We propose using simple temporal-CNN deep-learning models to assess student group collaboration.
We check the applicability of dynamically changing feature representations for student group collaboration assessment.
We also use Grad-CAM visualizations to better understand and interpret the important temporal indices that led to the deep-learning model's decision.
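A minimal sketch of the model family named here, a 1-D temporal CNN over per-timestep feature vectors; the feature count, sequence length, and class count are invented.

```python
import torch
import torch.nn as nn

# Input: (batch, features, time) sequences of per-timestep role/behavior
# features; 8 features, 60 timesteps, and 4 quality classes are assumptions.
model = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # pool over the temporal axis
    nn.Flatten(),
    nn.Linear(32, 4),          # collaboration-quality classes
)

x = torch.randn(2, 8, 60)      # two synthetic sequences
print(model(x).shape)          # torch.Size([2, 4])
```

Grad-CAM would then attribute a predicted class back to temporal indices by weighting the last convolutional layer's activations with the gradients of that class score.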
arXiv Detail & Related papers (2021-06-17T16:00:08Z)
- Hallmarks of Human-Machine Collaboration: A framework for assessment in the DARPA Communicating with Computers Program [0.851218146348961]
We describe a framework for evaluating systems engaged in open-ended complex scenarios.
We identify the Key Properties that must be exhibited by successful systems.
Hallmarks are intended to serve as goals in guiding research direction.
arXiv Detail & Related papers (2021-02-09T17:13:53Z)