Measuring Inclusion in Interaction: Inclusion Analytics for Human-AI Collaborative Learning
- URL: http://arxiv.org/abs/2602.09269v1
- Date: Mon, 09 Feb 2026 23:07:15 GMT
- Title: Measuring Inclusion in Interaction: Inclusion Analytics for Human-AI Collaborative Learning
- Authors: Jaeyoon Choi, Nia Nixon
- Abstract summary: We introduce inclusion analytics, a discourse-based framework for examining inclusion as a dynamic, interactional process in problem solving. We demonstrate how these constructs can be made analytically visible using scalable, interaction-level measures. This work represents an initial step toward process-oriented approaches to measuring inclusion in human-AI collaborative learning environments.
- Score: 1.0742675209112622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inclusion, equity, and access are widely valued in AI and education, yet are often assessed through coarse sample descriptors or post-hoc self-reports that miss how inclusion is shaped moment by moment in collaborative problem solving (CPS). In this proof-of-concept paper, we introduce inclusion analytics, a discourse-based framework for examining inclusion as a dynamic, interactional process in CPS. We conceptualize inclusion along three complementary dimensions -- participation equity, affective climate, and epistemic equity -- and demonstrate how these constructs can be made analytically visible using scalable, interaction-level measures. Using both simulated conversations and empirical data from human-AI teaming experiments, we illustrate how inclusion analytics can surface patterns of participation, relational dynamics, and idea uptake that remain invisible to aggregate or post-hoc evaluations. This work represents an initial step toward process-oriented approaches to measuring inclusion in human-AI collaborative learning environments.
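The abstract does not specify how its participation-equity measure is operationalized. As a minimal illustration only, one plausible interaction-level measure is the normalized Shannon entropy of turn counts across team members: 1.0 when every member (human or AI) takes an equal share of turns, approaching 0 when one member dominates. The metric choice, function name, and simulated turn sequence below are assumptions for illustration, not the paper's actual method.

```python
import math
from collections import Counter

def participation_equity(speakers):
    """Normalized Shannon entropy of turn counts.

    Returns 1.0 for perfectly even participation and values near 0
    when a single speaker dominates the conversation.
    """
    counts = Counter(speakers)
    n = len(counts)
    if n <= 1:
        return 0.0  # a single speaker has no equity to measure
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(n)  # normalize by the maximum possible entropy

# Hypothetical turn sequence from a three-member human-AI team,
# where human "A" dominates the discussion.
turns = ["A", "B", "A", "AI", "A", "A", "B", "A"]
print(round(participation_equity(turns), 3))  # → 0.819
```

A moment-by-moment view, as the paper advocates, could come from applying this over a sliding window of turns rather than the whole transcript.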
Related papers
- Using Large Language Models to Detect Socially Shared Regulation of Collaborative Learning [15.567266973412815]
We extend predictive models to automatically detect socially shared regulation of learning behaviors using embedding-based approaches. We leverage large language models (LLMs) as summarization tools to generate task-aware representations of student dialogue aligned with system logs. Results show that text-only embeddings often achieve stronger performance in detecting SSRL behaviors related to enactment or group dynamics.
arXiv Detail & Related papers (2026-01-08T00:30:46Z) - LLM-MC-Affect: LLM-Based Monte Carlo Modeling of Affective Trajectories and Latent Ambiguity for Interpersonal Dynamic Insight [1.1119672724275114]
Emotional coordination is a core property of human interaction that shapes how meaning is constructed in real time. We introduce a probabilistic framework that characterizes emotion not as a static label, but as a continuous latent probability distribution. This work establishes a scalable and deployable pathway for understanding interpersonal dynamics, offering a generalizable solution.
arXiv Detail & Related papers (2026-01-07T06:50:41Z) - Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media [52.313084466769375]
We propose a novel framework for assessing cognitive-behavioral fixation by analyzing users' multimodal social media engagement patterns. Experiments on existing benchmarks and a newly curated multimodal dataset demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2025-09-05T05:50:00Z) - Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms the representative models regarding objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z) - Towards interactive evaluations for interaction harms in human-AI systems [8.989911701384788]
We propose a shift towards evaluation based on interactional ethics, which focuses on interaction harms. First, we discuss the limitations of current evaluation methods, which (1) are static, (2) assume a universal user experience, and (3) have limited construct validity. We present practical principles for designing interactive evaluations. These include ecologically valid interaction scenarios, human impact metrics, and diverse human participation approaches.
arXiv Detail & Related papers (2024-05-17T08:49:34Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [51.26815896167173]
We present a comprehensive tertiary analysis of PAMI reviews along three complementary dimensions. Our analyses reveal distinctive organizational patterns as well as persistent gaps in current review practices. Finally, our evaluation of state-of-the-art AI-generated reviews indicates encouraging advances in coherence and organization.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - Harnessing Transparent Learning Analytics for Individualized Support through Auto-detection of Engagement in Face-to-Face Collaborative Learning [3.0184625301151833]
This paper proposes a transparent approach to automatically detect each student's individual engagement in the process of collaboration.
The proposed approach can reflect student's individual engagement and can be used as an indicator to distinguish students with different collaborative learning challenges.
arXiv Detail & Related papers (2024-01-03T12:20:28Z) - Predicting the long-term collective behaviour of fish pairs with deep learning [52.83927369492564]
This study introduces a deep learning model to assess social interactions in the fish species Hemigrammus rhodostomus.
We compare the results of our deep learning approach to experiments and to the results of a state-of-the-art analytical model.
We demonstrate that machine learning models of social interactions can compete directly with their analytical counterparts on subtle experimental observables.
arXiv Detail & Related papers (2023-02-14T05:25:03Z) - An Artificial Intelligence driven Learning Analytics Method to Examine the Collaborative Problem solving Process from a Complex Adaptive Systems Perspective [0.7450115015150832]
Collaborative problem solving (CPS) enables student groups to complete learning tasks, construct knowledge, and solve problems.
Previous research has argued for the importance of examining the complexity of CPS, including its multimodality, dynamics, and synergy.
This research collected multimodal process and performance data to understand the nature of CPS in online interaction settings.
arXiv Detail & Related papers (2022-10-28T11:13:05Z) - Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.