Human-AI Collaborative Inductive Thematic Analysis: AI Guided Analysis and Human Interpretive Authority
- URL: http://arxiv.org/abs/2601.11850v1
- Date: Sat, 17 Jan 2026 00:38:36 GMT
- Title: Human-AI Collaborative Inductive Thematic Analysis: AI Guided Analysis and Human Interpretive Authority
- Authors: Matthew Nyaaba, Min SungEun, Mary Abiswin Apam, Kwame Owoahene Acheampong, Emmanuel Dwamena, Xiaoming Zhai
- Abstract summary: This study examines how researchers interact with an Inductive Thematic Analysis GPT (ITA-GPT). ITA-GPT is a purpose-built AI tool designed to support inductive thematic analysis. The study focuses on analytic process rather than substantive findings.
- Score: 0.750605871308538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing use of generative artificial intelligence (GenAI) in qualitative research raises important questions about analytic practice and interpretive authority. This study examines how researchers interact with an Inductive Thematic Analysis GPT (ITA-GPT), a purpose-built AI tool designed to support inductive thematic analysis through structured, semi-automated prompts aligned with reflexive thematic analysis and verbatim coding principles. Guided by a Human-Artificial Intelligence Collaborative Inductive Thematic Analysis (HACITA) framework, the study focuses on analytic process rather than substantive findings. Three experienced qualitative researchers conducted ITA-GPT-assisted analyses of interview transcripts from education research in the Ghanaian teacher education context. The tool supported familiarization, verbatim in vivo coding, gerund-based descriptive coding, and theme development, while enforcing trace-to-text integrity, coverage checks, and auditability. Data sources included interaction logs, AI-generated tables, researcher revisions, deletions, insertions, comments, and reflexive memos. Findings show that ITA-GPT functioned as a procedural scaffold that structured the analytic workflow and enhanced transparency. However, interpretive authority remained with the human researchers, who exercised judgment through recurrent analytic actions including modification, deletion, rejection, insertion, and commenting. The study demonstrates how inductive thematic analysis is enacted through responsible human-AI collaboration.
Related papers
- The Story is Not the Science: Execution-Grounded Evaluation of Mechanistic Interpretability Research [56.80927148740585]
We address the challenges of scalability and rigor by flipping the dynamic and developing AI agents as research evaluators. We use mechanistic interpretability research as a testbed, build standardized research output, and develop MechEvalAgent. Our work demonstrates the potential of AI agents to transform research evaluation and pave the way for rigorous scientific practices.
arXiv Detail & Related papers (2026-02-05T19:00:02Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - Distinguishing AI-Generated and Human-Written Text Through Psycholinguistic Analysis [0.0]
This research specifically maps 31 distinct stylometric features to cognitive processes such as lexical retrieval, discourse planning, cognitive load management, and metacognitive self-monitoring. This framework contributes to the development of reliable tools aimed at preserving academic integrity in the era of generative AI.
arXiv Detail & Related papers (2025-05-03T12:06:53Z) - Methodological Foundations for AI-Driven Survey Question Generation [41.94295877935867]
This paper presents a methodological framework for using generative AI in educational survey research. We explore how Large Language Models can generate adaptive, context-aware survey questions. We examine ethical issues such as bias, privacy, and transparency.
arXiv Detail & Related papers (2025-05-02T09:50:34Z) - Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - STRICTA: Structured Reasoning in Critical Text Assessment for Peer Review and Beyond [68.47402386668846]
We introduce Structured Reasoning In Critical Text Assessment (STRICTA) to model text assessment as an explicit, step-wise reasoning process. STRICTA breaks down the assessment into a graph of interconnected reasoning steps drawing on causality theory. We apply STRICTA to a dataset of over 4000 reasoning steps from roughly 40 biomedical experts on more than 20 papers.
arXiv Detail & Related papers (2024-09-09T06:55:37Z) - Generative AI Tools in Academic Research: Applications and Implications for Qualitative and Quantitative Research Methodologies [0.0]
This study examines the impact of Generative Artificial Intelligence (GenAI) on academic research, focusing on its application to qualitative and quantitative data analysis.
As GenAI tools evolve rapidly, they offer new possibilities for enhancing research productivity and democratising complex analytical processes.
Their integration into academic practice raises significant questions regarding research integrity and security, authorship, and the changing nature of scholarly work.
arXiv Detail & Related papers (2024-08-13T13:10:03Z) - Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing [25.572926673827165]
This case study highlights the importance of prompt design, output analysis, and recognizing the AI's limitations to ensure responsible and effective AI integration in scholarly work.
The paper contributes to the field of Human-Computer Interaction by exploring effective prompt strategies and providing a comparative analysis of Gen AI models.
arXiv Detail & Related papers (2024-04-23T19:06:39Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [51.26815896167173]
We present a comprehensive tertiary analysis of PAMI reviews along three complementary dimensions. Our analyses reveal distinctive organizational patterns as well as persistent gaps in current review practices. Finally, our evaluation of state-of-the-art AI-generated reviews indicates encouraging advances in coherence and organization.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - Characterising Research Areas in the field of AI [68.8204255655161]
We identified the main conceptual themes by performing clustering analysis on the co-occurrence network of topics.
The results highlight the growing academic interest in research themes like deep learning, machine learning, and internet of things.
arXiv Detail & Related papers (2022-05-26T16:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.