Optimizing Generative AI's Accuracy and Transparency in Inductive Thematic Analysis: A Human-AI Comparison
- URL: http://arxiv.org/abs/2503.16485v2
- Date: Mon, 24 Mar 2025 01:57:01 GMT
- Title: Optimizing Generative AI's Accuracy and Transparency in Inductive Thematic Analysis: A Human-AI Comparison
- Authors: Matthew Nyaaba, Min SungEun, Mary Abiswin Apam, Kwame Owoahene Acheampong, Emmanuel Dwamena,
- Abstract summary: This study highlights the transparency and accuracy of GenAI's inductive thematic analysis. It was developed using the GPT-4 Turbo API integrated within a stepwise prompt-based Python script.
- Score: 0.4766245315836212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study highlights the transparency and accuracy of GenAI's inductive thematic analysis, particularly using the GPT-4 Turbo API integrated within a stepwise prompt-based Python script. This approach ensured a traceable and systematic coding process, generating codes with supporting statements and page references, which enhanced validation and reproducibility. The results indicate that GenAI performs inductive coding in a manner closely resembling human coders, effectively categorizing themes at a level comparable to that of the average human coder. However, in interpretation, GenAI extends beyond human coders by situating themes within a broader conceptual context, providing a more generalized and abstract perspective.
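For a concrete picture of the "stepwise prompt-based Python script" idea, a minimal sketch is shown below, assuming the `openai` Python client; the prompts, model name, JSON output format, and placeholder transcript pages are illustrative assumptions, not the authors' published implementation.

```python
# Hedged sketch: stepwise prompt-based inductive coding with the OpenAI API.
# Prompts, model name, and JSON schema are illustrative assumptions, not the
# authors' published script.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CODING_PROMPT = (
    "You are performing inductive thematic coding. For the page of interview "
    "text below, return only a JSON list of objects with keys 'code', "
    "'supporting_statement', and 'page'. Use verbatim quotes as support.\n\n"
    "Page {page_num}:\n{page_text}"
)

def code_page(page_text: str, page_num: int) -> list[dict]:
    """Step 1: generate open codes for one page, each with a traceable quote."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user",
                   "content": CODING_PROMPT.format(page_num=page_num,
                                                   page_text=page_text)}],
        temperature=0,
    )
    # Assumes the model returns bare JSON as instructed.
    return json.loads(response.choices[0].message.content)

def group_into_themes(all_codes: list[dict]) -> str:
    """Step 2: cluster the open codes into named candidate themes."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user",
                   "content": "Group these inductive codes into themes and "
                              "name each theme:\n" + json.dumps(all_codes)}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    pages = ["...interview transcript page 1...", "...page 2..."]  # placeholders
    codes = [c for i, p in enumerate(pages, start=1) for c in code_page(p, i)]
    print(group_into_themes(codes))
```

Carrying the page number and a verbatim supporting quote with every code is what makes the output traceable back to the source text for validation.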
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
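As a loose illustration of the sensitivity-analysis idea (not the paper's actual framework), one can measure how strongly a safety classifier's output reacts to a prompt's embedding; the toy classifier, random embeddings, and scoring below are placeholder assumptions.

```python
# Hedged illustration of input-sensitivity analysis for flagging suspicious
# prompts. The tiny classifier and random embeddings are placeholders; the
# cited paper develops a broader mathematical framework.
import torch
import torch.nn as nn

torch.manual_seed(0)
embed = nn.Embedding(1000, 32)                                        # toy vocabulary
clf = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))  # safe/unsafe head

def prompt_sensitivity(token_ids: list[int]) -> float:
    """Gradient norm of the 'safe' logit w.r.t. the mean prompt embedding."""
    x = embed(torch.tensor(token_ids)).mean(dim=0, keepdim=True).detach()
    x.requires_grad_(True)
    safe_logit = clf(x)[0, 0]
    (grad,) = torch.autograd.grad(safe_logit, x)
    return grad.norm().item()

score = prompt_sensitivity([5, 42, 7, 901])
print(f"sensitivity score: {score:.4f}")  # unusually large scores might warrant review
```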
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Detecting AI-Generated Text in Educational Content: Leveraging Machine Learning and Explainable AI for Academic Integrity [1.1137087573421256]
This study seeks to enhance academic integrity by providing tools to detect AI-generated content in student work. We evaluate various machine learning (ML) and deep learning (DL) algorithms on the CyberHumanAI dataset. Our proposed model achieved approximately 77.5% accuracy, compared to GPTZero's 48.5%, when tasked with classifying Pure AI, Pure Human, and mixed classes.
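A minimal baseline for the same three-way task (pure AI, pure human, mixed) might look like the sketch below; the three-sentence placeholder dataset and the pipeline are assumptions, not the study's models or the CyberHumanAI data.

```python
# Hedged baseline for three-way AI/human/mixed text classification.
# The tiny "dataset" is a placeholder; the study compares several ML/DL models
# on the CyberHumanAI dataset, not this exact pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The mitochondria is the powerhouse of the cell, as we learned in lab.",
    "In conclusion, the aforementioned considerations collectively underscore the findings.",
    "I think the experiment failed because, as the model suggests, entropy rises.",
]
labels = ["pure_human", "pure_ai", "mixed"]  # illustrative labels only

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
pipeline.fit(texts, labels)
print(pipeline.predict(["This essay was drafted with some assistance."]))
```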
arXiv Detail & Related papers (2025-01-06T18:34:20Z) - Found in Translation: semantic approaches for enhancing AI interpretability in face verification [0.4222205362654437]
This study extends previous work by integrating semantic concepts into XAI frameworks to bridge the comprehension gap between model outputs and human understanding. We propose a novel approach combining global and local explanations, using semantic features defined by user-selected facial landmarks. Results indicate that our semantic-based approach, particularly the most detailed set, offers a more nuanced understanding of model decisions than traditional methods.
arXiv Detail & Related papers (2025-01-06T08:34:53Z) - Prompts Matter: Comparing ML/GAI Approaches for Generating Inductive Qualitative Coding Results [39.96179530555875]
Generative AI (GAI) tools rely on instructions to work, and how they are instructed may matter.
This study applied two known and two theory-informed novel approaches to an online community dataset and evaluated the resulting coding results.
Our findings show significant discrepancies between ML/GAI approaches and demonstrate the advantage of our approaches.
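The novel prompting approaches themselves are not reproduced here; the evaluation side can be sketched, for example by quantifying agreement between two coding passes over the same units with Cohen's kappa (the code labels below are invented for illustration).

```python
# Hedged sketch: comparing two coding passes over the same units with Cohen's
# kappa. Labels are invented; the study's own prompting approaches and
# evaluation are described in the paper itself.
from sklearn.metrics import cohen_kappa_score

human_codes = ["support", "conflict", "support", "resource", "conflict", "support"]
gai_codes   = ["support", "conflict", "resource", "resource", "conflict", "support"]

kappa = cohen_kappa_score(human_codes, gai_codes)
print(f"Cohen's kappa between human and GAI coding: {kappa:.2f}")
```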
arXiv Detail & Related papers (2024-11-10T00:23:55Z) - Advancing GenAI Assisted Programming--A Comparative Study on Prompt Efficiency and Code Quality Between GPT-4 and GLM-4 [5.986648786111719]
This study explores the best practices for utilizing GenAI as a programming tool.
By evaluating prompting strategies at different levels of complexity, we find that the simplest and most straightforward prompting strategy yields the best code generation results.
Our results reveal that while GPT-4 marginally outperforms GLM-4, the difference is minimal for average users.
arXiv Detail & Related papers (2024-02-20T07:47:39Z) - Assessing the Promise and Pitfalls of ChatGPT for Automated Code Generation [2.0400340435492272]
This paper presents a comprehensive evaluation of the code generation capabilities of ChatGPT, a prominent large language model.
A dataset of 131 code-generation prompts across 5 categories was curated to enable robust analysis.
Code solutions were generated by both ChatGPT and humans for all prompts, resulting in 262 code samples.
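A compact way to picture such an evaluation is a pass/fail harness that executes each generated solution against a small test; the sample solution and test below are placeholders, not items from the paper's 131-prompt dataset.

```python
# Hedged sketch of a pass/fail harness for generated code samples.
# The sample solution and test are placeholders, not items from the study.
def run_sample(solution_src: str, test_src: str) -> bool:
    """Execute a code sample and its test in a fresh namespace; pass if no error."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)
        exec(test_src, namespace)
        return True
    except Exception:
        return False

solution = "def add(a, b):\n    return a + b\n"
test = "assert add(2, 3) == 5\n"
print("pass" if run_sample(solution, test) else "fail")
```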
arXiv Detail & Related papers (2023-11-05T12:56:40Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization [76.57699934689468]
We propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side to enhance the performance of neural models.
To overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens.
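Roughly, token-level retrieval means looking up, at each decoding step, the stored hidden states nearest to the current one and reusing the tokens they produced; the toy datastore of random vectors below illustrates only the lookup, not Tram's full mechanism.

```python
# Hedged toy of token-level retrieval: for a query hidden state, find the
# nearest stored (hidden state -> token) pairs. Vectors are random placeholders;
# Tram integrates this lookup into a neural decoder.
import numpy as np

rng = np.random.default_rng(0)
datastore_keys = rng.normal(size=(100, 64))          # stored decoder hidden states
datastore_tokens = rng.integers(0, 5000, size=100)   # tokens they produced

def retrieve(query: np.ndarray, k: int = 3) -> np.ndarray:
    """Return the k tokens whose stored states are closest to the query."""
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    return datastore_tokens[np.argsort(dists)[:k]]

query_state = rng.normal(size=64)
print("retrieved candidate tokens:", retrieve(query_state))
```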
arXiv Detail & Related papers (2023-05-18T16:02:04Z) - Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Since the production process imposes the user's posture as a constraint on the AIGC model, it makes the generated content more aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z) - Hierarchical Sketch Induction for Paraphrase Generation [79.87892048285819]
We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings.
We use HRQ-VAE to encode the syntactic form of an input sentence as a path through the hierarchy, allowing us to more easily predict syntactic sketches at test time.
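As a rough picture of hierarchical quantization (not HRQ-VAE itself), a vector can be encoded as a path of codebook indices where each level quantizes the residual left by the previous level; the random codebooks below stand in for learned ones.

```python
# Hedged sketch of hierarchical residual quantization: each level quantizes the
# residual left by the previous level, yielding a coarse-to-fine path of codes.
# Random codebooks stand in for HRQ-VAE's learned ones.
import numpy as np

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 16)) for _ in range(3)]  # 3 levels, 8 codes each

def encode_path(x: np.ndarray) -> list[int]:
    """Return the codebook index chosen at each level of the hierarchy."""
    path, residual = [], x.copy()
    for book in codebooks:
        idx = int(np.argmin(np.linalg.norm(book - residual, axis=1)))
        path.append(idx)
        residual = residual - book[idx]
    return path

x = rng.normal(size=16)
print("hierarchical code path:", encode_path(x))
```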
arXiv Detail & Related papers (2022-03-07T15:28:36Z) - Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
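A toy version of the training idea, with an InfoNCE-style contrastive bound standing in for the paper's GMI estimator and a single linear "GNN layer" over a random graph, might look like the following sketch.

```python
# Hedged sketch: train a one-layer graph encoder by maximizing a contrastive
# (InfoNCE-style) bound on the mutual information between input node features
# and output embeddings. GMI's actual estimator differs; graph and loss here
# are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_nodes, in_dim, out_dim = 6, 8, 4
adj = torch.eye(n_nodes) + torch.rand(n_nodes, n_nodes).round()  # toy adjacency
feats = torch.rand(n_nodes, in_dim)

encoder = nn.Linear(in_dim, out_dim)   # one "GNN layer": encoder(A @ X)
project = nn.Linear(in_dim, out_dim)   # maps raw inputs into the same space

optim = torch.optim.Adam(list(encoder.parameters()) + list(project.parameters()), lr=0.01)
for step in range(100):
    h = encoder(adj @ feats)           # node embeddings from graph structure
    z = project(feats)                 # projected input features
    logits = F.normalize(h, dim=1) @ F.normalize(z, dim=1).T
    # each node's embedding should match its own input features, not others'
    loss = F.cross_entropy(logits / 0.1, torch.arange(n_nodes))
    optim.zero_grad()
    loss.backward()
    optim.step()
print(f"final contrastive loss: {loss.item():.3f}")
```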
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.