From G-Factor to A-Factor: Establishing a Psychometric Framework for AI Literacy
- URL: http://arxiv.org/abs/2503.16517v1
- Date: Sun, 16 Mar 2025 14:51:48 GMT
- Title: From G-Factor to A-Factor: Establishing a Psychometric Framework for AI Literacy
- Authors: Ning Li, Wenming Deng, Jiatan Chen
- Abstract summary: We establish AI literacy as a coherent, measurable construct with significant implications for education, workforce development, and social equity. Study 1 revealed a dominant latent factor - termed the "A-factor" - that accounts for 44.16% of variance across diverse AI interaction tasks. Study 2 refined the measurement tool by examining four key dimensions of AI literacy. Regression analyses identified several significant predictors of AI literacy, including cognitive abilities (IQ), educational background, prior AI experience, and training history.
- Score: 1.5031024722977635
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This research addresses the growing need to measure and understand AI literacy in the context of generative AI technologies. Through three sequential studies involving a total of 517 participants, we establish AI literacy as a coherent, measurable construct with significant implications for education, workforce development, and social equity. Study 1 (N=85) revealed a dominant latent factor - termed the "A-factor" - that accounts for 44.16% of variance across diverse AI interaction tasks. Study 2 (N=286) refined the measurement tool by examining four key dimensions of AI literacy: communication effectiveness, creative idea generation, content evaluation, and step-by-step collaboration, resulting in an 18-item assessment battery. Study 3 (N=146) validated this instrument in a controlled laboratory setting, demonstrating its predictive validity for real-world task performance. Results indicate that AI literacy significantly predicts performance on complex, language-based creative tasks but shows domain specificity in its predictive power. Additionally, regression analyses identified several significant predictors of AI literacy, including cognitive abilities (IQ), educational background, prior AI experience, and training history. The multidimensional nature of AI literacy and its distinct factor structure provide evidence that effective human-AI collaboration requires a combination of general and specialized abilities. These findings contribute to theoretical frameworks of human-AI collaboration while offering practical guidance for developing targeted educational interventions to promote equitable access to the benefits of generative AI technologies.
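The 44.16% figure for the "A-factor" is the share of total score variance attributed to a single dominant latent factor across the Study 1 tasks. As a rough illustration only (simulated data and a PCA-style eigenvalue shortcut, not the authors' analysis code), the Python sketch below shows how that kind of figure can be read off the correlation matrix of task scores.

```python
# Illustrative sketch, not the paper's analysis: estimate the share of
# variance captured by a dominant latent factor from task-score correlations.
# The data are simulated; the paper reports 44.16% for its "A-factor".
import numpy as np

rng = np.random.default_rng(0)

# Simulate 85 participants x 10 AI-interaction task scores driven by one
# common ability plus task-specific noise (a single-factor toy model).
n_people, n_tasks = 85, 10
ability = rng.normal(size=(n_people, 1))              # latent "A-factor"
loadings = rng.uniform(0.5, 0.9, size=(1, n_tasks))   # hypothetical loadings
scores = ability @ loadings + rng.normal(scale=0.7, size=(n_people, n_tasks))

# Eigen-decompose the correlation matrix and take the first eigenvalue's
# share of the total (the trace equals the number of tasks).
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]              # descending order
explained = eigvals[0] / eigvals.sum()
print(f"Variance explained by the first factor: {explained:.2%}")
```

The eigenvalue route is a principal-components shortcut; a published psychometric analysis would more likely use a dedicated exploratory factor analysis routine, but the "percentage of variance explained by the first factor" reads off in the same way.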
Related papers
- Synergizing Self-Regulation and Artificial-Intelligence Literacy Towards Future Human-AI Integrative Learning [92.34299949916134]
Self-regulated learning (SRL) and Artificial-Intelligence (AI) literacy are becoming key competencies for successful human-AI interactive learning.
This study analyzed data from 1,704 Chinese undergraduates using clustering methods to uncover four learner groups.
arXiv Detail & Related papers (2025-03-31T13:41:21Z)
- AI Literacy in K-12 and Higher Education in the Wake of Generative AI: An Integrative Review [3.5297361401370044]
There is little consensus among researchers and practitioners on how to discuss and design AI literacy interventions. This paper applies an integrative review method to examine empirical and theoretical AI literacy studies published since 2020.
arXiv Detail & Related papers (2025-02-27T23:32:03Z)
- Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the question of how to measure human contribution in AI-assisted content generation. By calculating the mutual information between human input and AI-assisted output relative to the self-information of the AI-assisted output, we quantify the proportional information contribution of humans in content generation (a toy numerical sketch of this ratio appears after this list).
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
- OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI [73.75520820608232]
We introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities.
These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage.
Our evaluations reveal that even advanced models like GPT-4o achieve only 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration.
arXiv Detail & Related papers (2024-06-18T16:20:53Z)
- Responsible AI: Portraits with Intelligent Bibliometrics [30.51687434548628]
This study defined responsible AI and identified its core principles.
Empirically, this study investigated 17,799 research articles contributed by the AI community since 2015.
An analysis of a core cohort comprising 380 articles from multiple disciplines captures the most recent advancements in responsible AI.
arXiv Detail & Related papers (2024-05-05T08:40:22Z)
- Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing [25.572926673827165]
This case study highlights the importance of prompt design, output analysis, and recognizing the AI's limitations to ensure responsible and effective AI integration in scholarly work.
The paper contributes to the field of Human-Computer Interaction by exploring effective prompt strategies and providing a comparative analysis of Gen AI models.
arXiv Detail & Related papers (2024-04-23T19:06:39Z)
- Untangling Critical Interaction with AI in Students Written Assessment [2.8078480738404]
A key challenge lies in ensuring that humans are equipped with the required critical thinking and AI literacy skills.
This paper provides a first step toward conceptualizing the notion of critical learner interaction with AI.
Using both theoretical models and empirical data, our preliminary findings suggest a general lack of Deep interaction with AI during the writing process.
arXiv Detail & Related papers (2024-04-10T12:12:50Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- MAILS -- Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies [6.368014180870025]
The questionnaire should be modular (i.e., including different facets that can be used independently of each other) to be flexibly applicable in professional life.
We derived 60 items to represent different facets of AI Literacy according to Ng and colleagues' conceptualisation of AI literacy.
An additional 12 items represent psychological competencies such as problem solving, learning, and emotion regulation with regard to AI.
arXiv Detail & Related papers (2023-02-18T12:35:55Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
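The mutual-information measure summarised in the "Measuring Human Contribution in AI-Assisted Content Generation" entry above amounts to the ratio I(X;Y)/H(Y): mutual information between human input X and AI-assisted output Y, relative to the output's self-information. The toy Python sketch below computes that ratio for an invented four-cell joint distribution; it only illustrates the definition, since the original work estimates these quantities from language-model token probabilities rather than a small table.

```python
# Toy sketch of the information-ratio idea: human contribution ~ I(X;Y)/H(Y).
# The joint distribution below is invented for illustration only.
import math

# p_xy[(x, y)] = joint probability of (human input x, AI-assisted output y)
p_xy = {
    ("promptA", "out1"): 0.30, ("promptA", "out2"): 0.10,
    ("promptB", "out1"): 0.05, ("promptB", "out2"): 0.55,
}

# Marginal distributions of inputs and outputs.
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

# Mutual information I(X;Y) and output entropy H(Y), both in bits.
mi = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items())
h_y = -sum(p * math.log2(p) for p in p_y.values())
print(f"I(X;Y) = {mi:.3f} bits, H(Y) = {h_y:.3f} bits, ratio = {mi / h_y:.2f}")
```

A ratio near 1 would mean the output is almost fully determined by the human input; a ratio near 0 would mean the output carries little information about it.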