AI Literacy, Safety Awareness, and STEM Career Aspirations of Australian Secondary Students: Evaluating the Impact of Workshop Interventions
- URL: http://arxiv.org/abs/2601.22486v1
- Date: Fri, 30 Jan 2026 02:55:53 GMT
- Title: AI Literacy, Safety Awareness, and STEM Career Aspirations of Australian Secondary Students: Evaluating the Impact of Workshop Interventions
- Authors: Christian Bergh, Alexandra Vassar, Natasha Banks, Jessica Xu, Jake Renzella
- Abstract summary: Deepfakes and other forms of synthetic media pose growing safety risks for adolescents. This study evaluates the impact of Day of AI Australia's workshop-based intervention on Australian secondary students.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes and other forms of synthetic media pose growing safety risks for adolescents, yet evidence on students' exposure and related behaviours remains limited. This study evaluates the impact of Day of AI Australia's workshop-based intervention designed to improve AI literacy and conceptual understanding among Australian secondary students (Years 7-10). Using a mixed-methods approach with pre- and post-intervention surveys (N=205 pre; N=163 post), we analyse changes in students' ability to identify AI in everyday tools, their understanding of AI ethics, training, and safety, and their interest in STEM-related careers. Baseline data revealed notable synthetic media risks: 82.4% of students reported having seen deepfakes, 18.5% reported sharing them, and 7.3% reported creating them. Results show higher self-reported AI knowledge and confidence after the intervention, alongside improved recognition of AI in widely used platforms such as Netflix, Spotify, and TikTok. This pattern suggests a shift from seeing these tools as merely "algorithm-based" to recognising them as AI-driven systems. Students also reported increased interest in STEM careers post-workshop; however, effect sizes were small, indicating that sustained approaches beyond one-off workshops may be needed to influence longer-term aspirations. Overall, the findings support scalable AI literacy programs that pair foundational AI concepts with an explicit emphasis on synthetic media safety.
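The abstract reports increased STEM-career interest with small effect sizes. As an illustration of how such a pre/post comparison is typically quantified, the sketch below computes Cohen's d with a pooled standard deviation; the Likert-style scores are hypothetical and are not the study's data.

```python
import math

def cohens_d(pre, post):
    """Cohen's d for two independent samples (pre- vs post-intervention),
    using the pooled standard deviation. A positive d means the post
    group's mean is higher."""
    n1, n2 = len(pre), len(post)
    m1 = sum(pre) / n1
    m2 = sum(post) / n2
    # Sample variances (Bessel's correction)
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Hypothetical 5-point Likert responses (NOT the study's data)
pre_scores = [2, 3, 3, 2, 4, 3, 2, 3]
post_scores = [3, 3, 4, 3, 4, 3, 3, 4]
print(f"Cohen's d = {cohens_d(pre_scores, post_scores):.2f}")
```

Under the usual conventions, |d| around 0.2 is read as a small effect, which is the pattern the abstract describes for career aspirations.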
Related papers
- Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems
This paper transitions from literature review to practical countermeasures. We report on advances in AI-generated content through Large Language Models (LLMs) and multimodal systems. We discuss mitigation strategies including LLM-based detection, inoculation approaches, and the dual-use nature of generative AI.
arXiv Detail & Related papers (2026-01-29T16:42:22Z) - Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study
This study analyzed 315 student-AI conversations during a brief, quiz-based scenario across various STEM courses. Students exhibited overall low reliance on AI, and many could not effectively use AI for learning. Certain behavioral metrics strongly predicted AI reliance, highlighting potential behavioral mechanisms to explain AI adoption.
arXiv Detail & Related papers (2025-08-27T20:00:27Z) - AI Literacy as a Key Driver of User Experience in AI-Powered Assessment: Insights from Socratic Mind
This study examines how students' AI literacy and prior exposure to AI technologies shape their perceptions of Socratic Mind. Data from 309 undergraduates in Computer Science and Business courses were collected.
arXiv Detail & Related papers (2025-07-29T10:11:24Z) - Opting Out of Generative AI: a Behavioral Experiment on the Role of Education in Perplexity AI Avoidance
This study investigates whether differences in formal education are associated with CAI avoidance. Findings underscore education's central role in shaping AI adoption and the role of self-selection biases in AI-related research.
arXiv Detail & Related papers (2025-07-10T16:05:11Z) - Report on NSF Workshop on Science of Safe AI
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - Social Scientists on the Role of AI in Research
We present a community-centric study drawing on 284 survey responses and 15 semi-structured interviews with social scientists. We find that the use of AI in research settings has increased significantly among social scientists, in step with the widespread popularity of generative AI (genAI). Ethical concerns, particularly around automation bias, deskilling, research misconduct, complex interpretability, and representational harm, are raised in relation to genAI.
arXiv Detail & Related papers (2025-06-12T19:55:36Z) - Computational Safety for Generative AI: A Signal Processing Perspective
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior.<n>Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Generative AI in Education: A Study of Educators' Awareness, Sentiments, and Influencing Factors [2.217351976766501]
This study delves into university instructors' experiences and attitudes toward AI language models.
We find no correlation between teaching style and attitude toward generative AI.
While CS educators show far more confidence in their technical understanding of generative AI tools, they show no more confidence in their ability to detect AI-generated work.
arXiv Detail & Related papers (2024-03-22T19:21:29Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines how generative artworks can help bridge these differing stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.