"Everyone's using it, but no one is allowed to talk about it": College Students' Experiences Navigating the Higher Education Environment in a Generative AI World
- URL: http://arxiv.org/abs/2602.17720v1
- Date: Tue, 17 Feb 2026 19:00:28 GMT
- Title: "Everyone's using it, but no one is allowed to talk about it": College Students' Experiences Navigating the Higher Education Environment in a Generative AI World
- Authors: Yue Fu, Yifan Lin, Yessica Wang, Sarah Tran, Alexis Hiniker
- Abstract summary: Findings show that institutional pressure factors like deadlines, exam cycles, and grading lead students to engage with AI even when they think it undermines their learning. Current institutional AI policies are perceived as generic, inconsistent, and confusing, resulting in routine noncompliance. Students develop value-based self-regulation strategies, but environmental pressures create a gap between students' intentions and their behaviors.
- Score: 13.109430455513882
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Higher education students are increasingly using generative AI in their academic work. However, existing institutional practices have not yet adapted to this shift. Through semi-structured interviews with 23 college students, our study examines the environmental and social factors that influence students' use of AI. Findings show that institutional pressure factors like deadlines, exam cycles, and grading lead students to engage with AI even when they think it undermines their learning. Social influences, particularly peer micro-communities, establish de-facto AI norms regardless of official AI policies. Campus-wide "AI shame" is prevalent, often pushing AI use underground. Current institutional AI policies are perceived as generic, inconsistent, and confusing, resulting in routine noncompliance. Additionally, students develop value-based self-regulation strategies, but environmental pressures create a gap between students' intentions and their behaviors. Our findings show student AI use to be a situated practice, and we discuss implications for institutions, instructors, and system tool designers to effectively support student learning with AI.
Related papers
- Excited, Skeptical, or Worried? A Multi-Institutional Study of Student Views on Generative AI in Computing Education [1.1433339874669624]
We present the results of a multi-institutional survey with responses from 410 students enrolled in the computing programs of 23 educational institutions. Students across all institution types express excitement, optimism, and gratitude toward GenAI.
arXiv Detail & Related papers (2025-10-03T15:34:44Z)
- Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study [10.71612026319996]
This study analyzed 315 student-AI conversations during a brief, quiz-based scenario across various STEM courses. Students exhibited overall low reliance on AI, and many of them could not effectively use AI for learning. Certain behavioral metrics strongly predicted AI reliance, highlighting potential behavioral mechanisms to explain AI adoption.
arXiv Detail & Related papers (2025-08-27T20:00:27Z)
- "All Roads Lead to ChatGPT": How Generative AI is Eroding Social Interactions and Student Learning Communities [0.4188114563181615]
We investigate the potential impacts of generative AI on social interactions, peer learning, and classroom dynamics. Our findings suggest that help-seeking requests are now often mediated by generative AI. Students reported feeling increasingly isolated and demotivated as the social support systems they rely on begin to break down.
arXiv Detail & Related papers (2025-04-14T00:40:58Z)
- Assessing Computer Science Student Attitudes Towards AI Ethics and Policy [8.927858368749204]
The attitudes and competencies with respect to AI ethics and policy among post-secondary students studying computer science (CS) are of particular interest. Despite computer scientists being at the forefront of learning about and using AI tools, their attitudes towards AI remain understudied.
arXiv Detail & Related papers (2025-04-06T23:03:47Z)
- Auto-assessment of assessment: A conceptual framework towards fulfilling the policy gaps in academic assessment practices [4.770873744131964]
We surveyed 117 academics from three countries (UK, UAE, and Iraq).
We identified that most academics retain positive opinions regarding AI in education.
For the first time, we propose a novel AI framework for autonomously evaluating students' work.
arXiv Detail & Related papers (2024-10-28T15:22:37Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications [44.99833362998488]
We identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- Learning to Prompt in the Classroom to Understand AI Limits: A pilot study [35.06607166918901]
Large Language Models (LLMs) and derived chatbots, like ChatGPT, have greatly improved the natural language processing capabilities of AI systems.
However, the excitement has also given rise to negative sentiments, even as AI methods demonstrate remarkable contributions.
A pilot educational intervention was performed in a high school with 21 students.
arXiv Detail & Related papers (2023-07-04T07:51:37Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time. We discuss how biased models can lead to more negative real-world outcomes for certain groups. If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Gathering Strength, Gathering Storms: The One Hundred Year Study on
Artificial Intelligence (AI100) 2021 Study Panel Report [40.38252510399319]
"Gathering Strength, Gathering Storms" is the second report in the "One Hundred Year Study on Artificial Intelligence" project.
It was written by a panel of 17 study authors, each of whom is deeply rooted in AI research.
The report concludes that AI has made a major leap from the lab to people's lives in recent years.
arXiv Detail & Related papers (2022-10-27T21:00:36Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.