Gender, Age, and Technology Education Influence the Adoption and
Appropriation of LLMs
- URL: http://arxiv.org/abs/2310.06556v1
- Date: Tue, 10 Oct 2023 12:11:39 GMT
- Title: Gender, Age, and Technology Education Influence the Adoption and
Appropriation of LLMs
- Authors: Fiona Draxler, Daniel Buschek, Mikke Tavast, Perttu Hämäläinen,
Albrecht Schmidt, Juhi Kulshrestha, Robin Welsch
- Abstract summary: Large Language Models (LLMs) have become increasingly integrated into critical activities of daily life.
This study investigates the usage of LLMs among 1,500 representative US citizens.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs) such as ChatGPT have become increasingly
integrated into critical activities of daily life, raising concerns about
equitable access and utilization across diverse demographics. This study
investigates the usage of LLMs among 1,500 representative US citizens.
Remarkably, 42% of participants reported utilizing an LLM. Our findings reveal
a gender gap in LLM technology adoption (more male users than female users)
with complex interaction patterns regarding age. Technology-related education
eliminates the gender gap in our sample. Moreover, expert users are more likely
than novices to list professional tasks as typical application scenarios,
suggesting discrepancies in effective usage at the workplace. These results
underscore the importance of providing education in artificial intelligence in
our technology-driven society to promote equitable access to and benefits from
LLMs. We call for both international replication beyond the US and
longitudinal observation of adoption.
Related papers
- REALM: A Dataset of Real-World LLM Use Cases
REALM is a dataset of over 94,000 LLM use cases collected from Reddit and news articles.
REALM captures two key dimensions: the diverse applications of LLMs and the demographics of their users.
It categorizes LLM applications and explores how users' occupations relate to the types of applications they use.
arXiv Detail & Related papers (2025-03-24T15:39:25Z)
- Is ChatGPT Massively Used by Students Nowadays? A Survey on the Use of Large Language Models such as ChatGPT in Educational Settings
This study investigates how 395 students aged 13 to 25 years old in France and Italy integrate Large Language Models (LLMs) into their educational routines.
Key findings include the widespread use of these tools across all age groups and disciplines.
Results also show gender disparities, raising concerns about an emerging AI literacy and technological gender gap.
arXiv Detail & Related papers (2024-12-23T11:29:44Z)
- Embracing AI in Education: Understanding the Surge in Large Language Model Use by Secondary Students
Large language models (LLMs) like OpenAI's ChatGPT have opened up new avenues in education.
Despite school restrictions, our survey of over 300 middle and high school students revealed that a remarkable 70% of students have utilized LLMs.
We propose a few ideas to address such issues, including subject-specific models, personalized learning, and AI classrooms.
arXiv Detail & Related papers (2024-11-27T19:19:34Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even superhuman persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Are You Human? An Adversarial Benchmark to Expose LLMs
Large Language Models (LLMs) have demonstrated an alarming ability to impersonate humans in conversation.
We evaluate text-based prompts designed as challenges to expose LLM imposters in real-time.
arXiv Detail & Related papers (2024-10-12T15:33:50Z)
- Secret Use of Large Language Model (LLM)
Large Language Models (LLMs) have decentralized the responsibility for the transparency of AI usage.
Our study investigated the contexts and causes behind the secret use of LLMs.
We found that such secretive behavior is often triggered by certain tasks, transcending demographic and personality differences among users.
arXiv Detail & Related papers (2024-09-28T20:31:53Z)
- AI Meets the Classroom: When Does ChatGPT Harm Learning?
We study how generative AI and specifically large language models (LLMs) impact learning in coding classes.
We show across three studies that LLM usage can have positive and negative effects on learning outcomes.
arXiv Detail & Related papers (2024-08-29T17:07:46Z)
- Modulating Language Model Experiences through Frictions
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities for critical thinking in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- The GPT Surprise: Offering Large Language Model Chat in a Massive Coding Class Reduced Engagement but Increased Adopters' Exam Performances
Large language models (LLMs) are quickly being adopted in a wide range of learning experiences.
We conducted a large-scale randomized control trial with 5,831 students from 146 countries in an online coding class.
We estimate positive effects on exam performance for adopters (the students who used the tool), but across all students the advertisement of GPT-4 led to a significant average decrease in exam participation.
arXiv Detail & Related papers (2024-04-25T15:39:22Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Aligning Large Language Models with Human: A Survey
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks.
Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or factually incorrect information.
This survey presents a comprehensive overview of these alignment technologies.
arXiv Detail & Related papers (2023-07-24T17:44:58Z)
- Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks
Large language models (LLMs) are remarkable data annotators.
Crowdsourcing, an important, inexpensive way to obtain human annotations, may itself be impacted by LLMs.
We estimate that 33-46% of crowd workers used LLMs when completing a task.
arXiv Detail & Related papers (2023-06-13T16:46:24Z)
- Understanding the Usability Challenges of Machine Learning in High-Stakes Decision Making
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)