Generative AI Practices, Literacy, and Divides: An Empirical Analysis in the Italian Context
- URL: http://arxiv.org/abs/2512.03671v1
- Date: Wed, 03 Dec 2025 11:01:28 GMT
- Title: Generative AI Practices, Literacy, and Divides: An Empirical Analysis in the Italian Context
- Authors: Beatrice Savoldi, Giuseppe Attanasio, Olga Gorodetskaya, Marta Marchiori Manerba, Elisa Bassignana, Silvia Casola, Matteo Negri, Tommaso Caselli, Luisa Bentivogli, Alan Ramponi, Arianna Muti, Nicoletta Balbo, Debora Nozza
- Abstract summary: This study presents the first comprehensive empirical mapping of GenAI adoption, usage patterns, and literacy in Italy. Our findings reveal widespread adoption for both work and personal use, including sensitive tasks like emotional support and medical advice. We identify a significant gender divide where women are half as likely to adopt GenAI and use it less frequently than men.
- Score: 32.495879271249414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of Artificial Intelligence (AI) language technologies, particularly generative AI (GenAI) chatbots accessible via conversational interfaces, is transforming digital interactions. While these tools hold societal promise, they also risk widening digital divides due to uneven adoption and low awareness of their limitations. This study presents the first comprehensive empirical mapping of GenAI adoption, usage patterns, and literacy in Italy, based on newly collected survey data from 1,906 Italian-speaking adults. Our findings reveal widespread adoption for both work and personal use, including sensitive tasks like emotional support and medical advice. Crucially, GenAI is supplanting other technologies to become a primary information source: this trend persists despite low user digital literacy, posing a risk as users struggle to recognize errors or misinformation. Moreover, we identify a significant gender divide -- particularly pronounced in older generations -- where women are half as likely to adopt GenAI and use it less frequently than men. While we find literacy to be a key predictor of adoption, it only partially explains this disparity, suggesting that other barriers are at play. Overall, our data provide granular insights into the multipurpose usage of GenAI, highlighting the dual need for targeted educational initiatives and further investigation into the underlying barriers to equitable participation that competence alone cannot explain.
Related papers
- Generative AI in Saudi Arabia: A National Survey of Adoption, Risks, and Public Perceptions [0.6010778467667774]
Generative Artificial Intelligence (GenAI) is rapidly becoming embedded in Saudi Arabia's digital transformation under Vision 2030. This study provides an early snapshot of GenAI engagement among Saudi nationals.
arXiv Detail & Related papers (2026-01-26T07:40:41Z) - Generative AI in Sociological Research: State of the Discipline [0.0]
Generative artificial intelligence (GenAI) has garnered considerable attention for its potential utility in research and scholarship. Early commentators have articulated concerns about how GenAI usage comes with enormous environmental costs, serious social risks, and a tendency to produce low-quality content. Our study focuses on sociological research as our site, and here we present findings from a survey of 433 authors of articles published in 50 sociology journals in the last five years.
arXiv Detail & Related papers (2025-11-21T01:34:28Z) - "We need to avail ourselves of GenAI to enhance knowledge distribution": Empowering Older Adults through GenAI Literacy [0.49157446832511503]
Older adults often exhibit greater reservations about adopting emerging technologies. This study examines strategies for delivering GenAI literacy to older adults. Quantitative data indicated a trend toward improved AI literacy, though the results were not statistically significant.
arXiv Detail & Related papers (2025-06-06T16:38:37Z) - GenAI vs. Human Fact-Checkers: Accurate Ratings, Flawed Rationales [2.3475022003300055]
GPT-4o, one of the most used AI models in consumer applications, outperforms other models, but all models exhibit only moderate agreement with human coders. We also assess the effectiveness of summarized versus full content inputs, finding that summarized content holds promise for improving efficiency without sacrificing accuracy.
arXiv Detail & Related papers (2025-02-20T17:47:40Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI [41.96102438774773]
This work presents the findings from a university-level competition, which challenged participants to design prompts for eliciting biased outputs from GenAI tools.
We quantitatively and qualitatively analyze the competition submissions and identify a diverse set of biases in GenAI and strategies employed by participants to induce bias in GenAI.
arXiv Detail & Related papers (2024-10-20T18:44:45Z) - Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the research question of measuring human contribution in AI-assisted content generation. By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
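The measure summarized above can be sketched in formula form; the symbols $X$ (human input) and $Y$ (AI-assisted output) are labels chosen here for illustration, not notation taken from the paper:

```latex
% Proportional human contribution: mutual information between human
% input X and AI-assisted output Y, normalized by the entropy
% (expected self-information) of the output Y.
C_{\text{human}} = \frac{I(X; Y)}{H(Y)},
\qquad
I(X; Y) = H(Y) - H(Y \mid X)
```

Under this reading, $C_{\text{human}}$ ranges from 0 (the output carries no information about the human input) to 1 (the output is fully determined by it).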
arXiv Detail & Related papers (2024-08-27T05:56:04Z) - Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.