Learning to Adopt Generative AI
- URL: http://arxiv.org/abs/2410.19806v2
- Date: Wed, 30 Oct 2024 17:18:59 GMT
- Title: Learning to Adopt Generative AI
- Authors: Lijia Ma, Xingchen Xu, Yumei He, Yong Tan
- Abstract summary: We propose two forms of digital divide in the generative AI adoption process.
Lower-educated and non-white individuals derive higher utility gains from ChatGPT but learn about its utility at a slower rate.
Males, younger individuals, and those with an IT background not only derive higher utility per use from ChatGPT but also learn about its utility more rapidly.
- Score: 2.919534741469257
- License:
- Abstract: Recent advancements in generative AI, exemplified by ChatGPT, have dramatically transformed how people access information. Despite its powerful capabilities, the benefits it provides may not be equally distributed among individuals - a phenomenon referred to as the digital divide. Building upon prior literature, we propose two forms of digital divide in the generative AI adoption process: (i) the learning divide, capturing individuals' heterogeneous abilities to update their perceived utility of ChatGPT; and (ii) the utility divide, representing differences in individuals' actual utility derived from each use of ChatGPT. To evaluate these two divides, we develop a Bayesian learning model that incorporates demographic heterogeneities in both the utility and signal functions. Leveraging a six-month clickstream dataset, we estimate the model and find significant learning and utility divides across various demographic attributes. Interestingly, lower-educated and non-white individuals derive higher utility gains from ChatGPT but learn about its utility at a slower rate. Furthermore, males, younger individuals, and those with an IT background not only derive higher utility per use from ChatGPT but also learn about its utility more rapidly. In addition, we document a phenomenon termed the belief trap, wherein users underestimate ChatGPT's utility, opt not to use the tool, and consequently lack new experiences to update their perceptions, leading to continued underutilization. Our simulation further demonstrates that the learning divide can significantly affect the probability of falling into the belief trap, another form of the digital divide in adoption outcomes (i.e., the outcome divide); however, offering training programs can alleviate the belief trap and mitigate the divide.
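The abstract describes the Bayesian learning mechanism and the belief trap only verbally. The sketch below is a minimal, hypothetical illustration of that mechanism, assuming a normal-normal Bayesian update of perceived utility, a simple use/no-use threshold rule, and a block of mandated initial use as a stand-in for a training program. All function and parameter names (simulate_user, signal_sd, outside_option, forced_uses) and the numeric values are illustrative assumptions, not quantities estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_user(true_utility, signal_sd, prior_mean, prior_var,
                  outside_option=0.0, periods=180, forced_uses=0):
    """Simulate one user's Bayesian learning about ChatGPT's per-use utility.

    true_utility : the user's actual per-use utility gain (utility divide)
    signal_sd    : noise in each experience signal; smaller means faster
                   learning (learning divide)
    forced_uses  : number of initial periods with mandated use, a stand-in
                   for a training program
    """
    mean, var = prior_mean, prior_var
    for t in range(periods):
        use = t < forced_uses or mean > outside_option
        if not use:
            continue  # no use -> no new signal -> belief stays frozen
        signal = true_utility + rng.normal(0.0, signal_sd)
        # Normal-normal conjugate update of the perceived utility
        post_var = 1.0 / (1.0 / var + 1.0 / signal_sd ** 2)
        mean = post_var * (mean / var + signal / signal_sd ** 2)
        var = post_var
    # Belief trap: the tool is genuinely useful, but the final belief sits
    # below the outside option, so the tool goes unused.
    trapped = true_utility > outside_option and mean <= outside_option
    return mean, trapped

# Compare a fast and a slow learner with the same true utility, with and
# without a short period of mandated use.
for label, sd, forced in [("fast learner", 1.0, 0),
                          ("slow learner", 4.0, 0),
                          ("slow learner + training", 4.0, 10)]:
    traps = [simulate_user(true_utility=0.5, signal_sd=sd,
                           prior_mean=0.2, prior_var=1.0,
                           forced_uses=forced)[1]
             for _ in range(2000)]
    print(f"{label:24s} belief-trap rate: {np.mean(traps):.2f}")
```

Under these assumptions, a user whose posterior mean falls below the outside option stops using the tool, receives no further signals, and never revises the pessimistic belief; noisier signals (slower learning) keep the belief near the prior for longer, while the mandated-use condition illustrates how a training program can supply enough signals to escape the trap.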
Related papers
- Impact of the Availability of ChatGPT on Software Development: A Synthetic Difference in Differences Estimation using GitHub Data [49.1574468325115]
ChatGPT is an AI tool that enhances software production efficiency.
We estimate ChatGPT's effects on the number of git pushes, repositories, and unique developers per 100,000 people.
These results suggest that AI tools like ChatGPT can substantially boost developer productivity, though further analysis is needed to address potential downsides such as low-quality code and privacy concerns.
arXiv Detail & Related papers (2024-06-16T19:11:15Z)
- The Emerging AI Divide in the United States [2.0359927301080116]
This study characterizes spatial differences in U.S. residents' knowledge of a new generative AI tool, ChatGPT.
We observe the highest rates of users searching for ChatGPT in West Coast states and persistently low rates of search in Appalachian and Gulf states.
Although generative AI technologies may be novel, early differences in uptake appear to be following familiar paths of digital marginalization.
arXiv Detail & Related papers (2024-04-18T08:33:35Z)
- Economic and Financial Learning with Artificial Intelligence: A Mixed-Methods Study on ChatGPT [0.05152756192881158]
This study explores ChatGPT's potential as an educational tool, focusing on user perceptions, experiences and learning outcomes.
The study reveals a notable positive shift in perceptions after exposure, underscoring the efficacy of ChatGPT.
However, challenges such as prompting effectiveness and information accuracy emerged as pivotal concerns.
arXiv Detail & Related papers (2024-02-23T11:55:43Z)
- Analysis of the User Perception of Chatbots in Education Using A Partial Least Squares Structural Equation Modeling Approach [0.0]
Key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, were studied.
Results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use (PEOU) and Perceived Usefulness (PU).
arXiv Detail & Related papers (2023-11-07T00:44:56Z)
- Exploring User Perspectives on ChatGPT: Applications, Perceptions, and Implications for AI-Integrated Education [40.38809129759498]
ChatGPT is most commonly used in the domains of higher education, K-12 education, and practical skills training.
On one hand, some users view it as a transformative tool capable of amplifying student self-efficacy and learning motivation.
On the other hand, there is a degree of apprehension among concerned users.
arXiv Detail & Related papers (2023-05-22T15:13:14Z)
- Learning gain differences between ChatGPT and human tutor generated algebra hints [4.438259529250529]
We conduct the first learning gain evaluation of ChatGPT by comparing the efficacy of its hints with hints authored by human tutors.
We find that 70% of hints produced by ChatGPT passed our manual quality checks and that both human and ChatGPT conditions produced positive learning gains.
arXiv Detail & Related papers (2023-02-14T07:20:48Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm, called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning.
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop a large-scale, interaction-centric benchmark, TrajNet++, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)