From Human-Centered to Social-Centered Artificial Intelligence:
Assessing ChatGPT's Impact through Disruptive Events
- URL: http://arxiv.org/abs/2306.00227v1
- Date: Wed, 31 May 2023 22:46:48 GMT
- Authors: Skyler Wang, Ned Cooper, Margaret Eby, Eun Seo Jo
- Abstract summary: The release of recent GPT models has been a watershed moment for artificial intelligence research and society at large.
ChatGPT's impressive proficiency across technical and creative domains led to its widespread adoption.
Critiques of ChatGPT's impact within the machine learning community have coalesced around its performance or other conventional Responsible AI evaluations relating to bias, toxicity, and 'hallucination.'
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) and dialogue agents have existed for years, but
the release of recent GPT models has been a watershed moment for artificial
intelligence (AI) research and society at large. Immediately recognized for its
generative capabilities and versatility, ChatGPT's impressive proficiency
across technical and creative domains led to its widespread adoption. While
society grapples with the emerging cultural impacts of ChatGPT, critiques of
ChatGPT's impact within the machine learning community have coalesced around
its performance or other conventional Responsible AI evaluations relating to
bias, toxicity, and 'hallucination.' We argue that these latter critiques draw
heavily on a particular conceptualization of the 'human-centered' framework,
which tends to cast atomized individuals as the key recipients of both the
benefits and detriments of technology. In this article, we direct attention to
another dimension of LLMs and dialogue agents' impact: their effect on social
groups, institutions, and accompanying norms and practices. By illustrating
ChatGPT's social impact through three disruptive events, we challenge
individualistic approaches in AI development and contribute to ongoing debates
around the ethical and responsible implementation of AI systems. We hope this
effort will call attention to more comprehensive and longitudinal evaluation
tools and compel technologists to go beyond human-centered thinking and ground
their efforts through social-centered AI.
Related papers
- Designing and Evaluating Dialogue LLMs for Co-Creative Improvised Theatre [48.19823828240628]
This study presents Large Language Models (LLMs) deployed in a month-long live show at the Edinburgh Festival Fringe.
We explore the technical capabilities and constraints of on-the-spot multi-party dialogue.
Our human-in-the-loop methodology underlines the challenges of these LLMs in generating context-relevant responses.
arXiv Detail & Related papers (2024-05-11T23:19:42Z)
- Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z)
- The Social Impact of Generative AI: An Analysis on ChatGPT [0.7401425472034117]
The rapid development of Generative AI models has sparked heated discussions regarding their benefits, limitations, and associated risks.
Generative models hold immense promise across multiple domains, such as healthcare, finance, and education, to name a few.
This paper adopts a methodology to delve into the societal implications of Generative AI tools, focusing primarily on the case of ChatGPT.
arXiv Detail & Related papers (2024-03-07T17:14:22Z)
- Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
Playful interactions with AI systems naturally emerged as an important way for users to make sense of the technology.
We target this gap by investigating playful interactions exhibited by users of an emerging AI technology, ChatGPT.
Through a thematic analysis of 372 user-generated posts on the ChatGPT subreddit, we found that more than half of user discourse revolved around playful interactions.
arXiv Detail & Related papers (2024-01-16T14:44:13Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- Digital Deception: Generative Artificial Intelligence in Social Engineering and Phishing [7.1795069620810805]
This paper investigates the transformative role of Generative AI in Social Engineering (SE) attacks.
We use a theory of social engineering to identify three pillars where Generative AI amplifies the impact of SE attacks.
Our study aims to foster a deeper understanding of the risks, human implications, and countermeasures associated with this emerging paradigm.
arXiv Detail & Related papers (2023-10-15T07:55:59Z)
- Deceptive AI Ecosystems: The Case of ChatGPT [8.128368463580715]
ChatGPT has gained popularity for its capability in generating human-like responses.
This paper investigates how ChatGPT operates in the real world where societal pressures influence its development and deployment.
We examine the ethical challenges stemming from ChatGPT's deceptive human-like interactions.
arXiv Detail & Related papers (2023-06-18T10:36:19Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- ChatGPT: More than a Weapon of Mass Deception, Ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective [0.0]
This article explores the ethical problems arising from the use of ChatGPT as a kind of generative AI.
The main danger ChatGPT presents is its propensity to be used as a weapon of mass deception (WMD).
arXiv Detail & Related papers (2023-04-06T07:40:12Z)
- Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.