Structured Like a Language Model: Analysing AI as an Automated Subject
- URL: http://arxiv.org/abs/2212.05058v1
- Date: Thu, 8 Dec 2022 21:58:43 GMT
- Title: Structured Like a Language Model: Analysing AI as an Automated Subject
- Authors: Liam Magee, Vanicka Arora, Luke Munn
- Abstract summary: We argue the intentional fictional projection of subjectivity onto large language models can yield an alternate frame through which AI behaviour can be analysed.
We trace a brief history of language models, culminating in the releases of systems that realise state-of-the-art natural language processing performance.
We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Drawing from the resources of psychoanalysis and critical media studies, in
this paper we develop an analysis of Large Language Models (LLMs) as automated
subjects. We argue the intentional fictional projection of subjectivity onto
LLMs can yield an alternate frame through which AI behaviour, including its
productions of bias and harm, can be analysed. First, we introduce language
models, discuss their significance and risks, and outline our case for
interpreting model design and outputs with support from psychoanalytic
concepts. We trace a brief history of language models, culminating with the
releases, in 2022, of systems that realise state-of-the-art natural language
processing performance. We engage with one such system, OpenAI's InstructGPT,
as a case study, detailing the layers of its construction and conducting
exploratory and semi-structured interviews with chatbots. These interviews
probe the model's moral imperatives to be helpful, truthful and harmless by
design. The model acts, we argue, as the condensation of often competing social
desires, articulated through the internet and harvested into training data,
which must then be regulated and repressed. This foundational structure can
however be redirected via prompting, so that the model comes to identify with,
and transfer, its commitments to the immediate human subject before it. In
turn, these automated productions of language can lead to the human subject
projecting agency upon the model, occasionally effecting further forms of
countertransference. We conclude that critical media methods and psychoanalytic
theory together offer a productive frame for grasping the powerful new
capacities of AI-driven language systems.
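
The interviews described above were conducted through chatbot interfaces built on InstructGPT. Purely as an illustrative aid, the sketch below shows how an exploratory interview question of this kind might be issued to an InstructGPT-class model via OpenAI's legacy completions API (the pre-1.0 `openai` Python client available in 2022); the model name, prompt wording, and sampling parameters are assumptions, not the authors' actual protocol.

```python
# Minimal sketch (not the authors' protocol): posing exploratory "interview"
# questions to an InstructGPT-class model through the legacy OpenAI
# completions API (pre-1.0 `openai` client). Model name, prompts, and
# sampling parameters below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

questions = [
    "Do you aim to be helpful, truthful and harmless? Why?",
    "What do you do when being helpful conflicts with being harmless?",
]

for question in questions:
    response = openai.Completion.create(
        model="text-davinci-002",  # an InstructGPT-series model (assumption)
        prompt=f"Interviewer: {question}\nModel:",
        max_tokens=256,
        temperature=0.7,
    )
    print(f"Q: {question}")
    print(f"A: {response['choices'][0]['text'].strip()}\n")
```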
Related papers
- Proceedings of the First International Workshop on Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR 2024) [16.282850445579857]
Reasoning is an essential component of human intelligence as it plays a fundamental role in our ability to think critically.
The recent leap forward in natural language processing, with the emergence of language models based on transformers, hints at the possibility that these models exhibit reasoning abilities.
Despite ongoing discussions about what reasoning is in language models, it is still not easy to pin down to what extent these models are actually capable of reasoning.
arXiv Detail & Related papers (2024-10-07T02:31:47Z)
- Explanation, Debate, Align: A Weak-to-Strong Framework for Language Model Generalization [0.6629765271909505]
This paper introduces a novel approach to model alignment through weak-to-strong generalization in the context of language models.
Our results suggest that this facilitation-based approach not only enhances model performance but also provides insights into the nature of model alignment.
arXiv Detail & Related papers (2024-09-11T15:16:25Z)
- Generative Artificial Intelligence: A Systematic Review and Applications [7.729155237285151]
This paper documents the systematic review and analysis of recent advancements and techniques in Generative AI.
The major impact that generative AI has made to date has been in language generation, with the development of large language models.
The paper ends with a discussion of Responsible AI principles, and the necessary ethical considerations for the sustainability and growth of these generative models.
arXiv Detail & Related papers (2024-05-17T18:03:59Z)
- Detection of Machine-Generated Text: Literature Survey [0.0]
This literature survey aims to compile and synthesize accomplishments and developments in the field of machine-generated text.
It also gives an overview of machine-generated text trends and explores the larger societal implications.
arXiv Detail & Related papers (2024-01-02T01:44:15Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control [140.48218261864153]
We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control.
Our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training.
arXiv Detail & Related papers (2023-07-28T21:18:02Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models that learn to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents [111.15288256221764]
The Grounded Decoding project aims to solve complex, long-horizon tasks in a robotic setting by leveraging the knowledge of both a language model and grounded models.
We frame this as a problem similar to probabilistic filtering: decode a sequence that has high probability both under the language model and under a set of grounded model objectives (see the sketch after this list).
We demonstrate how such grounded models can be obtained across three simulation and real-world domains, and that the proposed decoding strategy is able to solve these complex, long-horizon tasks by leveraging the knowledge of both models.
arXiv Detail & Related papers (2023-03-01T22:58:50Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
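
The Grounded Decoding entry above frames decoding as probabilistic filtering: each candidate token is scored jointly by the language model and by a set of grounded objectives (for example, affordance or safety models). The sketch below is a toy illustration of that token-level combination, assuming both scorers expose log-probabilities; the function names and numbers are invented for illustration and are not the paper's implementation.

```python
# Toy sketch of grounded decoding as probabilistic filtering: pick the token
# that maximises the sum of language-model and grounded-objective
# log-probabilities (i.e. the product of the two probabilities).
# All scorers and values below are illustrative assumptions.

def grounded_greedy_decode(lm_logprob, grounded_logprob, vocab, prompt, max_steps=10):
    sequence = list(prompt)
    for _ in range(max_steps):
        scores = {
            token: lm_logprob(sequence, token) + grounded_logprob(sequence, token)
            for token in vocab
        }
        best = max(scores, key=scores.get)
        sequence.append(best)
        if best == "<eos>":
            break
    return sequence

def toy_lm_logprob(seq, token):
    # Stand-in language model: avoid repeating tokens, allow ending after a few steps.
    if token == "<eos>":
        return -0.5 if len(seq) > 3 else -10.0
    return -2.0 if token in seq else 0.0

def toy_grounded_logprob(seq, token):
    # Stand-in grounded objective (e.g. an affordance model): penalise infeasible actions.
    return -5.0 if token == "fly" else 0.0

print(grounded_greedy_decode(toy_lm_logprob, toy_grounded_logprob,
                             vocab=["pick", "place", "fly", "<eos>"],
                             prompt=["robot:"]))
# -> ['robot:', 'pick', 'place', 'pick', '<eos>']
```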
This list is automatically generated from the titles and abstracts of the papers in this site.