The Post-Turing Condition: Conceptualising Artificial Subjectivity and Synthetic Sociality
- URL: http://arxiv.org/abs/2601.12938v1
- Date: Mon, 19 Jan 2026 10:46:52 GMT
- Title: The Post-Turing Condition: Conceptualising Artificial Subjectivity and Synthetic Sociality
- Authors: Thorsten Jelinek, Patrick Glauner, Alvin Wang Graylin, Yubao Qiu
- Abstract summary: In the Post-Turing era, artificial intelligence increasingly shapes social coordination and meaning formation. The central challenge is whether processes of interpretation and shared reference are automated in ways that marginalize human participation. This paper proposes Quadrangulation as a design principle for socially embedded AI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the Post-Turing era, artificial intelligence increasingly shapes social coordination and meaning formation rather than merely automating cognitive tasks. The central challenge is therefore not whether machines become conscious, but whether processes of interpretation and shared reference are progressively automated in ways that marginalize human participation. This paper introduces the PRMO framework, relating AI design trajectories to four constitutive dimensions of human subjectivity: Perception, Representation, Meaning, and the Real. Within this framework, Synthetic Sociality denotes a technological horizon in which artificial agents negotiate coherence and social order primarily among themselves, raising the structural risk of human exclusion from meaning formation. To address this risk, the paper proposes Quadrangulation as a design principle for socially embedded AI systems, requiring artificial agents to treat the human subject as a constitutive reference within shared contexts of meaning. This work is a conceptual perspective that contributes a structural vocabulary for analyzing AI systems at the intersection of computation and society, without proposing a specific technical implementation.
Related papers
- The Vibe-Automation of Automation: A Proactive Education Framework for Computer Science in the Age of Generative AI [0.7252027234425333]
Generative artificial intelligence (GenAI) represents a qualitative shift in computer science. GenAI operates by navigating contextual and semantic coherence rather than optimizing predefined objective metrics. The paper proposes a conceptual framework structured across three analytical levels and three domains of action.
arXiv Detail & Related papers (2026-02-09T06:02:04Z) - The Principles of Human-like Conscious Machine [6.159611238789419]
We propose a substrate-independent, logically rigorous, and counterfeit-resistant sufficiency criterion for phenomenal consciousness. We argue that any machine satisfying this criterion should be regarded as conscious with at least the same level of confidence with which we attribute consciousness to other humans. We show that humans themselves can be viewed as machines that satisfy this framework and its principles.
arXiv Detail & Related papers (2025-09-21T01:11:30Z) - The next question after Turing's question: Introducing the Grow-AI test [51.56484100374058]
This study aims to extend the framework for assessing artificial intelligence, called GROW-AI. GROW-AI is designed to answer the question "Can machines grow up?" -- a natural successor to the Turing Test. The originality of the work lies in the conceptual transposition of the process of "growing" from the human world to that of artificial intelligence.
arXiv Detail & Related papers (2025-08-22T10:19:42Z) - Synthetic media and computational capitalism: towards a critical theory of artificial intelligence [0.0]
I argue that we need new critical methods capable of addressing both the technical specificity of AI systems and their role in restructuring forms of life under computational capitalism. The paper concludes by suggesting that critical reflexivity is needed to engage with the algorithmic condition without being subsumed by it.
arXiv Detail & Related papers (2025-03-22T22:59:28Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - A.I. go by many names: towards a sociotechnical definition of artificial intelligence [0.0]
Defining artificial intelligence (AI) is a persistent challenge, often muddied by technical ambiguity and varying interpretations.
This essay makes a case for a sociotechnical definition of AI, which is essential for researchers who require clarity in their work.
arXiv Detail & Related papers (2024-10-17T11:25:50Z) - Position: Towards Bidirectional Human-AI Alignment [109.57781720848669]
We argue that the research community should explicitly define and critically reflect on "alignment" to account for the bidirectional and dynamic relationship between humans and AI. We introduce the Bidirectional Human-AI Alignment framework, which not only incorporates traditional efforts to align AI with human values but also introduces the critical, underexplored dimension of aligning humans with AI.
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z) - Towards socially-competent and culturally-adaptive artificial agents: Expressive order, interactional disruptions and recovery strategies [0.0]
The overarching aim of this work is to set a framework to make the artificial agent socially competent beyond dyadic interaction.
The paper highlights how this level of competence is achieved by focusing on just three dimensions: (i) social capability, (ii) relational role, and (iii) proximity.
arXiv Detail & Related papers (2023-08-06T15:47:56Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach [18.14698948294366]
We introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design.
It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
arXiv Detail & Related papers (2020-02-04T02:30:33Z)