8bit-GPT: Exploring Human-AI Interaction on Obsolete Macintosh Operating Systems
- URL: http://arxiv.org/abs/2511.05025v1
- Date: Fri, 07 Nov 2025 06:56:04 GMT
- Title: 8bit-GPT: Exploring Human-AI Interaction on Obsolete Macintosh Operating Systems
- Authors: Hala Sheta,
- Abstract summary: 8bit-GPT is a language model simulated on a legacy Macintosh Operating System. This work aims to foreground the presence of chatbots as a tool by defamiliarizing the interface and prioritizing inefficient interaction.
- Score: 0.8122270502556375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of assistive chatbots offering efficient, personalized communication has driven widespread over-reliance on them for decision-making, information-seeking and everyday tasks. This dependence has been found to harm information retention and to foster superficial emotional attachment. As such, this work introduces 8bit-GPT, a language model simulated on a legacy Macintosh Operating System, to evoke reflection on the nature of Human-AI interaction and the consequences of anthropomorphic rhetoric. Drawing on reflective design principles such as slow-technology and counterfunctionality, this work aims to foreground the presence of chatbots as a tool by defamiliarizing the interface and prioritizing inefficient interaction, creating a friction between the familiar and the unfamiliar.
Related papers
- Learning Whole-Body Human-Humanoid Interaction from Human-Human Demonstrations [63.80827184637476]
We introduce D-STAR, a hierarchical policy that disentangles when to act from where to act. We validate our framework through extensive and rigorous simulations.
arXiv Detail & Related papers (2026-01-14T14:37:06Z)
- Neural Transparency: Mechanistic Interpretability Interfaces for Anticipating Model Behaviors for Personalized AI [9.383958408772694]
We introduce an interface that enables neural transparency by exposing language model internals during chatbot design. Our approach extracts behavioral trait vectors by computing differences in neural activations between contrastive system prompts that elicit opposing behaviors. This work offers a path for how interpretability can be operationalized for non-technical users, establishing a foundation for safer, more aligned human-AI interactions.
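The contrastive extraction described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the function names, array shapes, and scoring step are assumptions. The idea is that a trait vector is the mean activation under a trait-eliciting prompt minus the mean activation under its opposite, and new activations are scored by projecting onto that direction.

```python
import numpy as np

def extract_trait_vector(pos_activations, neg_activations):
    """Trait vector = mean activation under the trait-eliciting prompt
    minus mean activation under the opposing prompt."""
    pos = np.asarray(pos_activations).mean(axis=0)
    neg = np.asarray(neg_activations).mean(axis=0)
    return pos - neg

def trait_score(activation, trait_vector):
    """Project a new activation onto the normalized trait direction;
    larger values indicate stronger expression of the trait."""
    v = trait_vector / np.linalg.norm(trait_vector)
    return float(np.dot(activation, v))
```

In practice the activations would come from a fixed layer of the language model under the two contrastive system prompts; here they are just arrays.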
arXiv Detail & Related papers (2025-10-31T20:03:52Z)
- HUMOF: Human Motion Forecasting in Interactive Social Scenes [29.621970821619424]
Complex scenes present significant challenges for predicting human behaviour due to the abundance of interaction information. We propose an effective method for human motion forecasting in interactive scenes. Our method achieves state-of-the-art performance across four public datasets.
arXiv Detail & Related papers (2025-06-04T09:21:54Z)
- FABG: End-to-end Imitation Learning for Embodied Affective Human-Robot Interaction [3.8177867835232004]
This paper proposes FABG (Facial Affective Behavior Generation), an end-to-end imitation learning system for human-robot interaction. We develop an immersive virtual reality (VR) demonstration system that allows operators to perceive stereoscopic environments. We deploy FABG on a real-world 25-degree-of-freedom humanoid robot, validating its effectiveness through four fundamental interaction tasks.
arXiv Detail & Related papers (2025-03-03T09:58:04Z)
- Multi-face emotion detection for effective Human-Robot Interaction [0.0]
This research proposes a facial emotion detection interface integrated into a mobile humanoid robot. Various deep neural network models for facial expression recognition were developed and evaluated.
arXiv Detail & Related papers (2025-01-13T11:12:47Z)
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms the representative models regarding objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
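One plausible way to serialize verbal and non-verbal cues into a single text transcript that an LLM can consume is sketched below. The event format and function are assumptions for illustration, not the paper's actual pipeline: non-verbal cues are rendered as bracketed annotations interleaved with speech by timestamp.

```python
def build_multimodal_transcript(events):
    """events: list of (timestamp_s, modality, content) tuples,
    e.g. (1.0, 'speech', 'Hi there') or (2.0, 'gaze', 'looks away').
    Speech is rendered verbatim; non-verbal cues become parenthesized
    annotations, all ordered by timestamp."""
    lines = []
    for t, modality, content in sorted(events, key=lambda e: e[0]):
        if modality == "speech":
            lines.append(f"[{t:.1f}s] {content}")
        else:
            lines.append(f"[{t:.1f}s] ({modality}: {content})")
    return "\n".join(lines)
```

The resulting text block can then be passed to an LLM with a prompt asking it to rate engagement, which is the general shape of fusion the abstract describes.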
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction.
The experimental results demonstrate that MPI exhibits remarkable improvement by 10% to 64% compared with previous state-of-the-art in real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation [38.227022474450834]
We present an addressee estimation model with improved performance in comparison with the previous state-of-the-art. We also propose several ways to incorporate explainability and transparency in the aforementioned architecture.
arXiv Detail & Related papers (2024-05-20T13:09:32Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- VIRT: Improving Representation-based Models for Text Matching through Virtual Interaction [50.986371459817256]
We propose a novel Virtual InteRacTion mechanism, termed VIRT, to enable full and deep interaction modeling in representation-based models.
VIRT asks representation-based encoders to conduct virtual interactions to mimic the behaviors as interaction-based models do.
arXiv Detail & Related papers (2021-12-08T09:49:28Z)
- INVIGORATE: Interactive Visual Grounding and Grasping in Clutter [56.00554240240515]
INVIGORATE is a robot system that interacts with humans through natural language and grasps a specified object in clutter.
We train separate neural networks for object detection, for visual grounding, for question generation, and for OBR detection and grasping.
We build a partially observable Markov decision process (POMDP) that integrates the learned neural network modules.
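The belief-update idea behind integrating learned modules into a POMDP can be sketched as follows. This is a minimal hypothetical illustration, assuming the neural modules supply per-object observation likelihoods; the state space, function names, and confidence threshold are assumptions, not INVIGORATE's actual design.

```python
def update_belief(belief, obs_likelihood):
    """One Bayesian belief update: posterior proportional to
    prior * P(obs | state).
    belief: dict mapping candidate object -> probability
    obs_likelihood: dict mapping candidate object -> P(obs | object)"""
    posterior = {s: belief[s] * obs_likelihood.get(s, 0.0) for s in belief}
    z = sum(posterior.values())
    if z == 0:
        return belief  # uninformative observation; keep the prior
    return {s: p / z for s, p in posterior.items()}

def select_action(belief, threshold=0.8):
    """Grasp when confident enough about the referred object;
    otherwise ask a clarifying question about the best candidate."""
    best = max(belief, key=belief.get)
    return ("grasp", best) if belief[best] >= threshold else ("ask", best)
```

Here the grounding and detection networks would provide `obs_likelihood`, and the ask-versus-grasp choice is the kind of decision the POMDP policy makes under uncertainty.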
arXiv Detail & Related papers (2021-08-25T07:35:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.