IMAGINE: An Integrated Model of Artificial Intelligence-Mediated
Communication Effects
- URL: http://arxiv.org/abs/2212.08658v1
- Date: Tue, 13 Dec 2022 19:48:38 GMT
- Title: IMAGINE: An Integrated Model of Artificial Intelligence-Mediated
Communication Effects
- Authors: Frederic Guerrero-Sole
- Abstract summary: I propose the definition of the Integrated Model of Artificial Intelligence-Mediated Communication Effects (IMAGINE).
The proposed conceptual framework is intended to help scholars theorize and do research in a scenario of continuous, real-time connection between AI measurement of people's responses to media and AI creation of content.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) is transforming all fields of knowledge and
production. From surgery and autonomous driving to image and video creation, AI
seems to make possible hitherto unimaginable processes of automation and
efficient creation. Media and communication are no exception, and we are
currently witnessing the dawn of powerful AI tools capable of creating artistic
images from simple keywords, or of capturing emotions from facial expressions.
These examples may be only the beginning of what could become the engines for
automatic, real-time AI creation of media content linked to the emotional and
behavioural responses of individuals. Although it may seem we are still far
from that point, it is already time to adapt our theories of media to the
hypothetical scenario in which content production can be done without human
intervention, governed by the controlled reactions of the individual to the
exposure to media content. Following that, I propose the definition of the
Integrated Model of Artificial Intelligence-Mediated Communication Effects
(IMAGINE), and its consequences for the way we understand media evolution
(Scolari, 2012) and think about media effects (Potter, 2010). The proposed
conceptual framework is intended to help scholars theorize and do research in a
scenario of continuous, real-time connection between AI measurement of people's
responses to media and AI creation of content, with the objective of optimizing
and maximizing the processes of influence. Parasocial interaction and real-time
beautification are used as examples to model the functioning of the IMAGINE
process.
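To make the model's core mechanism concrete, below is a minimal Python sketch of the continuous measure-and-create loop the abstract describes, using real-time beautification as the adapted parameter. Every name, signature, and the update rule are hypothetical illustrations of the conceptual framework, not an implementation from the paper.

```python
# Hypothetical sketch of the IMAGINE feedback loop: AI measurement of a
# viewer's responses continuously steers AI content creation. Nothing here
# comes from the paper itself; it only illustrates the closed loop.
import random
from dataclasses import dataclass

@dataclass
class Response:
    """One measured reaction to a piece of media content."""
    emotion: float    # e.g. valence inferred from facial expression, in [-1, 1]
    attention: float  # e.g. gaze-based engagement, in [0, 1]

def measure_response(content: str) -> Response:
    """Stand-in for AI measurement (affect recognition, gaze tracking, ...)."""
    return Response(emotion=random.uniform(-1.0, 1.0),
                    attention=random.uniform(0.0, 1.0))

def generate_content(beautification: float) -> str:
    """Stand-in for AI content creation (e.g. a generative image/video model)."""
    return f"frame(beautification={beautification:.2f})"

def imagine_loop(steps: int = 5) -> None:
    """Continuous create -> measure -> adapt cycle aimed at maximizing influence."""
    beautification = 0.5
    for _ in range(steps):
        content = generate_content(beautification)
        response = measure_response(content)
        # Nudge the creation parameter toward responses that raise engagement.
        beautification += 0.1 * response.emotion * response.attention
        beautification = min(max(beautification, 0.0), 1.0)
        print(content, "->", response)

if __name__ == "__main__":
    imagine_loop()
```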
Related papers
- Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset [113.25650486482762]
We introduce the Seamless Interaction dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage. This dataset enables the development of AI technologies that understand dyadic embodied dynamics. We develop a suite of models that utilize the dataset to generate dyadic motion gestures and facial expressions aligned with human speech.
arXiv Detail & Related papers (2025-06-27T18:09:49Z)
- Embodied AI Agents: Modeling the World [188.85697524284834]
This paper describes our research on AI agents embodied in visual, virtual or physical forms. We propose that the development of world models is central to reasoning and planning of embodied AI agents. We also propose to learn the mental world model of users to enable better human-agent collaboration.
arXiv Detail & Related papers (2025-06-27T16:05:34Z)
- Leveraging LLMs with Iterative Loop Structure for Enhanced Social Intelligence in Video Question Answering [13.775516653315103]
Social intelligence is essential for effective communication and adaptive responses.
Current video-based methods for social intelligence rely on general video recognition or emotion recognition techniques.
We propose the Looped Video Debating framework, which integrates Large Language Models with visual information.
arXiv Detail & Related papers (2025-03-27T06:14:21Z)
- Movie Gen: SWOT Analysis of Meta's Generative AI Foundation Model for Transforming Media Generation, Advertising, and Entertainment Industries [0.8463972278020965]
This paper presents a comprehensive SWOT analysis of Meta's Movie Gen, a cutting-edge generative AI foundation model.
We explore its strengths, including high-resolution video generation, precise editing, and seamless audio integration.
We examine the evolving regulatory and ethical considerations surrounding generative AI, focusing on issues like content authenticity, cultural representation, and responsible use.
arXiv Detail & Related papers (2024-12-05T03:01:53Z)
- KI-Bilder und die Widerständigkeit der Medienkonvergenz: Von primärer zu sekundärer Intermedialität? (AI Images and the Resistance of Media Convergence: From Primary to Secondary Intermediality?) [0.0]
The article presents some current observations on the integration of AI-generated images into processes of media convergence.
It draws on two different concepts of intermediality: primary intermediality and secondary intermediality.
The thesis is that there can be no talk of a seamless 'integration' of AI images into the wider media landscape at the moment.
arXiv Detail & Related papers (2024-06-21T09:15:19Z)
- Maia: A Real-time Non-Verbal Chat for Human-AI Interaction [10.580858171606167]
We propose an alternative to text-based Human-AI interaction.
By leveraging nonverbal visual communication through facial expressions and head and body movements, we aim to enhance engagement.
Our approach is not art-specific and can be adapted to various paintings, animations, and avatars.
arXiv Detail & Related papers (2024-02-09T13:07:22Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z)
- ArK: Augmented Reality with Knowledge Interactive Emergent Ability [115.72679420999535]
We develop an infinite agent that learns to transfer knowledge memory from general foundation models to novel domains.
The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK).
We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes.
arXiv Detail & Related papers (2023-05-01T17:57:01Z)
- A Portrait of Emotion: Empowering Self-Expression through AI-Generated Art [0.0]
We investigated the potential and limitations of generative artificial intelligence (AI) in reflecting the authors' cognitive processes through creative expression.
Results show a preference for images based on the descriptions of the authors' emotions over the main events.
Our research framework with generative AIs can help design AI-based interventions in related fields.
arXiv Detail & Related papers (2023-04-26T06:54:53Z)
- Learning Universal Policies via Text-Guided Video Generation [179.6347119101618]
A goal of artificial intelligence is to construct an agent that can solve a wide variety of tasks.
Recent progress in text-guided image synthesis has yielded models with an impressive ability to generate complex novel images.
We investigate whether such tools can be used to construct more general-purpose agents.
arXiv Detail & Related papers (2023-01-31T21:28:13Z)
- Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning [20.02604302565522]
A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language.
Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment.
We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time.
arXiv Detail & Related papers (2021-12-07T15:17:27Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Human in the Loop for Machine Creativity [0.0]
We conceptualize existing and future human-in-the-loop (HITL) approaches for creative applications.
We examine and speculate on long term implications for models, interfaces, and machine creativity.
We envision multimodal HITL processes, where texts, visuals, sounds, and other information are coupled together, with automated analysis of humans and environments.
arXiv Detail & Related papers (2021-10-07T15:42:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.