IMAGINE: An Integrated Model of Artificial Intelligence-Mediated
Communication Effects
- URL: http://arxiv.org/abs/2212.08658v1
- Date: Tue, 13 Dec 2022 19:48:38 GMT
- Title: IMAGINE: An Integrated Model of Artificial Intelligence-Mediated
Communication Effects
- Authors: Frederic Guerrero-Sole
- Abstract summary: I propose the definition of the Integrated Model of Artificial Intelligence-Mediated Communication Effects (IMAGINE).
The proposed conceptual framework is intended to help scholars theorize and conduct research in a scenario of continuous real-time connection between AI measurement of people's responses to media and AI creation of content.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) is transforming all fields of knowledge and
production. From surgery and autonomous driving to image and video creation, AI
seems to make possible hitherto unimaginable processes of automation and
efficient creation. Media and communication are no exception, and we are
currently witnessing the dawn of powerful AI tools capable of creating artistic
images from simple keywords or of capturing emotions from facial expressions.
These examples may be only the beginning of what may in the future become the
engines for automatic, real-time AI creation of media content linked to the
emotional and behavioural responses of individuals. Although it may seem we are
still far from that point, the moment has already come to adapt our theories
about media to the hypothetical scenario in which content production can be done
without human intervention and is governed by the control of individuals'
reactions to their exposure to media content. Following that, I propose the
definition of the Integrated Model of Artificial Intelligence-Mediated
Communication Effects (IMAGINE), and its consequences for the way we understand
media evolution (Scolari, 2012) and think about media effects (Potter,
2010). The proposed conceptual framework is intended to help scholars theorize
and conduct research in a scenario of continuous real-time connection between AI
measurement of people's responses to media and AI creation of content,
with the objective of optimizing and maximizing the processes of influence.
Parasocial interaction and real-time beautification are used as examples to
model the functioning of the IMAGINE process.
Related papers
- Movie Gen: SWOT Analysis of Meta's Generative AI Foundation Model for Transforming Media Generation, Advertising, and Entertainment Industries [0.8463972278020965]
This paper presents a comprehensive SWOT analysis of Meta's Movie Gen, a cutting-edge generative AI foundation model.
We explore its strengths, including high-resolution video generation, precise editing, and seamless audio integration.
We examine the evolving regulatory and ethical considerations surrounding generative AI, focusing on issues like content authenticity, cultural representation, and responsible use.
arXiv Detail & Related papers (2024-12-05T03:01:53Z)
- Prediction with Action: Visual Policy Learning via Joint Denoising Process [14.588908033404474]
PAD is a visual policy learning framework that unifies image Prediction and robot Action.
DiT seamlessly integrates images and robot states, enabling the simultaneous prediction of future images and robot actions.
PAD outperforms previous methods, achieving a significant 26.3% relative improvement on the full Metaworld benchmark.
arXiv Detail & Related papers (2024-11-27T09:54:58Z)
- Maia: A Real-time Non-Verbal Chat for Human-AI Interaction [10.580858171606167]
We propose an alternative to text-based Human-AI interaction.
By leveraging nonverbal visual communication through facial expressions and head and body movements, we aim to enhance engagement.
Our approach is not art-specific and can be adapted to various paintings, animations, and avatars.
arXiv Detail & Related papers (2024-02-09T13:07:22Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z)
- ArK: Augmented Reality with Knowledge Interactive Emergent Ability [115.72679420999535]
We develop an infinite agent that learns to transfer knowledge memory from general foundation models to novel domains.
The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK).
We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes.
arXiv Detail & Related papers (2023-05-01T17:57:01Z)
- Learning Universal Policies via Text-Guided Video Generation [179.6347119101618]
A goal of artificial intelligence is to construct an agent that can solve a wide variety of tasks.
Recent progress in text-guided image synthesis has yielded models with an impressive ability to generate complex novel images.
We investigate whether such tools can be used to construct more general-purpose agents.
arXiv Detail & Related papers (2023-01-31T21:28:13Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Human in the Loop for Machine Creativity [0.0]
We conceptualize existing and future human-in-the-loop (HITL) approaches for creative applications.
We examine and speculate on long term implications for models, interfaces, and machine creativity.
We envision multimodal HITL processes, where texts, visuals, sounds, and other information are coupled together, with automated analysis of humans and environments.
arXiv Detail & Related papers (2021-10-07T15:42:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.