Research on the Laws of Multimodal Perception and Cognition from a
Cross-cultural Perspective -- Taking Overseas Chinese Gardens as an Example
- URL: http://arxiv.org/abs/2312.17642v1
- Date: Fri, 29 Dec 2023 15:13:23 GMT
- Title: Research on the Laws of Multimodal Perception and Cognition from a
Cross-cultural Perspective -- Taking Overseas Chinese Gardens as an Example
- Authors: Ran Chen, Xueqi Yao, Jing Zhao, Shuhan Xu, Sirui Zhang, Yijun Mao
- Abstract summary: This study aims to explore the complex relationship between perceptual and cognitive interactions in multimodal data analysis.
It is found that evaluation content and images on social media can reflect individuals' concerns and sentiment responses.
- Score: 5.749458457122218
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This study aims to explore the complex relationship between perceptual and
cognitive interactions in multimodal data analysis, with a specific emphasis on
spatial experience design in overseas Chinese gardens. It is found that
evaluation content and images on social media can reflect individuals' concerns
and sentiment responses, providing a rich database for cognitive research that
contains both sentimental and image-based cognitive information. Leveraging
deep learning techniques, we analyze textual and visual data from social media,
thereby unveiling the relationship between people's perceptions and sentiment
cognition within the context of overseas Chinese gardens. In addition, our
study introduces a multi-agent system (MAS) alongside AI agents. Each agent
explores the laws of aesthetic cognition through chat scene simulation combined
with web search. This study goes beyond the traditional approach of translating
perceptions into sentiment scores, extending the research methodology to
analyze texts directly and mine opinion data in greater depth. This study
provides new perspectives for understanding aesthetic
experience and its impact on architecture and landscape design across diverse
cultural contexts, which is an essential contribution to the field of cultural
communication and aesthetic understanding.
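To make the abstract's pipeline concrete, the following is a minimal sketch of scoring social-media review text and tagging the attached photos with pretrained models. The models, record fields, and example data are illustrative assumptions for demonstration, not the authors' actual deep-learning pipeline.

```python
# Illustrative sketch: score review-text sentiment and tag photo content per post.
# Models, field names, and the example post are assumptions, not the paper's setup.
from transformers import pipeline
from PIL import Image

text_sentiment = pipeline("sentiment-analysis")     # generic pretrained stand-in
image_tagger = pipeline("image-classification")     # generic pretrained stand-in

posts = [  # hypothetical records scraped from a review platform
    {"text": "The moon gate and rockery feel wonderfully serene.",
     "image_path": "garden_photo_001.jpg"},
]

for post in posts:
    sentiment = text_sentiment(post["text"])[0]           # label + confidence
    tags = image_tagger(Image.open(post["image_path"]))   # top visual concepts
    print(sentiment["label"], round(sentiment["score"], 3),
          [t["label"] for t in tags[:3]])
```

Pairing text sentiment with visual concept tags makes it possible to cross-tabulate which garden elements co-occur with positive or negative responses, which is the kind of perception-cognition relationship the abstract describes.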
Related papers
- Multilingual Dyadic Interaction Corpus NoXi+J: Toward Understanding Asian-European Non-verbal Cultural Characteristics and their Influences on Engagement [6.984291346424792]
We conduct a multilingual computational analysis of non-verbal features and investigate their role in engagement prediction.
We extracted multimodal non-verbal features, including speech acoustics, facial expressions, backchanneling and gestures.
We analyzed the influence of cultural differences in the input features of LSTM models trained to predict engagement for five language datasets.
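A rough sketch of the kind of LSTM that could predict engagement from per-frame non-verbal features (acoustic, facial, backchannel, and gesture descriptors). Feature dimensions and names here are assumptions, not the paper's exact configuration.

```python
# Hypothetical engagement predictor: sequence of per-frame features -> scalar score.
import torch
import torch.nn as nn

class EngagementLSTM(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar engagement score

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        _, (h, _) = self.lstm(x)           # final hidden state summarises the clip
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

model = EngagementLSTM()
clips = torch.randn(4, 200, 128)           # 4 clips, 200 frames, 128-dim features
print(model(clips).shape)                   # torch.Size([4])
```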
arXiv Detail & Related papers (2024-09-09T18:37:34Z) - Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking [48.21982147529661]
This paper introduces a novel approach for massively multicultural knowledge acquisition.
Our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages.
Our work marks an important step towards deeper understanding and bridging the gaps of cultural disparities in AI.
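As a rough illustration of navigating from a seed Wikipedia article on a cultural topic to its linked pages, the sketch below queries the public MediaWiki API. The seed title and one-hop stopping rule are illustrative; the paper's actual traversal strategy may differ.

```python
# Collect titles linked from a seed Wikipedia article via the MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def linked_titles(title):
    """Return titles of pages linked from the given article (first result page)."""
    params = {"action": "query", "titles": title, "prop": "links",
              "pllimit": "max", "format": "json"}
    pages = requests.get(API, params=params).json()["query"]["pages"]
    return [link["title"] for page in pages.values()
            for link in page.get("links", [])]

seed = "Chinese garden"                 # hypothetical seed topic
frontier = linked_titles(seed)[:20]     # first hop of the link network
print(len(frontier), frontier[:5])
```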
arXiv Detail & Related papers (2024-02-14T18:16:54Z) - Language-based Valence and Arousal Expressions between the United States and China: a Cross-Cultural Examination [6.122854363918857]
This paper explores cultural differences in affective expressions by comparing Twitter/X (geolocated to the US) and Sina Weibo (in Mainland China).
Using the NRC-VAD lexicon to measure valence and arousal, we identify distinct patterns of emotional expression across both platforms.
We uncover significant cross-cultural differences in arousal, with US users displaying higher emotional intensity than Chinese users.
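A minimal sketch of lexicon-based scoring with NRC-VAD: average the valence and arousal of matched words in a post. The file path, file layout, and whitespace tokenizer are assumptions; the study's preprocessing is likely more involved.

```python
# Average NRC-VAD valence/arousal over the lexicon words found in a text.
import csv

def load_vad(path="NRC-VAD-Lexicon.txt"):
    """word -> (valence, arousal), both in [0, 1]; skips header/malformed rows."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) != 4:
                continue
            word, valence, arousal, _dominance = row
            try:
                lexicon[word] = (float(valence), float(arousal))
            except ValueError:   # header line
                continue
    return lexicon

def score(text, lexicon):
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    if not hits:
        return None
    return {"valence": sum(h[0] for h in hits) / len(hits),
            "arousal": sum(h[1] for h in hits) / len(hits)}

vad = load_vad()
print(score("calm peaceful garden with a stunning pavilion", vad))
```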
arXiv Detail & Related papers (2024-01-10T16:32:25Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - Vision+X: A Survey on Multimodal Learning in the Light of Data [64.03266872103835]
Multimodal machine learning that incorporates data from various sources has become an increasingly popular research area.
We analyze the commonness and uniqueness of each data format, spanning vision, audio, text, and motion.
We investigate the existing literature on multimodal learning from both the representation learning and downstream application levels.
arXiv Detail & Related papers (2022-10-05T13:14:57Z) - An Interdisciplinary Perspective on Evaluation and Experimental Design
for Visual Text Analytics: Position Paper [24.586485898038312]
In this paper, we focus on the issues of evaluating visual text analytics approaches.
We identify four key groups of challenges for evaluating visual text analytics approaches.
arXiv Detail & Related papers (2022-09-23T11:47:37Z) - Affective Image Content Analysis: Two Decades Review and New
Perspectives [132.889649256384]
We will comprehensively review the development of affective image content analysis (AICA) in the recent two decades.
We will focus on the state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence.
We discuss some challenges and promising research directions in the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
arXiv Detail & Related papers (2021-06-30T15:20:56Z) - Country Image in COVID-19 Pandemic: A Case Study of China [79.17323278601869]
Country image has a profound influence on international relations and economic development.
In the worldwide outbreak of COVID-19, countries and their people display different reactions.
In this study, we take China as a specific and typical case and investigate its image with aspect-based sentiment analysis on a large-scale Twitter dataset.
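A simplified illustration of the aspect-based idea: score sentiment only on the sentences of a tweet that mention a given aspect term. The aspect list, example tweet, and generic classifier are assumptions and a deliberate simplification of the study's aspect-based sentiment model.

```python
# Pair aspect mentions with a generic sentence-level sentiment classifier.
from transformers import pipeline

classify = pipeline("sentiment-analysis")
aspects = ["economy", "culture", "diplomacy"]   # hypothetical image aspects

tweets = [
    "The culture is fascinating, but the diplomacy this week was tense.",
]

for tweet in tweets:
    for sentence in tweet.split("."):
        for aspect in aspects:
            if aspect in sentence.lower():
                result = classify(sentence.strip())[0]
                print(aspect, result["label"], round(result["score"], 3))
```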
arXiv Detail & Related papers (2020-09-12T15:54:51Z) - Visual Sentiment Analysis from Disaster Images in Social Media [11.075683976162766]
This article focuses on visual sentiment analysis in a societal important domain, namely disaster analysis in social media.
We propose a deep visual sentiment analyzer for disaster related images, covering different aspects of visual sentiment analysis.
We believe the proposed system can contribute toward more livable communities by helping different stakeholders.
arXiv Detail & Related papers (2020-09-04T11:29:52Z) - Survey on Visual Sentiment Analysis [87.20223213370004]
This paper reviews pertinent publications and tries to present an exhaustive overview of the field of Visual Sentiment Analysis.
The paper also describes principles of design of general Visual Sentiment Analysis systems from three main points of view.
A formalization of the problem is discussed, considering different levels of granularity, as well as the components that can affect the sentiment toward an image in different ways.
arXiv Detail & Related papers (2020-04-24T10:15:22Z) - Deriving Emotions and Sentiments from Visual Content: A Disaster
Analysis Use Case [10.161936647987515]
Social networks and users' tendency to share their feelings in text, visual, and audio content have opened new opportunities and challenges in sentiment analysis.
This article introduces visual sentiment analysis and contrasts it with textual sentiment analysis with emphasis on the opportunities and challenges in this nascent research area.
We propose a deep visual sentiment analyzer for disaster-related images as a use-case, covering different aspects of visual sentiment analysis starting from data collection, annotation, model selection, implementation and evaluations.
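A hedged sketch of what such a deep visual sentiment analyzer might look like: a pretrained CNN backbone with its classifier replaced by a sentiment head and fine-tuned on labelled disaster images. The backbone, label set, and training details are assumptions, not the authors' implementation.

```python
# One illustrative training step: ResNet-50 backbone + sentiment classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_SENTIMENTS = 3                       # e.g., negative / neutral / positive (assumed)

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SENTIMENTS)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)      # dummy batch standing in for disaster photos
labels = torch.randint(0, NUM_SENTIMENTS, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```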
arXiv Detail & Related papers (2020-02-03T08:48:52Z)