Structuring User-Generated Content on Social Media with Multimodal
Aspect-Based Sentiment Analysis
- URL: http://arxiv.org/abs/2210.15377v1
- Date: Thu, 27 Oct 2022 12:38:10 GMT
- Title: Structuring User-Generated Content on Social Media with Multimodal
Aspect-Based Sentiment Analysis
- Authors: Miriam Anschütz, Tobias Eder, Georg Groh
- Abstract summary: This paper shows to what extent machine learning can analyze and structure these databases.
An automated data analysis pipeline is deployed to provide insights into user-generated content for researchers in other domains.
- Score: 2.023920009396818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People post their opinions and experiences on social media, yielding rich
databases of end users' sentiments. This paper shows to what extent machine
learning can analyze and structure these databases. An automated data analysis
pipeline is deployed to provide insights into user-generated content for
researchers in other domains. First, the domain expert can select an image and
a term of interest. Then, the pipeline uses image retrieval to find all images
showing similar content and applies aspect-based sentiment analysis to outline
users' opinions about the selected term. As part of an interdisciplinary
project between architecture and computer science researchers, an empirical
study of Hamburg's Elbphilharmonie was conducted on 300 thousand posts from the
platform Flickr with the hashtag 'hamburg'. Image retrieval methods generated a
subset of slightly more than 1.5 thousand images displaying the
Elbphilharmonie. We found that these posts mainly convey a neutral or positive
sentiment towards it. With this pipeline, we suggest a new big data analysis
method that offers new insights into end users' opinions, e.g., for architecture
domain experts.
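A minimal sketch of such a two-stage pipeline, assuming CLIP embeddings (via sentence-transformers) for the image retrieval step and a plain sentiment classifier applied to captions that mention the selected term as a stand-in for full aspect-based sentiment analysis; the model names and the similarity threshold are illustrative, not the authors' exact setup.

from PIL import Image
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

retriever = SentenceTransformer("clip-ViT-B-32")  # joint image/text embedder
sentiment = pipeline("sentiment-analysis")        # default sentiment model

def analyze(query_image_path, image_paths, captions, term, threshold=0.8):
    # Step 1: retrieve all images similar to the selected query image.
    query_emb = retriever.encode(Image.open(query_image_path))
    corpus_embs = retriever.encode([Image.open(p) for p in image_paths])
    sims = util.cos_sim(query_emb, corpus_embs)[0]
    hits = [i for i, s in enumerate(sims) if float(s) >= threshold]

    # Step 2: score the sentiment of retrieved posts mentioning the term.
    texts = [captions[i] for i in hits if term.lower() in captions[i].lower()]
    return [(t, sentiment(t)[0]["label"]) for t in texts]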
Related papers
- SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval [64.03631654052445]
Current benchmarks for evaluating MMIR performance in image-text pairing within the scientific domain show a notable gap.
We develop a specialised scientific MMIR benchmark by leveraging open-access paper collections.
This benchmark comprises 530K meticulously curated image-text pairs, extracted from figures and tables with detailed captions in scientific documents.
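Benchmarks of this kind are usually scored with recall@k over an image-text similarity matrix. A minimal sketch of that standard metric, assuming pair (i, i) is the correct match; this is the common definition, not necessarily SciMMIR's exact protocol.

import numpy as np

def recall_at_k(sim, k=10):
    # sim[i, j] = similarity of text i to image j; (i, i) is the true pair.
    ranks = (-sim).argsort(axis=1)  # best-matching images first
    hits = (ranks[:, :k] == np.arange(len(sim))[:, None]).any(axis=1)
    return hits.mean()

sim = np.random.rand(100, 100)      # stand-in similarity matrix
print(f"recall@10 = {recall_at_k(sim, k=10):.3f}")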
arXiv Detail & Related papers (2024-01-24T14:23:12Z)
- Automatic Image Content Extraction: Operationalizing Machine Learning in Humanistic Photographic Studies of Large Visual Archives [81.88384269259706]
We introduce the Automatic Image Content Extraction framework for machine learning-based search and analysis of large image archives.
The proposed framework can be applied in several domains in humanities and social sciences.
arXiv Detail & Related papers (2022-04-05T12:19:24Z)
- There is a Time and Place for Reasoning Beyond the Image [63.96498435923328]
Images often carry more significance than their pixels alone, as we can infer, associate, and reason with contextual information from other sources to establish a more complete picture.
We introduce TARA: a dataset of 16k images with associated news, time, and location automatically extracted from the New York Times (NYT), plus an additional 61k examples as distant supervision from WIT.
We show that a 70% gap exists between a state-of-the-art joint model and human performance; our proposed model, which uses segment-wise reasoning, narrows it slightly, motivating higher-level vision-language joint models.
arXiv Detail & Related papers (2022-03-01T21:52:08Z)
- Using Social Media Images for Building Function Classification [12.99941371793082]
This study proposes a filtering pipeline to yield high-quality, ground-level imagery from large social media image datasets.
We analyze our method on a culturally diverse social media dataset from Flickr with more than 28 million images from 42 cities around the world.
Fine-tuned state-of-the-art architectures yield F1-scores of up to 0.51 on the filtered images.
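A minimal sketch of this kind of filtering step, assuming a generic pretrained image classifier; the label allow-list and confidence cutoff are illustrative assumptions, not the paper's actual filters.

from transformers import pipeline

classifier = pipeline("image-classification")  # default ImageNet-trained model

BUILDING_LABELS = {"church", "library", "cinema", "restaurant", "palace"}

def keep(image_path, min_score=0.5):
    # Keep an image only if the classifier confidently sees a building type.
    top = classifier(image_path, top_k=1)[0]
    return top["label"] in BUILDING_LABELS and top["score"] >= min_score

filtered = [p for p in ["img1.jpg", "img2.jpg"] if keep(p)]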
arXiv Detail & Related papers (2022-02-15T11:05:10Z)
- An AutoML-based Approach to Multimodal Image Sentiment Analysis [1.0499611180329804]
We propose a method that combines individual textual and image sentiment analyses into a final fused classification based on AutoML.
Our method achieved state-of-the-art performance on the B-T4SA dataset, with 95.19% accuracy.
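A minimal sketch of late fusion in this spirit, with a fixed weighted average standing in for the AutoML-searched fusion model; the weights and the three-class label set are illustrative assumptions.

import numpy as np

LABELS = ["negative", "neutral", "positive"]

def fuse(text_probs, image_probs, w_text=0.6):
    # Each input holds per-class probabilities in LABELS order.
    combined = w_text * np.asarray(text_probs) + (1 - w_text) * np.asarray(image_probs)
    return LABELS[int(combined.argmax())]

print(fuse([0.1, 0.2, 0.7], [0.2, 0.5, 0.3]))  # -> "positive"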
arXiv Detail & Related papers (2021-02-16T11:28:50Z)
- A Decade Survey of Content Based Image Retrieval using Deep Learning [13.778851745408133]
This paper presents a comprehensive survey of deep learning based developments in the past decade for content based image retrieval.
The similarity between the representative features of the query image and dataset images is used to rank the images for retrieval.
Over the past decade, deep learning has emerged as the dominant alternative to hand-designed feature engineering.
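A minimal sketch of that ranking step, assuming feature vectors already extracted by any pretrained backbone; cosine similarity is one common choice of similarity measure.

import numpy as np

def rank_by_similarity(query_feat, dataset_feats, top_k=5):
    # Normalize so the dot product equals cosine similarity.
    q = query_feat / np.linalg.norm(query_feat)
    d = dataset_feats / np.linalg.norm(dataset_feats, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:top_k]  # indices of the best matches first

feats = np.random.rand(1000, 512)           # stand-in deep features
print(rank_by_similarity(feats[0], feats))  # index 0 ranks itself first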
arXiv Detail & Related papers (2020-11-23T02:12:30Z)
- Visual Sentiment Analysis from Disaster Images in Social Media [11.075683976162766]
This article focuses on visual sentiment analysis in a societally important domain, namely disaster analysis in social media.
We propose a deep visual sentiment analyzer for disaster related images, covering different aspects of visual sentiment analysis.
We believe the proposed system can contribute toward more livable communities by helping different stakeholders.
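A minimal sketch of such an analyzer, assuming a pretrained CNN backbone whose final layer is replaced by a sentiment head; the three-class label set is an illustrative assumption, not the paper's exact annotation scheme.

import torch
from torchvision import models

SENTIMENTS = ["negative", "neutral", "positive"]

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(SENTIMENTS))  # new head
model.eval()

with torch.no_grad():                            # untrained head: demo only
    logits = model(torch.randn(1, 3, 224, 224))  # stand-in image batch
    print(SENTIMENTS[logits.argmax(dim=1).item()])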
arXiv Detail & Related papers (2020-09-04T11:29:52Z)
- On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances and Million-AID [57.71601467271486]
This article discusses the problem of how to efficiently prepare a suitable benchmark dataset for remote sensing (RS) image interpretation.
We first analyze the current challenges of developing intelligent algorithms for RS image interpretation with bibliometric investigations.
Following the presented guidance, we also provide an example of building an RS image dataset, i.e., Million-AID, a new large-scale benchmark dataset.
arXiv Detail & Related papers (2020-06-22T17:59:00Z)
- From ImageNet to Image Classification: Contextualizing Progress on Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z)
- Survey on Visual Sentiment Analysis [87.20223213370004]
This paper reviews pertinent publications and tries to present an exhaustive overview of the field of Visual Sentiment Analysis.
The paper also describes the design principles of general Visual Sentiment Analysis systems from three main points of view.
A formalization of the problem is discussed, considering different levels of granularity, as well as the components that can affect the sentiment toward an image in different ways.
arXiv Detail & Related papers (2020-04-24T10:15:22Z)
- Deriving Emotions and Sentiments from Visual Content: A Disaster Analysis Use Case [10.161936647987515]
Social networks and users' tendency to share their feelings in text, visual, and audio content have opened new opportunities and challenges in sentiment analysis.
This article introduces visual sentiment analysis and contrasts it with textual sentiment analysis with emphasis on the opportunities and challenges in this nascent research area.
We propose a deep visual sentiment analyzer for disaster-related images as a use case, covering different aspects of visual sentiment analysis from data collection and annotation to model selection, implementation, and evaluation.
arXiv Detail & Related papers (2020-02-03T08:48:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.