Data Uncertainty-Aware Learning for Multimodal Aspect-based Sentiment Analysis
- URL: http://arxiv.org/abs/2412.01249v1
- Date: Mon, 02 Dec 2024 08:13:40 GMT
- Title: Data Uncertainty-Aware Learning for Multimodal Aspect-based Sentiment Analysis
- Authors: Hao Yang, Zhenyu Zhang, Yanyan Zhao, Bing Qin
- Abstract summary: We propose a novel data uncertainty-aware multimodal aspect-based sentiment analysis approach, UA-MABSA.
UA-MABSA adopts a novel quality assessment strategy that takes into account both the image quality and the aspect-based cross-modal relevance.
Our method achieves state-of-the-art (SOTA) performance on the Twitter-2015 dataset.
- Score: 31.84130191570486
- License:
- Abstract: As a fine-grained task, multimodal aspect-based sentiment analysis (MABSA) mainly focuses on identifying aspect-level sentiment information in text-image pairs. However, we observe that it is difficult to recognize the sentiment of aspects in low-quality samples, such as those with low-resolution images that tend to contain noise. In the real world, data quality usually varies across samples; such noise is called data uncertainty. Previous works on the MABSA task, however, treat samples of different quality with the same importance and ignore the influence of data uncertainty. In this paper, we propose a novel data uncertainty-aware multimodal aspect-based sentiment analysis approach, UA-MABSA, which weights the loss of different samples by data quality and difficulty. UA-MABSA adopts a novel quality assessment strategy that takes into account both image quality and aspect-based cross-modal relevance, enabling the model to pay more attention to high-quality and challenging samples. Extensive experiments show that our method achieves state-of-the-art (SOTA) performance on the Twitter-2015 dataset. Further analysis demonstrates the effectiveness of the quality assessment strategy.
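The core mechanism described in the abstract, weighting each sample's loss by a score combining image quality, aspect-based cross-modal relevance, and difficulty, can be sketched as follows. The combination rule and function names are illustrative assumptions; the abstract only states which factors are considered, not how they are combined.

```python
import math

def quality_weight(image_quality, cross_modal_relevance, difficulty, alpha=1.0):
    """Combine image quality, aspect-image relevance, and sample difficulty
    (all assumed in [0, 1]) into a single loss weight. The multiplicative
    rule and the alpha term are hypothetical choices, not the paper's."""
    quality = image_quality * cross_modal_relevance
    # Emphasize samples that are both high-quality and challenging.
    return quality * (1.0 + alpha * difficulty)

def weighted_cross_entropy(probs, labels, weights):
    """Mean per-sample cross-entropy, scaled by the data-quality weights."""
    losses = [-w * math.log(p[y]) for p, y, w in zip(probs, labels, weights)]
    return sum(losses) / len(losses)
```

Under this sketch, a clean image with a strongly relevant aspect contributes more gradient than a noisy, low-resolution one, which is the qualitative behavior the abstract describes.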
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- Investigating the Quality of DermaMNIST and Fitzpatrick17k Dermatological Image Datasets [17.01966057343415]
Several factors can impact data quality, such as the presence of duplicates, data leakage across train-test partitions, mislabeled images, and the absence of a well-defined test partition.
We conduct meticulous analyses of three popular dermatological image datasets: DermaMNIST, its source HAM10000, and Fitzpatrick17k.
arXiv Detail & Related papers (2024-01-25T20:29:01Z)
- Quality Assurance of A GPT-based Sentiment Analysis System: Adversarial Review Data Generation and Detection [10.567108680774782]
A GPT-based sentiment analysis model is first constructed and studied as the reference in AI quality analysis.
Quality analysis related to data adequacy is implemented, including employing the content-based approach to generate reasonable adversarial review comments.
Experiments based on Amazon.com review data and a fine-tuned GPT model were implemented.
arXiv Detail & Related papers (2023-10-09T00:01:05Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
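The minority-majority mixing described above resembles a mixup-style convex interpolation of feature vectors. A minimal sketch under that assumption; the lambda range and the bias toward the minority class are hypothetical choices, not the paper's exact procedure.

```python
import random

def mix_minority_sample(minority_x, majority_x, lam_low=0.5):
    """Create one synthetic minority sample by convexly mixing a minority
    instance with a majority instance (both plain feature vectors).
    Keeping lambda >= lam_low biases the mixture toward the minority
    class; this heuristic is an assumption for illustration."""
    lam = random.uniform(lam_low, 1.0)
    return [lam * a + (1 - lam) * b for a, b in zip(minority_x, majority_x)]
```

Repeating this over many minority-majority pairs yields the iterative oversampling the summary refers to, with each synthetic point lying on a segment between the two classes.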
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Uncertainty-Aware Multi-View Representation Learning [53.06828186507994]
We devise a novel unsupervised multi-view learning approach, termed Dynamic Uncertainty-Aware Networks (DUA-Nets).
Guided by the uncertainty of data estimated from the generation perspective, intrinsic information from multiple views is integrated to obtain noise-free representations.
Our model achieves superior performance in extensive experiments and shows the robustness to noisy data.
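Uncertainty-guided integration of multiple views can be illustrated with generic inverse-variance (precision) weighting, where less uncertain views contribute more. This is a common pattern offered for intuition, not the exact DUA-Nets objective.

```python
def fuse_views(view_values, view_variances):
    """Inverse-variance weighted fusion of per-view estimates: each view
    is weighted by its precision (1 / variance), so views with lower
    estimated uncertainty dominate the fused representation."""
    precisions = [1.0 / v for v in view_variances]
    total = sum(precisions)
    return sum(x * p for x, p in zip(view_values, precisions)) / total
```

With equal variances this reduces to a plain average; as one view's variance grows, its influence on the fused estimate shrinks toward zero, which mirrors the noise-suppression behavior the summary describes.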
arXiv Detail & Related papers (2022-01-15T07:16:20Z)
- A Decoupled Uncertainty Model for MRI Segmentation Quality Estimation [4.104181348044472]
We propose a novel CNN architecture to decouple sources of uncertainty related to the task and different k-space artefacts.
We show that our uncertainty predictions provide a better estimate of MRI quality from the point of view of the task.
arXiv Detail & Related papers (2021-09-06T12:54:44Z)
- A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality [3.5480752735999417]
Quality control (QC) of medical images is essential to ensure that downstream analyses such as segmentation can be performed successfully.
We aim to automate the process by formulating a probabilistic network that estimates uncertainty through a heteroscedastic noise model.
We show models trained with simulated artefacts provide informative measures of uncertainty on real-world images and we validate our uncertainty predictions on problematic images identified by human-raters.
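Estimating uncertainty through a heteroscedastic noise model typically means predicting an input-dependent log-variance and training with the Gaussian negative log-likelihood. A minimal sketch of that standard loss, assumed here for illustration rather than taken from the paper:

```python
import math

def heteroscedastic_nll(y_true, y_pred, log_var):
    """Gaussian heteroscedastic negative log-likelihood (up to a constant):
    0.5 * exp(-log_var) * (y - mu)^2 + 0.5 * log_var.
    Predicting log-variance keeps the variance positive and lets the
    network report high uncertainty on hard or artefact-laden inputs,
    trading a smaller residual term against the log-variance penalty."""
    return 0.5 * math.exp(-log_var) * (y_true - y_pred) ** 2 + 0.5 * log_var
```

On a noisy input the model can lower its loss by raising `log_var`, which is exactly how such networks learn to flag low-quality images for QC.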
arXiv Detail & Related papers (2020-01-31T16:04:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information (including all listed content) and is not responsible for any consequences of its use.