MESED: A Multi-modal Entity Set Expansion Dataset with Fine-grained
Semantic Classes and Hard Negative Entities
- URL: http://arxiv.org/abs/2307.14878v1
- Date: Thu, 27 Jul 2023 14:09:59 GMT
- Authors: Yangning Li, Tingwei Lu, Yinghui Li, Tianyu Yu, Shulin Huang, Hai-Tao
Zheng, Rui Zhang, Jun Yuan
- Abstract summary: We propose Multi-modal Entity Set Expansion (MESE), where models integrate information from multiple modalities to represent entities.
A powerful multi-modal model MultiExpan is proposed which is pre-trained on four multimodal pre-training tasks.
The MESED dataset is the first multi-modal dataset for ESE with large-scale and elaborate manual calibration.
- Score: 25.059177235004952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Entity Set Expansion (ESE) task aims to expand a handful of seed entities
with new entities belonging to the same semantic class. Conventional ESE
methods rely on a single modality (i.e., the literal modality) and struggle to
handle complex real-world entities such as: (1) negative entities with
fine-grained semantic differences, (2) synonymous entities, (3) polysemous
entities, and (4) long-tailed entities. These challenges prompt us to propose
Multi-modal Entity Set Expansion (MESE), where models integrate information
from multiple modalities to represent entities. Intuitively, the benefits of
multi-modal information for ESE are threefold: (1) Different modalities can
provide complementary information. (2) Multi-modal information provides a
unified signal via common visual properties for the same semantic class or
entity. (3) Multi-modal information offers a robust alignment signal for
synonymous entities. To assess model performance on MESE and facilitate
further research, we construct MESED, the first multi-modal dataset for ESE
with large-scale and elaborate manual calibration. We also propose MultiExpan,
a powerful multi-modal model pre-trained on four multimodal pre-training
tasks. Extensive experiments and analyses on MESED demonstrate the high
quality of the dataset and the effectiveness of MultiExpan, and point out
directions for future research.
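As a minimal illustration of the ESE setting only (not the MESED dataset or the MultiExpan method itself), the expansion step can be sketched as nearest-neighbor ranking of candidate entities against the centroid of the seed set's embeddings; the function name, entity names, and toy vectors below are all hypothetical, and in MESE the embeddings would come from a multi-modal (text + image) encoder:

```python
import numpy as np

def expand_entity_set(seed_vecs, candidate_vecs, candidate_names, top_k=2):
    """Rank candidates by cosine similarity to the seed-set centroid."""
    centroid = np.mean(seed_vecs, axis=0)
    centroid /= np.linalg.norm(centroid)
    cand = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = cand @ centroid                 # cosine similarity to the centroid
    order = np.argsort(-scores)[:top_k]      # highest-scoring candidates first
    return [candidate_names[i] for i in order]

# Toy fused embeddings; seeds might be "apple" and "banana" (fruits).
seeds = np.array([[1.0, 0.1], [0.9, 0.2]])
cands = np.array([[0.95, 0.15],   # "cherry"  -- close to the seed centroid
                  [0.10, 1.00]])  # "bicycle" -- far from it
print(expand_entity_set(seeds, cands, ["cherry", "bicycle"], top_k=1))
# -> ['cherry']
```

This centroid-ranking view is a common mono-modal baseline; the abstract's point is that fusing visual and textual signals into the entity vectors helps separate the hard negative, synonymous, and long-tailed cases where literal features alone are ambiguous.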
Related papers
- Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts [54.529880848937104]
We develop a unified MLLM with the MoE architecture, named Uni-MoE, that can handle a wide array of modalities.
Specifically, it features modality-specific encoders with connectors for a unified multimodal representation.
We evaluate the instruction-tuned Uni-MoE on a comprehensive set of multimodal datasets.
(2024-05-18)
- Leveraging Intra-modal and Inter-modal Interaction for Multi-Modal Entity Alignment [27.28214706269035]
Multi-modal entity alignment (MMEA) aims to identify equivalent entity pairs across different multi-modal knowledge graphs (MMKGs)
In this paper, we propose a Multi-Grained Interaction framework for Multi-Modal Entity alignment.
(2024-04-19)
- NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose a comprehensive framework NativE to achieve MMKGC in the wild.
NativE proposes a relation-guided dual adaptive fusion module that enables adaptive fusion for any modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
(2024-03-28)
- Unified Multi-modal Unsupervised Representation Learning for Skeleton-based Action Understanding [62.70450216120704]
Unsupervised pre-training has shown great success in skeleton-based action understanding.
We propose a Unified Multimodal Unsupervised Representation Learning framework, called UmURL.
UmURL exploits an efficient early-fusion strategy to jointly encode the multi-modal features in a single-stream manner.
(2023-11-06)
- Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
Multimodal entity linking task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
(2023-07-19)
- Factorized Contrastive Learning: Going Beyond Multi-view Redundancy [116.25342513407173]
This paper proposes FactorCL, a new multimodal representation learning method to go beyond multi-view redundancy.
On large-scale real-world datasets, FactorCL captures both shared and unique information and achieves state-of-the-art results.
(2023-06-08)
- Multi-modal Contrastive Representation Learning for Entity Alignment [57.92705405276161]
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs.
We propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model.
In particular, MCLEA first learns multiple individual representations from multiple modalities, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions.
(2022-09-02)
- Multimodal Representations Learning Based on Mutual Information Maximization and Minimization and Identity Embedding for Multimodal Sentiment Analysis [33.73730195500633]
We propose a multimodal representation model based on Mutual information Maximization and Identity Embedding.
Experimental results on two public datasets demonstrate the effectiveness of the proposed model.
(2022-01-10)
- Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis [11.368438990334397]
We develop a self-supervised learning strategy to acquire independent unimodal supervisions.
We conduct extensive experiments on three public multimodal baseline datasets.
Our method achieves performance comparable to human-annotated unimodal labels.
(2021-02-09)
- Adversarial Multimodal Representation Learning for Click-Through Rate Prediction [16.10640369157054]
We propose a novel Multimodal Adversarial Representation Network (MARN) for the Click-Through Rate (CTR) prediction task.
A multimodal attention network first calculates the weights of multiple modalities for each item according to its modality-specific features.
A multimodal adversarial network learns modality-invariant representations, where a double-discriminators strategy is introduced.
(2020-03-07)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.