MACO: A Modality Adversarial and Contrastive Framework for
Modality-missing Multi-modal Knowledge Graph Completion
- URL: http://arxiv.org/abs/2308.06696v1
- Date: Sun, 13 Aug 2023 06:29:38 GMT
- Authors: Yichi Zhang, Zhuo Chen, Wen Zhang
- Abstract summary: We propose a modality adversarial and contrastive framework (MACO) to solve the modality-missing problem in MMKGC.
MACO trains a generator and discriminator adversarially to generate missing modality features that can be incorporated into the MMKGC model.
- Score: 18.188971531961663
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent years have seen significant advancements in multi-modal knowledge
graph completion (MMKGC). MMKGC enhances knowledge graph completion (KGC) by
integrating multi-modal entity information, thereby facilitating the discovery
of unobserved triples in large-scale knowledge graphs (KGs). Nevertheless,
existing methods emphasize the design of elegant KGC models to facilitate
modality interaction, neglecting the real-life problem of missing modalities in
KGs. The missing modality information impedes modal interaction, consequently
undermining the model's performance. In this paper, we propose a modality
adversarial and contrastive framework (MACO) to solve the modality-missing
problem in MMKGC. MACO trains a generator and discriminator adversarially to
generate missing modality features that can be incorporated into the MMKGC
model. Meanwhile, we design a cross-modal contrastive loss to improve the
performance of the generator. Experiments on public benchmarks, together with further
analyses, demonstrate that MACO achieves state-of-the-art results and
can serve as a versatile framework to bolster various MMKGC models. Our code and
benchmark data are available at https://github.com/zjukg/MACO.
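The abstract describes two core ingredients: adversarial training between a feature generator and a discriminator, and a cross-modal contrastive objective that pulls each generated feature toward the real feature of the same entity. The sketch below is a minimal, illustrative PyTorch rendition of that idea, assuming non-saturating GAN losses and an InfoNCE-style contrastive term; the module names, dimensions, and hyperparameters are assumptions made for illustration, not the authors' implementation (see the linked repository for the official code).
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps an available-modality embedding (e.g., structural) to a
    synthetic feature for a missing modality (e.g., visual)."""
    def __init__(self, in_dim=200, out_dim=512, hidden=400):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a modality feature is real or generated."""
    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat):
        return self.net(feat).squeeze(-1)  # real/fake logits

def contrastive_loss(gen_feat, real_feat, temperature=0.1):
    """InfoNCE-style cross-modal loss: each generated feature should be
    most similar to the real feature of the same entity in the batch."""
    gen = F.normalize(gen_feat, dim=-1)
    real = F.normalize(real_feat, dim=-1)
    logits = gen @ real.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(gen.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, targets)

# One adversarial training step. The batches below are random
# placeholders standing in for structural embeddings and real image
# features of entities whose visual modality is present.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
struct_emb = torch.randn(32, 200)
real_img = torch.randn(32, 512)

# Discriminator step: push real features toward 1, generated toward 0.
fake = G(struct_emb).detach()
d_loss = (F.binary_cross_entropy_with_logits(D(real_img), torch.ones(32))
          + F.binary_cross_entropy_with_logits(D(fake), torch.zeros(32)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator and align generated features
# with the real features via the contrastive term.
fake = G(struct_emb)
g_loss = (F.binary_cross_entropy_with_logits(D(fake), torch.ones(32))
          + contrastive_loss(fake, real_img))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```
In the full framework, the generated features would then be fed into the downstream MMKGC scoring model in place of the missing modality embeddings.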
Related papers
- Transformer-Based Multimodal Knowledge Graph Completion with Link-Aware Contexts [3.531533402602335]
Multimodal knowledge graph completion (MMKGC) aims to predict missing links in multimodal knowledge graphs (MMKGs).
Existing MMKGC approaches primarily extend traditional knowledge graph embedding (KGE) models.
We propose a novel approach that integrates Transformer-based KGE models with cross-modal context generated by pre-trained VLMs.
arXiv Detail & Related papers (2025-01-26T22:23:14Z)
- Tokenization, Fusion, and Augmentation: Towards Fine-grained Multi-modal Entity Representation [51.80447197290866]
Multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge from given knowledge graphs.
Existing MMKGC methods usually extract multi-modal features with pre-trained models.
We introduce a novel framework MyGO to tokenize, fuse, and augment the fine-grained multi-modal representations of entities.
arXiv Detail & Related papers (2024-04-15T05:40:41Z)
- NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose a comprehensive framework NativE to achieve MMKGC in the wild.
NativE proposes a relation-guided dual adaptive fusion module that enables adaptive fusion for any modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
arXiv Detail & Related papers (2024-03-28T03:04:00Z)
- Noise-powered Multi-modal Knowledge Graph Representation Framework [52.95468915728721]
The rise of multi-modal pre-training highlights the need for a unified multi-modal knowledge graph representation learning framework.
We propose a novel SNAG method that utilizes a Transformer-based architecture equipped with modality-level noise masking.
Our approach achieves SOTA performance across a total of ten datasets, demonstrating its versatility.
arXiv Detail & Related papers (2024-03-11T15:48:43Z)
- Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion [40.86196588992357]
Multi-modal knowledge graph completion (MMKGC) aims to predict the missing triples in the multi-modal knowledge graphs.
We propose Adaptive Multi-modal Fusion and Modality Adversarial Training (AdaMF-MAT) to unleash the power of imbalanced modality information.
Our approach is a co-design of the MMKGC model and training strategy that outperforms 19 recent MMKGC methods.
arXiv Detail & Related papers (2024-02-22T05:48:03Z)
- Continual Multimodal Knowledge Graph Construction [62.77243705682985]
Current Multimodal Knowledge Graph Construction (MKGC) models struggle with the real-world dynamism of continuously emerging entities and relations.
This study introduces benchmarks aimed at fostering the development of the continual MKGC domain.
We introduce the MSPT framework, designed to surmount the shortcomings of existing MKGC approaches during multimedia data processing.
arXiv Detail & Related papers (2023-05-15T14:58:28Z)
- VERITE: A Robust Benchmark for Multimodal Misinformation Detection Accounting for Unimodal Bias [17.107961913114778]
Multimodal misinformation is a growing problem on social media platforms.
In this study, we investigate and identify the presence of unimodal bias in widely-used MMD benchmarks.
We introduce a new method -- termed Crossmodal HArd Synthetic MisAlignment (CHASMA) -- for generating realistic synthetic training data.
arXiv Detail & Related papers (2023-04-27T12:28:29Z)
- DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention Network [48.38954651216983]
We propose a novel Disentangled Knowledge Graph Attention Network (DisenKGAT) for knowledge graphs.
DisenKGAT uses both micro-disentanglement and macro-disentanglement to exploit the representations behind knowledge graphs.
Our work has strong robustness and flexibility to adapt to various score functions.
arXiv Detail & Related papers (2021-08-22T04:10:35Z)
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
The Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input due to the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.