Continual Multimodal Knowledge Graph Construction
- URL: http://arxiv.org/abs/2305.08698v3
- Date: Sun, 26 May 2024 16:29:05 GMT
- Title: Continual Multimodal Knowledge Graph Construction
- Authors: Xiang Chen, Jintian Zhang, Xiaohan Wang, Ningyu Zhang, Tongtong Wu, Yuxiang Wang, Yongheng Wang, Huajun Chen
- Abstract summary: Current Multimodal Knowledge Graph Construction (MKGC) models struggle with the real-world dynamism of continuously emerging entities and relations.
This study introduces benchmarks aimed at fostering the development of the continual MKGC domain.
We introduce the MSPT framework, designed to overcome the shortcomings of existing MKGC approaches when processing multimedia data.
- Score: 62.77243705682985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current Multimodal Knowledge Graph Construction (MKGC) models struggle with the real-world dynamism of continuously emerging entities and relations, often succumbing to catastrophic forgetting, the loss of previously acquired knowledge. This study introduces benchmarks aimed at fostering the development of the continual MKGC domain. We further introduce the MSPT framework, designed to overcome the shortcomings of existing MKGC approaches when processing multimedia data. MSPT harmonizes the retention of learned knowledge (stability) with the integration of new data (plasticity), outperforming current continual learning and multimodal methods. Our results confirm MSPT's superior performance in evolving knowledge environments, showcasing its capacity to balance stability and plasticity.
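The abstract does not spell out MSPT's training objective. As a hypothetical sketch of a stability-plasticity trade-off (the loss mix and snapshot distillation are assumptions, not the paper's method), a continual learner can combine a task loss on newly arriving data with distillation toward a frozen copy of the previous model:

```python
import torch
import torch.nn.functional as F

def stability_plasticity_loss(model, old_model, inputs, labels, alpha=0.5):
    """Hypothetical sketch, not MSPT's actual objective: mix a task loss
    on new data (plasticity) with distillation toward a frozen snapshot
    of the previous model (stability)."""
    logits = model(inputs)
    task_loss = F.cross_entropy(logits, labels)          # plasticity
    with torch.no_grad():
        old_logits = old_model(inputs)                   # frozen snapshot
    distill_loss = F.kl_div(                             # stability
        F.log_softmax(logits, dim=-1),
        F.softmax(old_logits, dim=-1),
        reduction="batchmean",
    )
    return (1 - alpha) * task_loss + alpha * distill_loss
```

Raising `alpha` preserves more of the old model's behavior (stability) at the cost of adapting to new entities and relations (plasticity).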
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of machine learning models in real-world applications is the ability to continually learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
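For the variational continual learning entry above, a sketch of an objective that regularizes the current posterior q_t toward the last k posterior estimates (an illustration of the idea, not the paper's exact objective; the weights lambda_j are assumptions) could read:

```latex
% Sketch: fit the current task data D_t while staying close to the
% last k posterior estimates q_{t-k}, ..., q_{t-1} (weights assumed).
\mathcal{L}_t(q_t) = \mathbb{E}_{\theta \sim q_t}\!\left[\log p(\mathcal{D}_t \mid \theta)\right]
  - \sum_{j=t-k}^{t-1} \lambda_j \, \mathrm{KL}\!\left(q_t \,\|\, q_j\right)
```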
- Multi-Epoch learning with Data Augmentation for Deep Click-Through Rate Prediction [53.88231294380083]
We introduce a novel Multi-Epoch learning with Data Augmentation (MEDA) framework, suitable for both non-continual and continual learning scenarios.
MEDA minimizes overfitting by reducing the dependency of the embedding layer on subsequent training data.
Our findings confirm that pre-trained layers can adapt to new embedding spaces, enhancing performance without overfitting.
arXiv Detail & Related papers (2024-06-27T04:00:15Z)
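For the MEDA entry above, a minimal sketch of reducing the embedding layer's dependency on training data (the re-initialization scheme and the `train_one_epoch` hook are assumptions): the embedding table is reset between epochs while the pre-trained dense layers carry over.

```python
import torch.nn as nn

def multi_epoch_train(model, data_loader, num_epochs, train_one_epoch):
    """Hypothetical MEDA-style sketch: re-initialize the embedding table
    between epochs so the downstream dense layers stop depending on any
    single embedding configuration; `model.embedding` is an assumed name."""
    for epoch in range(num_epochs):
        if epoch > 0:
            nn.init.normal_(model.embedding.weight, std=0.01)  # fresh embeddings
        train_one_epoch(model, data_loader)  # user-supplied training hook
    return model
```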
- Neuro-Inspired Information-Theoretic Hierarchical Perception for Multimodal Learning [16.8379583872582]
We develop the Information-Theoretic Hierarchical Perception (ITHP) model, which utilizes the concept of information bottleneck.
We show that ITHP consistently distills crucial information in multimodal learning scenarios, outperforming state-of-the-art benchmarks.
arXiv Detail & Related papers (2024-04-15T01:34:44Z)
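For the ITHP entry above, the classic information bottleneck objective it builds on (the standard formulation, not necessarily the paper's exact variant) compresses the input X into a latent state Z while keeping Z predictive of the target Y:

```latex
% Standard information bottleneck: compress X into Z (first term)
% while keeping Z informative about Y (second term), traded off by beta.
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)
```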
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual learning aims to overcome the catastrophic forgetting of former knowledge when acquiring new knowledge.
This paper presents a comprehensive survey of the latest advancements in pre-trained model (PTM)-based continual learning.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- MACO: A Modality Adversarial and Contrastive Framework for Modality-missing Multi-modal Knowledge Graph Completion [18.188971531961663]
We propose a modality adversarial and contrastive framework (MACO) to solve the modality-missing problem in multi-modal knowledge graph completion (MMKGC).
MACO trains a generator and discriminator adversarially to generate missing modality features that can be incorporated into the MMKGC model.
arXiv Detail & Related papers (2023-08-13T06:29:38Z)
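For the MACO entry above, a hypothetical sketch of one adversarial training step (module names, optimizers, and shapes are assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def maco_style_step(generator, discriminator, g_opt, d_opt, ent_emb, real_feat):
    """Hypothetical sketch: the generator maps entity embeddings to fake
    visual features; the discriminator learns to separate them from real
    features, so generated features become plausible stand-ins."""
    ones = torch.ones(real_feat.size(0), 1)
    zeros = torch.zeros(real_feat.size(0), 1)
    # Discriminator step: real features -> 1, generated features -> 0.
    fake_feat = generator(ent_emb).detach()
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(real_feat), ones)
              + F.binary_cross_entropy_with_logits(discriminator(fake_feat), zeros))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator step: try to make generated features look real.
    g_loss = F.binary_cross_entropy_with_logits(discriminator(generator(ent_emb)), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```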
- VERITE: A Robust Benchmark for Multimodal Misinformation Detection Accounting for Unimodal Bias [17.107961913114778]
Multimodal misinformation is a growing problem on social media platforms.
In this study, we investigate and identify the presence of unimodal bias in widely-used MMD benchmarks.
We introduce a new method -- termed Crossmodal HArd Synthetic MisAlignment (CHASMA) -- for generating realistic synthetic training data.
arXiv Detail & Related papers (2023-04-27T12:28:29Z)
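For the CHASMA entry above, a hypothetical sketch of hard cross-modal misalignment (the `sim` similarity function and the labeling convention are assumptions): each image is paired with its most similar non-matching caption, so the synthetic misinformation stays plausible rather than random.

```python
def chasma_style_pairs(images, captions, sim):
    """Hypothetical sketch: build misaligned (image, caption) training
    pairs by choosing, for each image, the hardest NON-matching caption;
    `sim` is an assumed cross-modal similarity function."""
    misaligned = []
    for i, img in enumerate(images):
        scores = [(j, sim(img, cap)) for j, cap in enumerate(captions) if j != i]
        j_hard = max(scores, key=lambda t: t[1])[0]   # hardest negative
        misaligned.append((img, captions[j_hard], 0))  # 0 = misaligned label
    return misaligned
```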
- New Insights for the Stability-Plasticity Dilemma in Online Continual Learning [21.664470275289407]
We propose an online continual learning framework named the multi-scale feature adaptation network (MuFAN).
MuFAN outperforms other state-of-the-art continual learning methods on the SVHN, CIFAR100, miniImageNet, and CORe50 datasets.
arXiv Detail & Related papers (2023-02-17T07:43:59Z)
- Online Continual Learning via the Meta-learning Update with Multi-scale Knowledge Distillation and Data Augmentation [4.109784267309124]
Continual learning aims to rapidly and continually learn the current task from a sequence of tasks.
A common limitation of such approaches is the data imbalance between previous and current tasks.
We propose a novel framework called Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation.
arXiv Detail & Related papers (2022-09-12T10:03:53Z)
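For the multi-scale knowledge distillation entry above, a minimal sketch (the feature lists, per-scale weights, and MSE matching are assumptions): intermediate features at several scales are matched between the student and a teacher, such as the pre-update model.

```python
import torch.nn.functional as F

def multiscale_kd_loss(student_feats, teacher_feats, weights):
    """Hypothetical sketch: distill intermediate features at several scales
    from a teacher (e.g., the pre-update model) into the student, instead
    of matching only the final logits."""
    loss = 0.0
    for w, s, t in zip(weights, student_feats, teacher_feats):
        loss = loss + w * F.mse_loss(s, t.detach())  # match features per scale
    return loss
```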
- A Unified Continuous Learning Framework for Multi-modal Knowledge Discovery and Pre-training [73.7507857547549]
We propose to unify knowledge discovery and multi-modal pre-training in a continuous learning framework.
For knowledge discovery, a pre-trained model is used to identify cross-modal links on a graph.
For model pre-training, the knowledge graph is used as the external knowledge to guide the model updating.
arXiv Detail & Related papers (2022-06-11T16:05:06Z)
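For the unified knowledge discovery and pre-training entry above, a hypothetical sketch of the alternating loop (the `link_score`, `add_edges`, and `pretrain` APIs and the 0.9 threshold are assumptions, not the paper's interface):

```python
def discover_and_pretrain(model, corpus, kg, rounds, threshold=0.9):
    """Hypothetical sketch of the alternating loop: the current model
    proposes cross-modal links to grow the graph, and the grown graph
    then guides the next round of pre-training."""
    for _ in range(rounds):
        new_links = [(img, txt) for img, txt in corpus
                     if model.link_score(img, txt) > threshold]
        kg.add_edges(new_links)                         # knowledge discovery
        model.pretrain(corpus, external_knowledge=kg)   # knowledge-guided update
    return model, kg
```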
- DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention Network [48.38954651216983]
We propose a novel Disentangled Knowledge Graph Attention Network (DisenKGAT) for knowledge graphs.
DisenKGAT uses both micro-disentanglement and macro-disentanglement to exploit the representations behind knowledge graphs.
Our method is robust and flexible enough to adapt to various score functions.
arXiv Detail & Related papers (2021-08-22T04:10:35Z)
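For the DisenKGAT entry above, a hypothetical sketch of the micro-disentanglement idea (the class structure, dimensions, and attention form are assumptions): each entity holds K component embeddings, and relation-aware attention selects the mixture relevant to the query relation.

```python
import torch
import torch.nn as nn

class DisentangledEntityEmbedding(nn.Module):
    """Hypothetical sketch: K component embeddings per entity, mixed by
    relation-aware attention so different relations attend to different
    latent factors of the same entity."""
    def __init__(self, num_entities, num_relations, dim, k):
        super().__init__()
        self.components = nn.Embedding(num_entities, k * dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.k, self.dim = k, dim

    def forward(self, ent_idx, rel_idx):
        comp = self.components(ent_idx).view(-1, self.k, self.dim)    # (B, K, D)
        r = self.rel(rel_idx).unsqueeze(-1)                           # (B, D, 1)
        attn = torch.softmax(torch.bmm(comp, r).squeeze(-1), dim=-1)  # (B, K)
        return (attn.unsqueeze(-1) * comp).sum(dim=1)                 # (B, D)
```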