SoTCKGE: Continual Knowledge Graph Embedding Based on Spatial Offset Transformation
- URL: http://arxiv.org/abs/2503.08189v1
- Date: Tue, 11 Mar 2025 08:54:03 GMT
- Title: SoTCKGE: Continual Knowledge Graph Embedding Based on Spatial Offset Transformation
- Authors: Xinyan Wang, Jinshuo Liu, Cheng Bi, Kaijian Xie, Meng Wang, Juan Deng, Jeff Pan
- Abstract summary: Current Continual Knowledge Graph Embedding (CKGE) methods rely on translation-based embedding methods. We propose a novel CKGE framework grounded in Spatial Offset Transformation vectors. We introduce a hierarchical update strategy and a balanced embedding method to refine the parameter update process.
- Score: 7.706481522285466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current Continual Knowledge Graph Embedding (CKGE) methods primarily rely on translation-based embedding methods, leveraging previously acquired knowledge to initialize new facts. To enhance learning efficiency, these methods often integrate fine-tuning or continual learning strategies. However, this compromises the model's prediction accuracy, and translation-based methods lack support for complex relational structures (multi-hop relations). To tackle this challenge, we propose SoTCKGE, a novel CKGE framework grounded in Spatial Offset Transformation. Within this framework, entity positions are jointly determined by base position vectors and offset vectors. This not only enhances the model's ability to represent complex relational structures but also allows the embeddings of both new and old knowledge to be updated through simple spatial offset transformations, without the need for continual learning methods. Furthermore, we introduce a hierarchical update strategy and a balanced embedding method to refine the parameter update process, effectively minimizing training costs and improving model accuracy. To comprehensively assess the performance of our model, we conducted extensive experiments on four publicly accessible datasets and a new dataset we constructed. Experimental results demonstrate the advantage of our model in enhancing multi-hop relationship learning and further improving prediction accuracy.
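The abstract stops short of giving update equations, but the core decomposition — an entity position formed from a frozen base vector plus a trainable offset vector, so that new facts are absorbed by moving offsets rather than retraining everything — can be sketched. The class below is a minimal illustration under assumed names and an assumed translation-style scorer, not the paper's implementation.

```python
import torch
import torch.nn as nn

class OffsetEntityEmbedding(nn.Module):
    """Illustrative sketch (not SoTCKGE's code): entity position =
    base position vector + offset vector, per the abstract."""

    def __init__(self, num_entities: int, dim: int):
        super().__init__()
        self.base = nn.Embedding(num_entities, dim)    # learned on the initial graph snapshot
        self.offset = nn.Embedding(num_entities, dim)  # moved as new facts arrive
        nn.init.zeros_(self.offset.weight)

    def forward(self, entity_ids: torch.Tensor) -> torch.Tensor:
        # Position is jointly determined by base and offset.
        return self.base(entity_ids) + self.offset(entity_ids)

    def freeze_base(self) -> None:
        # After the initial snapshot, only offsets receive gradients:
        # old knowledge stays in the bases, new knowledge lands in the offsets.
        self.base.weight.requires_grad_(False)

def translation_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Assumed TransE-style scorer for the sketch: higher is more plausible.
    return -(h + r - t).norm(p=1, dim=-1)
```

Under this decomposition, a continual update step reduces to gradient descent on the offset table alone, which is the sense in which the abstract claims no separate continual-learning machinery is required.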
Related papers
- Enhancing knowledge retention for continual learning with domain-specific adapters and features gating [4.637185817866919]
Continual learning empowers models to learn from a continuous stream of data while preserving previously acquired knowledge.
We propose a new approach that integrates adapters within the self-attention mechanisms of Vision Transformers to enhance knowledge retention when sequentially adding datasets from different domains.
arXiv Detail & Related papers (2025-04-11T15:20:08Z)
- TabKAN: Advancing Tabular Data Analysis using Kolmogorov-Arnold Network [11.664880068737084]
This paper introduces TabKAN, a novel framework that advances tabular data modeling using Kolmogorov-Arnold Networks (KANs)
KANs leverage learnable activation functions on edges, enhancing both interpretability and training efficiency.
Through extensive benchmarking on diverse public datasets, TabKAN demonstrates superior performance in supervised learning while significantly outperforming classical and Transformer-based models in transfer learning scenarios.
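For readers unfamiliar with KANs, the edge-activation idea can be shown in a few lines. Real KAN layers (and presumably TabKAN) parameterize each edge with B-splines; the polynomial basis below is a simplification chosen only to keep the sketch short and dependency-free.

```python
import torch
import torch.nn as nn

class KANLayerSketch(nn.Module):
    """Minimal KAN-style layer: one learnable univariate function per edge,
    with node outputs formed by summing incoming edge activations."""

    def __init__(self, in_dim: int, out_dim: int, degree: int = 3):
        super().__init__()
        # coeffs[i, j, k] = k-th polynomial coefficient of the edge (i -> j).
        self.coeffs = nn.Parameter(torch.randn(in_dim, out_dim, degree + 1) * 0.1)
        self.degree = degree

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim). Evaluate each edge's activation on its scalar
        # input, then sum incoming edges per output node (no linear weights).
        powers = torch.stack([x ** k for k in range(self.degree + 1)], dim=-1)
        edge_out = torch.einsum('bik,iok->bio', powers, self.coeffs)
        return edge_out.sum(dim=1)  # (batch, out_dim)

x = torch.randn(8, 4)
layer = KANLayerSketch(4, 2)
print(layer(x).shape)  # torch.Size([8, 2])
```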
arXiv Detail & Related papers (2025-04-09T03:46:10Z)
- In-context learning of evolving data streams with tabular foundational models [42.13420474990124]
This work bridges advancements from both areas, highlighting how transformers' implicit meta-learning abilities, pre-training on drifting natural data, and reliance on context optimization directly address the core challenges of adaptive learning in dynamic environments.
Exploring real-time model adaptation, this research demonstrates that TabPFN, coupled with a simple sliding memory strategy, consistently outperforms ensembles of Hoeffding trees across all non-stationary benchmarks.
arXiv Detail & Related papers (2025-02-24T04:52:35Z)
- Towards Compatible Fine-tuning for Vision-Language Model Updates [114.25776195225494]
Class-conditioned Context Optimization (ContCoOp) integrates learnable prompts with class embeddings using an attention layer before inputting them into the text encoder.
Our experiments over 15 datasets show that our ContCoOp achieves the highest compatibility over the baseline methods, and exhibits robust out-of-distribution generalization.
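A rough sketch of the mechanism described above — learnable prompt tokens conditioned on class embeddings through an attention layer before the text encoder — might look as follows; the shapes, single attention head, and module names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ClassConditionedPrompt(nn.Module):
    """Hypothetical sketch of the ContCoOp idea: learnable prompt tokens
    attend to class embeddings before being fed to the text encoder."""

    def __init__(self, n_prompt_tokens: int = 4, dim: int = 512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, class_embeds: torch.Tensor) -> torch.Tensor:
        # class_embeds: (n_classes, dim); each class conditions its own prompt.
        n_classes = class_embeds.size(0)
        q = self.prompts.unsqueeze(0).expand(n_classes, -1, -1)  # (C, P, d)
        kv = class_embeds.unsqueeze(1)                            # (C, 1, d)
        conditioned, _ = self.attn(q, kv, kv)
        # The conditioned prompts would then be prepended to each class's
        # token sequence and passed through the text encoder.
        return torch.cat([conditioned, kv], dim=1)                # (C, P+1, d)
```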
arXiv Detail & Related papers (2024-12-30T12:06:27Z)
- Enhancing binary classification: A new stacking method via leveraging computational geometry [5.906199156511947]
This paper introduces a novel approach that integrates computational geometry techniques, specifically solving the maximum weighted rectangle problem, to develop a new meta-model for binary classification.
Our method is evaluated on multiple open datasets, with statistical analysis showing its stability and demonstrating improvements in accuracy.
Our method is highly applicable not only in stacking ensemble learning but also in various real-world applications, such as hospital health evaluation scoring and bank credit scoring systems.
arXiv Detail & Related papers (2024-10-30T06:11:08Z)
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain stability in the zero-shot generalization of VLMs; the resulting method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in few-shot image classification scenarios.
arXiv Detail & Related papers (2024-07-11T10:35:53Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Overcoming Generic Knowledge Loss with Selective Parameter Update [48.240683797965005]
We propose a novel approach to continuously update foundation models.
Instead of updating all parameters equally, we localize the updates to a sparse set of parameters relevant to the task being learned.
Our method improves accuracy on newly learned tasks by up to 7% while preserving pretraining knowledge, with a negligible 0.9% decrease in accuracy on a representative control set.
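The summary does not specify how the task-relevant parameter set is localized; a common stand-in is gradient magnitude on the new task's loss, sketched below with hypothetical names.

```python
import torch

def task_relevant_mask(grad: torch.Tensor, keep_frac: float = 0.01) -> torch.Tensor:
    """Keep only the top `keep_frac` fraction of entries by gradient magnitude.
    Gradient magnitude is an assumed localization criterion; the paper's
    actual selection rule is not given in the summary above."""
    k = max(1, int(grad.numel() * keep_frac))
    thresh = grad.abs().flatten().topk(k).values.min()
    return (grad.abs() >= thresh).to(grad.dtype)

# Assumed usage on the new task (masks would typically be computed once per
# task and reused, rather than recomputed every step as shown here):
#
#   loss.backward()
#   for p in model.parameters():
#       if p.grad is not None:
#           p.grad.mul_(task_relevant_mask(p.grad))
#   optimizer.step()
```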
arXiv Detail & Related papers (2023-08-23T22:55:45Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We conduct empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation [39.97128550414934]
We present a novel class-incremental learning approach based on deep neural networks.
It continually learns new tasks with limited memory for storing examples from previous tasks.
Our algorithm is based on knowledge distillation and provides a principled way to maintain the representations of old models.
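The distillation backbone of such approaches is standard; the adaptive feature consolidation is the paper's addition on top. A minimal version of the distillation term, where the frozen old model's soft predictions anchor the new model on old classes:

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits: torch.Tensor,
                      old_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Standard knowledge-distillation term for class-incremental learning:
    pull the new model toward the frozen old model's softened predictions.
    This is only the generic backbone, not the paper's full objective."""
    old_probs = F.softmax(old_logits / T, dim=-1)
    new_log_probs = F.log_softmax(new_logits / T, dim=-1)
    # T*T rescales gradients to match the hard-label loss, as usual in KD.
    return F.kl_div(new_log_probs, old_probs, reduction='batchmean') * T * T
```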
arXiv Detail & Related papers (2022-04-02T16:30:04Z)
- Learning Augmentation Distributions using Transformed Risk Minimization [47.236227685707526]
We propose a new Transformed Risk Minimization (TRM) framework as an extension of classical risk minimization.
As a key application, we focus on learning augmentations to improve classification performance with a given class of predictors.
arXiv Detail & Related papers (2021-11-16T02:07:20Z)
- An EM Framework for Online Incremental Learning of Semantic Segmentation [37.94734474090863]
We propose an incremental learning strategy that can adapt deep segmentation models without catastrophic forgetting, using streaming input data with pixel annotations for the novel classes only.
We validate our approach on the PASCAL VOC 2012 and ADE20K datasets, and the results demonstrate its superior performance over the existing incremental methods.
arXiv Detail & Related papers (2021-08-08T11:30:09Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
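One plausible reading of this mechanism is a pair of losses: an invariance term that pulls features of transformed views together, and an equivariance term that requires features to identify which transformation was applied. The sketch below uses hypothetical module names and loss forms; it is not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def invariance_equivariance_losses(encoder, transform_head, x, transforms):
    """Hypothetical joint objective: `encoder` maps inputs to feature vectors,
    `transform_head` classifies which transformation produced a feature."""
    views = torch.stack([t(x) for t in transforms])       # (T, B, ...)
    feats = torch.stack([encoder(v) for v in views])      # (T, B, d)

    # Invariance: features of all transformed views are pulled together.
    inv_loss = F.mse_loss(feats, feats.mean(dim=0, keepdim=True).expand_as(feats))

    # Equivariance: features must still reveal which transform was applied.
    logits = transform_head(feats.flatten(0, 1))          # (T*B, T)
    labels = torch.arange(len(transforms)).repeat_interleave(x.size(0))
    equi_loss = F.cross_entropy(logits, labels)
    return inv_loss, equi_loss
```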
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Understanding the Mechanics of SPIGOT: Surrogate Gradients for Latent Structure Learning [20.506232306308977]
Latent structure models are a powerful tool for modeling language data.
One challenge with end-to-end training of these models is the argmax operation, which has null gradient.
We explore latent structure learning through the lens of pulling back the downstream learning objective.
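The null-gradient problem mentioned above is commonly worked around with a straight-through surrogate; SPIGOT refines this idea with a projection step onto the feasible set. The sketch below shows only the basic surrogate, not SPIGOT's exact update.

```python
import torch

def straight_through_argmax(scores: torch.Tensor) -> torch.Tensor:
    """Forward pass: hard one-hot argmax. Backward pass: the softmax's
    gradient is passed through in place of the argmax's null gradient."""
    soft = torch.softmax(scores, dim=-1)
    hard = torch.zeros_like(soft).scatter_(-1, soft.argmax(dim=-1, keepdim=True), 1.0)
    # hard is detached from the graph; (soft - soft.detach()) is zero in the
    # forward pass but carries the softmax gradient in the backward pass.
    return hard + (soft - soft.detach())
```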
arXiv Detail & Related papers (2020-10-05T21:56:00Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.