Knowledge Graph Embedding with Atrous Convolution and Residual Learning
- URL: http://arxiv.org/abs/2010.12121v2
- Date: Fri, 30 Oct 2020 06:07:38 GMT
- Title: Knowledge Graph Embedding with Atrous Convolution and Residual Learning
- Authors: Feiliang Ren, Juchen Li, Huihui Zhang, Shilei Liu, Bochao Li, Ruicheng Ming, Yujia Bai
- Abstract summary: We propose a simple but effective atrous-convolution-based knowledge graph embedding method.
It effectively increases feature interactions by using atrous convolutions.
It uses residual learning to address the original-information forgetting issue and the vanishing/exploding gradient issue.
- Score: 4.582412257655891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph embedding is an important task that benefits many downstream applications. Currently, methods based on deep neural networks achieve state-of-the-art performance, but most of them are complex and require substantial time for training and inference. To address this issue, we propose a simple but effective atrous-convolution-based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has the following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, it uses residual learning to address the original-information forgetting issue and the vanishing/exploding gradient issue. Third, it has a simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective: on these diverse datasets, it achieves better results than the compared state-of-the-art methods on most evaluation metrics. The source code of our model can be found at https://github.com/neukg/AcrE.
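As a concrete illustration of the two ingredients the abstract names, below is a minimal PyTorch sketch of a ConvE-style scorer that stacks atrous (dilated) convolutions over reshaped head/relation embeddings and adds a residual path. Layer sizes, dilation rates, and the residual placement are illustrative assumptions, not the published AcrE configuration; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn

class AtrousResidualScorer(nn.Module):
    """Sketch of a ConvE-style knowledge graph embedding scorer with
    atrous convolutions and a residual connection. Hyperparameters are
    illustrative, not the published AcrE configuration."""

    def __init__(self, num_entities, num_relations, dim=200, h=10, w=20):
        super().__init__()
        assert dim == h * w
        self.h, self.w = h, w
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        # Stacked atrous convolutions: growing dilation rates enlarge the
        # receptive field, increasing feature interactions between the
        # head and relation embeddings at no extra parameter cost.
        self.convs = nn.ModuleList([
            nn.Conv2d(1 if i == 0 else 32, 32, 3, padding=d, dilation=d)
            for i, d in enumerate([1, 2, 3])
        ])
        self.skip = nn.Conv2d(1, 32, 1)  # 1x1 conv to match channels on the skip path
        self.fc = nn.Linear(32 * 2 * h * w, dim)

    def forward(self, head_idx, rel_idx):
        # Reshape the concatenated head/relation embeddings into a 2D "image".
        e = self.ent(head_idx).view(-1, 1, self.h, self.w)
        r = self.rel(rel_idx).view(-1, 1, self.h, self.w)
        x = torch.cat([e, r], dim=2)          # (B, 1, 2h, w)
        out = x
        for conv in self.convs:
            out = torch.relu(conv(out))
        # Residual connection: the original information bypasses the conv
        # stack, easing vanishing/exploding gradients.
        out = out + self.skip(x)
        out = self.fc(out.flatten(1))         # project back to embedding space
        return out @ self.ent.weight.t()      # scores against all tail entities
```

In a standard 1-N training setup, such a scorer would typically be trained with binary cross-entropy over all candidate tail entities, as in ConvE; whether AcrE uses exactly this objective is not stated in the abstract.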
Related papers
- $V_kD$: Improving Knowledge Distillation using Orthogonal Projections [36.27954884906034] (2024-03-10)
Knowledge distillation is an effective method for training small and efficient deep learning models.
However, the efficacy of a single method can degrade when transferred to other tasks, modalities, or architectures.
We propose a novel constrained feature distillation method to address this limitation.
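The summary names the mechanism (an orthogonal projection constraining feature distillation), which can be sketched roughly as follows. The use of PyTorch's built-in orthogonal parametrization (PyTorch >= 1.9) and the MSE matching loss are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class OrthogonalProjector(nn.Module):
    """Sketch: project student features to the teacher's dimension through
    a weight constrained to be (semi-)orthogonal. Illustrative, not the
    paper's exact formulation."""

    def __init__(self, d_student, d_teacher):
        super().__init__()
        self.proj = nn.Linear(d_student, d_teacher, bias=False)
        # Parametrize the weight so it stays orthogonal during training.
        nn.utils.parametrizations.orthogonal(self.proj)

    def forward(self, f_student):
        return self.proj(f_student)

def distill_loss(projector, f_student, f_teacher):
    # Match projected student features to detached teacher features.
    return nn.functional.mse_loss(projector(f_student), f_teacher.detach())
```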
arXiv Detail & Related papers (2024-03-10T13:26:24Z) - MoBYv2AL: Self-supervised Active Learning for Image Classification [57.4372176671293]
We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
arXiv Detail & Related papers (2023-01-04T10:52:02Z) - Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline that adds no computational overhead.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
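The ALR idea as summarized, making predicted novel-class weight vectors consistent in length with the pretrained base-class weights, can be sketched in a few lines; matching the mean base-weight norm is an assumed rule, not necessarily the paper's exact one.

```python
import torch

def adaptive_length_rescale(novel_w: torch.Tensor, base_w: torch.Tensor) -> torch.Tensor:
    """Sketch: rescale predicted novel-class classifier weights (K, D) so
    their vector lengths match the average length of the pretrained
    base-class weights (C, D). Illustrative rule only."""
    target_norm = base_w.norm(dim=1).mean()                         # average base length
    direction = novel_w / novel_w.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return direction * target_norm                                  # keep direction, fix length
```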
arXiv Detail & Related papers (2022-03-23T06:24:31Z) - On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss-function method in which the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z) - Finding Significant Features for Few-Shot Learning using Dimensionality
Reduction [0.0]
The proposed module improves accuracy by giving the similarity function from the metric learning method more discriminative features for classification.
Our method outperforms the metric learning baselines on the miniImageNet dataset by around 2% in accuracy.
arXiv Detail & Related papers (2021-07-06T16:36:57Z) - Graph Convolution for Re-ranking in Person Re-identification [40.9727538382413]
We propose a graph-based re-ranking method to improve learned features while still keeping Euclidean distance as the similarity metric.
A simple yet effective method is proposed to generate a profile vector for each tracklet in videos, which helps extend our method to video re-ID.
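A rough sketch of re-ranking in this spirit: smooth the features over a k-nearest-neighbour graph, then rank by plain Euclidean distance, the property the summary emphasizes. The single averaging step and its weighting are assumptions, not the paper's exact propagation scheme.

```python
import torch

def knn_graph_rerank(query: torch.Tensor, gallery: torch.Tensor, k: int = 10, alpha: float = 0.5):
    """Sketch: one step of feature propagation over a kNN graph, then
    ranking by Euclidean distance. Illustrative, not the paper's method."""
    feats = torch.cat([query, gallery], dim=0)            # (Q+G, D)
    dist = torch.cdist(feats, feats)                      # pairwise Euclidean distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]  # k neighbours, self excluded
    adj = torch.zeros_like(dist)
    adj.scatter_(1, knn, 1.0)                             # kNN adjacency
    adj = adj / adj.sum(dim=1, keepdim=True)              # row-normalize
    feats = alpha * feats + (1 - alpha) * adj @ feats     # propagate once
    q, g = feats[: len(query)], feats[len(query):]
    return torch.cdist(q, g).argsort(dim=1)               # re-ranked gallery indices
```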
arXiv Detail & Related papers (2021-07-05T18:40:43Z) - Revisiting Point Cloud Shape Classification with a Simple and Effective
Baseline [111.3236030935478]
We find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions make a large difference in performance.
A projection-based method, which we refer to as SimpleView, performs surprisingly well.
It achieves on par or better results than sophisticated state-of-the-art methods on ModelNet40 while being half the size of PointNet++.
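The projection-based idea can be illustrated with a toy depth renderer that splats a unit-normalized point cloud into a depth image; SimpleView renders several orthogonal views and feeds each to a CNN. The resolution and the single +z view below are illustrative (scatter_reduce_ needs PyTorch >= 1.12).

```python
import torch

def project_depth(points: torch.Tensor, res: int = 32) -> torch.Tensor:
    """Toy sketch: splat a unit-normalized point cloud (N, 3) into a
    res x res depth image along the +z axis, keeping the maximum z per
    pixel. Real pipelines render multiple orthogonal views."""
    xy = ((points[:, :2] + 1) / 2 * (res - 1)).long().clamp(0, res - 1)
    z = (points[:, 2] + 1) / 2                         # depth values in [0, 1]
    flat = xy[:, 1] * res + xy[:, 0]                   # flattened pixel index
    depth = torch.zeros(res * res)
    depth.scatter_reduce_(0, flat, z, reduce="amax")   # max depth per pixel
    return depth.view(res, res)
```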
arXiv Detail & Related papers (2021-06-09T18:01:11Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep
Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - PK-GCN: Prior Knowledge Assisted Image Classification using Graph
Convolution Networks [3.4129083593356433]
Similarity between classes can influence classification performance.
We propose a method that incorporates class-similarity knowledge into convolutional neural network models.
Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
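As a loose illustration of injecting class-similarity priors, the sketch below mixes a classifier's logits with logits propagated through a row-normalized class-similarity matrix. This is a toy stand-in for the idea, not PK-GCN's actual graph-convolution architecture.

```python
import torch

def similarity_smoothed_logits(logits: torch.Tensor, class_sim: torch.Tensor, lam: float = 0.2):
    """Toy sketch: blend raw logits (B, C) with logits propagated over a
    class-similarity matrix (C, C). Illustrative prior injection only."""
    sim = class_sim / class_sim.sum(dim=1, keepdim=True)  # row-normalized similarity
    return (1 - lam) * logits + lam * logits @ sim.t()    # one propagation step
```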
arXiv Detail & Related papers (2020-09-24T18:31:35Z) - Rethinking Few-Shot Image Classification: a Good Embedding Is All You
Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
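The baseline as summarized fits in a few lines: embed images with a frozen network trained on the meta-training set, then fit a linear classifier on each episode's support set. The choice of scikit-learn logistic regression as the linear model is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def few_shot_episode(embed_fn, support_x, support_y, query_x):
    """Baseline sketch: frozen pretrained embedding plus a per-episode
    linear classifier. `embed_fn` maps one image to a 1-D feature vector
    and stands for the representation learned on the meta-training set."""
    zs = np.stack([embed_fn(x) for x in support_x])    # embed support images
    zq = np.stack([embed_fn(x) for x in query_x])      # embed query images
    clf = LogisticRegression(max_iter=1000).fit(zs, support_y)
    return clf.predict(zq)                             # predicted labels
```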
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.