Deep Contrastive Graph Representation via Adaptive Homotopy Learning
- URL: http://arxiv.org/abs/2106.09244v1
- Date: Thu, 17 Jun 2021 04:46:04 GMT
- Title: Deep Contrastive Graph Representation via Adaptive Homotopy Learning
- Authors: Rui Zhang, Chengjun Lu, Ziheng Jiao and Xuelong Li
- Abstract summary: The homotopy model is an excellent tool exploited by diverse research works in the field of machine learning.
We propose a novel adaptive homotopy framework (AH) in which the Maclaurin duality is employed.
AH can be widely utilized to enhance homotopy-based algorithms.
- Score: 76.22904270821778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The homotopy model is an excellent tool exploited by diverse research works in
the field of machine learning. However, its flexibility is limited by a lack of
adaptiveness, i.e., the homotopy coefficients must be fixed or tuned manually.
To address this problem, we propose a novel adaptive homotopy framework (AH) in
which the Maclaurin duality is employed, such that the homotopy parameters can
be obtained adaptively. Accordingly, the proposed AH can be widely utilized to
enhance homotopy-based algorithms. In particular, in this paper, we apply AH to
contrastive learning (AHCL) such that it can be effectively transferred from
weakly supervised learning (with given label priors) to unsupervised learning,
where the soft labels of contrastive learning are learned directly and
adaptively. Accordingly, AHCL has the adaptive ability to extract deep features
without any prior information. Consequently, the affinity matrix formulated from
the resulting adaptive labels can be constructed as a deep Laplacian graph that
incorporates the topology of the deep representations of the inputs. Finally,
extensive experiments on benchmark datasets validate the superiority of our method.
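The abstract does not spell out the Maclaurin-duality derivation, so the following is only a minimal Python sketch of the two ideas it does describe: a homotopy objective whose mixing coefficient is learned rather than hand-tuned, and a Laplacian graph built from an affinity matrix over learned representations. The names (AdaptiveHomotopyLoss, soft_label_laplacian) and the sigmoid reparameterization are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn.functional as F


class AdaptiveHomotopyLoss(torch.nn.Module):
    """Hypothetical sketch: L = (1 - lam) * L_start + lam * L_target, where the
    homotopy coefficient `lam` is learned jointly with the network instead of
    being fixed or tuned by hand. The paper derives the coefficient via the
    Maclaurin duality; here we simply keep it in (0, 1) with a sigmoid."""

    def __init__(self):
        super().__init__()
        self.raw_lam = torch.nn.Parameter(torch.zeros(1))  # unconstrained parameter

    def forward(self, loss_start: torch.Tensor, loss_target: torch.Tensor) -> torch.Tensor:
        lam = torch.sigmoid(self.raw_lam)  # adaptive homotopy coefficient in (0, 1)
        return (1.0 - lam) * loss_start + lam * loss_target


def soft_label_laplacian(embeddings: torch.Tensor) -> torch.Tensor:
    """Build a graph Laplacian from an affinity matrix over deep features
    (a cosine-similarity surrogate for the adaptive soft labels)."""
    z = F.normalize(embeddings, dim=1)
    affinity = torch.clamp(z @ z.t(), min=0.0)  # nonnegative affinity matrix W
    degree = torch.diag(affinity.sum(dim=1))    # degree matrix D
    return degree - affinity                    # unnormalized Laplacian L = D - W


if __name__ == "__main__":
    ah = AdaptiveHomotopyLoss()
    l0 = torch.tensor(1.2)  # e.g. a weakly supervised contrastive term
    l1 = torch.tensor(0.7)  # e.g. an unsupervised contrastive term
    print(ah(l0, l1))
    print(soft_label_laplacian(torch.randn(5, 16)).shape)
```

In the actual AH framework the coefficient is obtained from the Maclaurin duality rather than the sigmoid reparameterization used in this sketch.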
Related papers
- When Heterophily Meets Heterogeneous Graphs: Latent Graphs Guided Unsupervised Representation Learning [6.2167203720326025]
Unsupervised heterogeneous graph representation learning (UHGRL) has gained increasing attention due to its significance in handling practical graphs without labels.
We define semantic heterophily and propose an innovative framework called Latent Graphs Guided Unsupervised Representation Learning (LatGRL) to handle this problem.
arXiv Detail & Related papers (2024-09-01T10:25:06Z)
- Generation is better than Modification: Combating High Class Homophily Variance in Graph Anomaly Detection [51.11833609431406]
In graph anomaly detection, homophily distribution differences between different classes are significantly greater than those in ordinary homophilic and heterophilic graphs.
We introduce a new metric called Class Homophily Variance, which quantitatively describes this phenomenon.
To mitigate its impact, we propose a novel GNN model named Homophily Edge Generation Graph Neural Network (HedGe); a rough sketch of the metric idea follows this entry.
arXiv Detail & Related papers (2024-03-15T14:26:53Z)
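The summary above does not give the exact definition of Class Homophily Variance, so the snippet below is only one plausible reading: compute, for each class, the fraction of its incident edges whose endpoints share that class, then take the variance of those per-class fractions. The function name and toy graph are made up for illustration.

```python
import numpy as np


def class_homophily_variance(edges: np.ndarray, labels: np.ndarray) -> float:
    """Rough, hypothetical reading of a 'class homophily variance' metric:
    per-class homophily = fraction of edges incident to that class whose two
    endpoints share the class; the metric is the variance across classes."""
    per_class = []
    for c in np.unique(labels):
        # edges with at least one endpoint in class c
        mask = (labels[edges[:, 0]] == c) | (labels[edges[:, 1]] == c)
        if mask.sum() == 0:
            per_class.append(0.0)
            continue
        same = (labels[edges[mask, 0]] == labels[edges[mask, 1]]).mean()
        per_class.append(float(same))
    return float(np.var(per_class))


# toy graph: four nodes, two classes, three edges
labels = np.array([0, 0, 1, 1])
edges = np.array([[0, 1], [1, 2], [2, 3]])
print(class_homophily_variance(edges, labels))
```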
- Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models [10.66048003460524]
This work is inspired by the foundation of PAC and is motivated by the existing regression learning issues.
The proposed approach, denoted epsilon-Confidence Approximately Correct (epsilon CoAC), utilizes the Kullback-Leibler divergence (relative entropy).
It enables the learner to compare hypothesis classes of different complexity orders and choose among them the optimum with the minimum epsilon.
arXiv Detail & Related papers (2023-03-28T15:59:12Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL; a generic sketch of such an update follows this entry.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
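FAFED's precise update is not given in the summary above; as a rough illustration of the momentum-based variance-reduction idea it builds on, here is a generic STORM-style estimator on a toy one-dimensional problem. The names and constants are illustrative, this is not FAFED itself, and the federated/cross-silo aspect is omitted.

```python
import numpy as np


def storm_style_update(d_prev, grad_new, grad_old, beta):
    """Generic momentum-based variance-reduced gradient estimator:
    d_t = grad_new + (1 - beta) * (d_prev - grad_old)."""
    return grad_new + (1.0 - beta) * (d_prev - grad_old)


# toy 1-D quadratic f(w) = 0.5 * w^2, so grad(w) = w
w, d = 5.0, 5.0          # parameter and gradient estimator
lr, beta = 0.1, 0.2
for _ in range(50):
    w_new = w - lr * d
    # stochastic gradients of the same sampled noise at the new and old iterates
    noise = 0.01 * np.random.randn()
    d = storm_style_update(d, w_new + noise, w + noise, beta)
    w = w_new
print(round(w, 4))  # should approach 0
```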
- An Adaptive Alternating-direction-method-based Nonnegative Latent Factor Model [2.857044909410376]
An alternating-direction-method-based nonnegative latent factor model can perform efficient representation learning on a high-dimensional and incomplete (HDI) matrix.
This paper proposes an Adaptive Alternating-direction-method-based Nonnegative Latent Factor (A2NLF) model, whose hyper-parameter adaptation is implemented following the principle of particle swarm optimization; a generic PSO-style tuning loop is sketched below.
Empirical studies on nonnegative HDI matrices generated by industrial applications indicate that A2NLF outperforms several state-of-the-art models in terms of computational and storage efficiency, while maintaining highly competitive estimation accuracy for an HDI matrix's missing data.
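The entry does not describe A2NLF's exact swarm scheme, so this is just a generic particle-swarm loop tuning a single scalar hyper-parameter against a black-box validation objective; the objective, bounds, and swarm constants are placeholders.

```python
import numpy as np


def pso_tune(objective, lo, hi, n_particles=8, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Tiny particle swarm optimizer for one scalar hyper-parameter (illustrative only)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, n_particles)          # particle positions
    vel = np.zeros(n_particles)                     # particle velocities
    pbest = pos.copy()                              # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]             # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest


# pretend the validation error of the latent factor model is minimized near 0.01
print(pso_tune(lambda lam: (lam - 0.01) ** 2, lo=1e-4, hi=1.0))
```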
arXiv Detail & Related papers (2022-04-11T03:04:26Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
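As a toy illustration of the constrained formulation described in the entry above (minimize an objective subject to per-labeled-sample performance constraints, handled via Lagrangian duality), here is a hypothetical primal-dual sketch: gradient descent on the Lagrangian in the parameters alternates with projected gradient ascent on the multipliers. The losses, data, and step sizes are made up and do not reproduce the paper's formulation.

```python
import torch

torch.manual_seed(0)
x_lab = torch.randn(10, 3)
y_lab = x_lab @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(10)  # labeled data
x_unl = torch.randn(50, 3)                                              # unlabeled pool

theta = torch.zeros(3, requires_grad=True)
lam = torch.zeros(10)                 # one multiplier per labeled-sample constraint
eps, lr_primal, lr_dual = 0.5, 0.01, 0.05

for _ in range(1000):
    per_sample = (x_lab @ theta - y_lab) ** 2      # constraint losses g_i(theta) <= eps
    unl_loss = (x_unl @ theta).pow(2).mean()       # stand-in objective on unlabeled data
    lagrangian = unl_loss + (lam * (per_sample - eps)).sum()
    grad, = torch.autograd.grad(lagrangian, theta)
    with torch.no_grad():
        theta -= lr_primal * grad                                                  # primal descent
        lam = torch.clamp(lam + lr_dual * (per_sample.detach() - eps), min=0.0)    # dual ascent

print(float(per_sample.detach().max()))  # largest per-sample loss; should hover near or below eps
```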
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- New Benchmarks for Learning on Non-Homophilous Graphs [20.082182515715182]
We present a series of improved graph datasets with node label relationships that do not satisfy the homophily principle.
We also introduce a new measure of the presence or absence of homophily that is better suited than existing measures in different regimes.
arXiv Detail & Related papers (2021-04-03T13:45:06Z)
- Efficient Semantic Image Synthesis via Class-Adaptive Normalization [116.63715955932174]
Class-adaptive normalization (CLADE) is a lightweight but equally effective variant of spatially-adaptive normalization (SPADE) that is adaptive only to the semantic class.
We introduce intra-class positional map encoding calculated from semantic layouts to modulate the normalization parameters of CLADE.
The proposed CLADE can be generalized to different SPADE-based methods while achieving comparable generation quality compared to SPADE.
arXiv Detail & Related papers (2020-12-08T18:59:32Z)
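CLADE's full design (including the intra-class positional map encoding) is not reproduced in the entry above; the sketch below only illustrates the core idea of class-adaptive modulation, i.e., looking up a per-class scale and shift from the semantic layout after normalization. The module name, layer choices, and shapes are assumptions.

```python
import torch
import torch.nn as nn


class ClassAdaptiveNorm(nn.Module):
    """Rough sketch of class-adaptive normalization: normalize features, then
    modulate them with a per-class scale and shift looked up from the semantic
    class map (the real CLADE additionally uses an intra-class positional map)."""

    def __init__(self, num_classes: int, num_channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.gamma = nn.Embedding(num_classes, num_channels)  # per-class scale
        self.beta = nn.Embedding(num_classes, num_channels)   # per-class shift
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features; seg: (B, H, W) integer class map
        h = self.norm(x)
        gamma = self.gamma(seg).permute(0, 3, 1, 2)  # (B, C, H, W)
        beta = self.beta(seg).permute(0, 3, 1, 2)
        return gamma * h + beta


# toy usage
layer = ClassAdaptiveNorm(num_classes=5, num_channels=8)
feats = torch.randn(2, 8, 16, 16)
seg = torch.randint(0, 5, (2, 16, 16))
print(layer(feats, seg).shape)  # torch.Size([2, 8, 16, 16])
```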