Contrastive Multi-graph Learning with Neighbor Hierarchical Sifting for Semi-supervised Text Classification
- URL: http://arxiv.org/abs/2411.16787v1
- Date: Mon, 25 Nov 2024 08:35:55 GMT
- Title: Contrastive Multi-graph Learning with Neighbor Hierarchical Sifting for Semi-supervised Text Classification
- Authors: Wei Ai, Jianbin Li, Ze Wang, Yingying Wei, Tao Meng, Yuntao Shou, Keqin Li
- Abstract summary: We propose a novel method of contrastive multi-graph learning with neighbor hierarchical sifting for semi-supervised text classification.
Specifically, we exploit core features to form a multi-relational text graph, enhancing semantic connections among texts.
Our experiments on the ThuCNews, SogouNews, 20 Newsgroups, and Ohsumed datasets achieved accuracies of 95.86%, 97.52%, 87.43%, and 70.65%, demonstrating competitive results in semi-supervised text classification.
- Score: 16.75801747622402
- Abstract: Graph contrastive learning has been successfully applied in text classification due to its remarkable ability for self-supervised node representation learning. However, explicit graph augmentations may lead to a loss of semantics in the contrastive views. In addition, existing methods tend to overlook edge features and the varying significance of node features during multi-graph learning. Moreover, the contrastive loss suffers from false negatives. To address these limitations, we propose a novel method of contrastive multi-graph learning with neighbor hierarchical sifting for semi-supervised text classification, namely ConNHS. Specifically, we exploit core features to form a multi-relational text graph, enhancing semantic connections among texts. By separating text graphs, we provide diverse views for contrastive learning. Our approach ensures optimal preservation of the graph information, minimizing data loss and distortion. Then, we separately execute relation-aware propagation and cross-graph attention propagation, which effectively leverages the varying correlations between nodes and edge features while harmonizing the information fusion across graphs. Subsequently, we present the neighbor hierarchical sifting loss (NHS) to refine negative selection. First, following the homophily assumption, NHS masks the first-order neighbors of the anchor and the positives so they cannot be selected as negatives. Second, NHS excludes high-order neighbors analogous to the anchor based on their similarities. Consequently, it effectively reduces the occurrence of false negatives, preventing similar samples from being pushed apart in the embedding space. Our experiments on the ThuCNews, SogouNews, 20 Newsgroups, and Ohsumed datasets achieved accuracies of 95.86%, 97.52%, 87.43%, and 70.65%, demonstrating competitive results in semi-supervised text classification.
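The abstract describes NHS only at a high level and no reference implementation appears here; the following PyTorch sketch illustrates the negative-sifting idea on top of a standard InfoNCE backbone. The function name, the `sim_threshold` cutoff, and the choice of cosine similarity are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def nhs_contrastive_loss(z1, z2, first_order, high_order,
                         sim_threshold=0.8, tau=0.5):
    """InfoNCE-style loss with neighbor hierarchical sifting (sketch).

    z1, z2:        (N, d) node embeddings from two contrastive views
    first_order:   (N, N) bool mask of first-order neighbors
    high_order:    (N, N) bool mask of higher-order (e.g. 2-hop) neighbors
    sim_threshold: assumed cutoff for discarding high-order neighbors
                   that are too similar to the anchor (likely false negatives)
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # cross-view similarities
    n = logits.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=logits.device)

    # Positives: the same node in the other view (the diagonal).
    pos = logits.diag()

    # Sifting step 1 (homophily): the anchor's first-order neighbors and
    # the positive itself must not be treated as negatives.
    neg_mask = ~(eye | first_order)

    # Sifting step 2: drop high-order neighbors that look like the anchor.
    too_similar = high_order & ((z1 @ z1.t()) > sim_threshold)
    neg_mask &= ~too_similar

    neg = logits.masked_fill(~neg_mask, float('-inf'))
    denom = torch.logsumexp(torch.cat([pos.unsqueeze(1), neg], dim=1), dim=1)
    return (denom - pos).mean()
```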
Related papers
- Self-Supervised Conditional Distribution Learning on Graphs [15.730933577970687]
We present an end-to-end graph representation learning model to align the conditional distributions of weakly and strongly augmented features over the original features.
This alignment effectively reduces the risk of disrupting intrinsic semantic information through graph-structured data augmentation.
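The summary leaves the alignment objective unspecified; one common way to align predictions under weak and strong augmentations is a KL term in the style of FixMatch. The sketch below illustrates that general pattern only and is not the paper's actual conditional-distribution loss.

```python
import torch.nn.functional as F

def alignment_loss(weak_logits, strong_logits):
    """Pull the distribution under strong augmentation toward the (detached)
    distribution under weak augmentation. A generic sketch; the paper's
    exact objective may differ."""
    target = F.softmax(weak_logits.detach(), dim=1)   # (B, C) soft targets
    return F.kl_div(F.log_softmax(strong_logits, dim=1),  # expects log-probs
                    target, reduction='batchmean')
```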
arXiv Detail & Related papers (2024-11-20T07:26:36Z)
- Bootstrap Latents of Nodes and Neighbors for Graph Self-Supervised Learning [27.278097015083343]
Contrastive learning requires negative samples to prevent model collapse and learn discriminative representations.
We introduce a cross-attention module to predict the supportiveness score of a neighbor with respect to the anchor node.
Our method mitigates class collision from negative and noisy positive samples, concurrently enhancing intra-class compactness.
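The cross-attention scorer is described in a single line here; the sketch below shows one plausible shape for such a module. All layer names and dimensions are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SupportivenessScorer(nn.Module):
    """Scores how much each neighbor supports an anchor node via
    cross-attention. A sketch of the idea only."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # anchor -> query
        self.k = nn.Linear(dim, dim)   # neighbors -> keys

    def forward(self, anchor, neighbors):
        # anchor: (d,), neighbors: (k, d)
        q = self.q(anchor)                         # (d,)
        k = self.k(neighbors)                      # (k, d)
        scores = (k @ q) / q.shape[-1] ** 0.5      # (k,) scaled dot products
        return torch.softmax(scores, dim=0)        # supportiveness weights
```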
arXiv Detail & Related papers (2024-08-09T14:17:52Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning, named BOURNE.
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
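The CLIP-inspired objective mentioned above has a well-known generic form; the sketch below shows it for a batch of paired text and node embeddings. It follows the standard symmetric InfoNCE recipe and is not ConGraT's released code.

```python
import torch
import torch.nn.functional as F

def clip_style_graph_text_loss(text_emb, node_emb, tau=0.07):
    """Batch-wise contrastive objective aligning text and node embeddings
    in a shared latent space (generic CLIP-style sketch)."""
    t = F.normalize(text_emb, dim=1)   # (B, d) from the language model
    g = F.normalize(node_emb, dim=1)   # (B, d) from the GNN
    logits = t @ g.t() / tau           # (B, B); diagonal pairs are positives
    labels = torch.arange(t.size(0), device=t.device)
    # Symmetric cross-entropy over both matching directions, as in CLIP.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```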
arXiv Detail & Related papers (2023-05-23T17:53:30Z)
- Capturing Fine-grained Semantics in Contrastive Graph Representation Learning [23.861016307326146]
Graph contrastive learning defines a contrastive task to pull similar instances close and push dissimilar instances away.
Existing methods of graph contrastive learning ignore the differences among the diverse semantics that exist in graphs.
We propose a novel Fine-grained Semantics enhanced Graph Contrastive Learning (FSGCL) method in this paper.
arXiv Detail & Related papers (2023-04-23T14:05:05Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning, which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Improving Signed Propagation for Graph Neural Networks in Multi-Class Environments [3.4498722449655066]
We introduce two novel strategies for improving signed propagation on multi-class graphs.
The proposed scheme incorporates calibration to secure robustness while reducing uncertainty.
We show the efficacy of our theorem through extensive experiments on six benchmark graph datasets.
arXiv Detail & Related papers (2023-01-21T08:47:22Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show that MatchExplainer outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, based on graph neural networks (GNNs), for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled from unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
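The selection rule is described only loosely here ("sufficiently different from labelled ones"); the snippet below illustrates one distance-based reading of that rule, a core-set-style farthest-first pick in GCN embedding space. It is an assumed reconstruction, not necessarily the paper's exact sampler.

```python
import torch

def select_queries(embeddings, labelled_idx, budget):
    """Pick unlabelled nodes farthest (in GCN embedding space) from the
    labelled set, one at a time, for `budget` rounds (sketch)."""
    selected = list(labelled_idx)
    pool = [i for i in range(embeddings.size(0)) if i not in set(selected)]
    picked = []
    for _ in range(budget):
        # Distance from each pool node to its nearest selected node.
        d = torch.cdist(embeddings[pool], embeddings[selected]).min(dim=1).values
        j = int(d.argmax())                # farthest unlabelled node
        picked.append(pool[j])
        selected.append(pool.pop(j))       # treat it as labelled from now on
    return picked
```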
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
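Embedding propagation has a simple closed form; the sketch below follows the generic label-propagation propagator (I - alpha*S)^{-1} applied to embeddings, with RBF similarities. The hyperparameters and the exact graph construction are assumptions, not the paper's released recipe.

```python
import torch

def embedding_propagation(z, alpha=0.5, sigma=1.0):
    """Smooth a batch of embeddings over their own similarity graph (sketch)."""
    d2 = torch.cdist(z, z) ** 2                   # pairwise squared distances
    w = torch.exp(-d2 / sigma ** 2)               # RBF similarities
    w.fill_diagonal_(0)
    deg = w.sum(dim=1)
    s = w / torch.sqrt(deg[:, None] * deg[None, :]).clamp(min=1e-8)
    n = z.size(0)
    propagator = torch.linalg.inv(torch.eye(n, device=z.device) - alpha * s)
    return propagator @ z                         # smoothed embeddings
```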
arXiv Detail & Related papers (2020-03-09T13:51:09Z)