Hierarchical Knowledge Distillation on Text Graph for Data-limited Attribute Inference
- URL: http://arxiv.org/abs/2401.06802v1
- Date: Wed, 10 Jan 2024 05:50:34 GMT
- Title: Hierarchical Knowledge Distillation on Text Graph for Data-limited Attribute Inference
- Authors: Quan Li, Shixiong Jing, Lingwei Chen
- Abstract summary: We develop a text-graph-based few-shot learning model for attribute inference on social media text data.
Our model first constructs and refines a text graph using manifold learning and message passing.
To further use cross-domain texts and unlabeled texts to improve few-shot performance, a hierarchical knowledge distillation is devised over the text graph.
- Score: 5.618638372635474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The popularization of social media increases user engagement and generates a
large amount of user-oriented data. Among these data, text (e.g., tweets, blogs)
strongly attracts researchers and speculators who infer user attributes
(e.g., age, gender, location) to fulfill their intents. Generally, this
line of work casts attribute inference as a text classification problem and has
begun to leverage graph neural networks (GNNs) to exploit higher-level
representations of source texts. However, these text graphs are constructed
over words, which incurs high memory consumption and performs poorly when only a
few labeled texts are available. To address this challenge, we design a
text-graph-based few-shot learning model for attribute inference on social media
text data. Our model first constructs and refines a text graph using manifold
learning and message passing, which offers a better trade-off between
expressiveness and complexity. Afterwards, to further exploit cross-domain and
unlabeled texts to improve few-shot performance, a hierarchical knowledge
distillation is devised over the text graph, which yields better text
representations and improves the model's generalization ability. Experiments on
social media datasets demonstrate the state-of-the-art performance of our model
on attribute inference with considerably fewer labeled texts.
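For intuition, below is a minimal, hypothetical sketch of the two ingredients the abstract describes: building a text-level graph from text embeddings (standing in for the manifold-learning step) with simple message passing, and a teacher-student distillation loss through which cross-domain or unlabeled texts can guide the few-shot student. This is not the authors' implementation; the neighborhood size k, temperature T, weight alpha, and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def build_knn_graph(embeddings: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Row-normalized adjacency connecting each text to its k nearest
    neighbors under cosine similarity (a simple stand-in for the
    manifold-learning-based graph construction)."""
    normed = F.normalize(embeddings, dim=1)
    sim = normed @ normed.t()                      # (N, N) cosine similarities
    topk = sim.topk(k + 1, dim=1).indices[:, 1:]   # drop self-similarity
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk, 1.0)
    adj = torch.maximum(adj, adj.t())              # symmetrize
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1)


def message_passing(features: torch.Tensor, adj: torch.Tensor, layers: int = 2) -> torch.Tensor:
    """Smooth node features by repeatedly averaging them with neighbor features."""
    for _ in range(layers):
        features = 0.5 * features + 0.5 * (adj @ features)
    return features


def distillation_loss(student_logits, teacher_logits, labels, labeled_mask,
                      T: float = 2.0, alpha: float = 0.5):
    """Supervised loss on the few labeled texts plus a soft-label KL term from
    a teacher (e.g., trained on cross-domain texts) on all texts."""
    ce = F.cross_entropy(student_logits[labeled_mask], labels[labeled_mask])
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd
```

In the paper's hierarchical setting, several such distillation terms would presumably be stacked (e.g., across domains and across labeled/unlabeled splits); a single teacher-student term is shown here only to illustrate the mechanics.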
Related papers
- GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models [33.3678293782131]
This work studies self-supervised graph learning for text-attributed graphs (TAGs).
We aim to improve view generation through language supervision.
This is driven by the prevalence of textual attributes in real applications, which complement graph structures with rich semantic information.
arXiv Detail & Related papers (2024-06-17T17:49:19Z)
- Empower Text-Attributed Graphs Learning with Large Language Models (LLMs) [5.920353954082262]
We propose a plug-and-play approach to empower text-attributed graphs through node generation using Large Language Models (LLMs).
We employ an edge predictor to capture the structural information inherent in the raw dataset and integrate the newly generated samples into the original graph.
Experiments demonstrate the outstanding performance of our proposed paradigm, particularly in low-shot scenarios.
arXiv Detail & Related papers (2023-10-15T16:04:28Z)
- Learning Multiplex Representations on Text-Attributed Graphs with One Language Model Encoder [55.24276913049635]
We propose METAG, a new framework for learning Multiplex rEpresentations on Text-Attributed Graphs.
In contrast to existing methods, METAG uses one text encoder to model the shared knowledge across relations.
We conduct experiments on nine downstream tasks in five graphs from both academic and e-commerce domains.
arXiv Detail & Related papers (2023-10-10T14:59:22Z)
- Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs [4.56877715768796]
We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge.
We also show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
arXiv Detail & Related papers (2023-07-14T12:45:03Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
- ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
arXiv Detail & Related papers (2023-05-23T17:53:30Z)
- Improving Graph-Based Text Representations with Character and Word Level N-grams [30.699644290131044]
We propose a new word-character text graph that combines word and character n-gram nodes together with document nodes.
We also propose two new graph-based neural models, WCTextGCN and WCTextGAT, for modeling our proposed text graph.
arXiv Detail & Related papers (2022-10-12T08:07:54Z)
- Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, based on graph neural networks (GNNs), for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated their promising results on this canonical task.
Despite this success, their performance can be largely jeopardized in practice since they are unable to capture high-order interactions between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.