Exploring Graph-aware Multi-View Fusion for Rumor Detection on Social Media
- URL: http://arxiv.org/abs/2212.02419v1
- Date: Tue, 8 Nov 2022 13:27:43 GMT
- Title: Exploring Graph-aware Multi-View Fusion for Rumor Detection on Social Media
- Authors: Yang Wu, Jing Yang, Xiaojun Zhou, Liming Wang, Zhen Xu
- Abstract summary: We propose a novel multi-view fusion framework for rumor representation learning and classification.
It encodes the multiple views with Graph Convolutional Networks (GCN) and leverages Convolutional Neural Networks (CNN) to fuse the consistent and complementary information among them.
Experimental results on two public datasets demonstrate that our method outperforms state-of-the-art approaches.
- Score: 23.231289922442414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatically detecting rumors on social media has become a challenging task.
Previous studies focus on learning indicative clues from conversation threads
for identifying rumorous information. However, these methods model rumorous
conversation threads from various views yet fail to fuse the multi-view
features effectively. In this paper, we propose a novel multi-view fusion
framework for rumor representation learning and classification. It encodes the
multiple views with Graph Convolutional Networks (GCN) and leverages
Convolutional Neural Networks (CNN) to capture the consistent and complementary
information among all views and fuse them. Experimental results on two
public datasets demonstrate that our method outperforms state-of-the-art
approaches.
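To make the described pipeline concrete, here is a minimal PyTorch sketch of the general idea: each view of a conversation graph is encoded by its own GCN layer, and a small 1-D CNN slides across the stacked view embeddings to fuse them. The two views, all layer sizes, the mean-pooling, and the toy identity adjacency are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        # a_hat: normalized adjacency with self-loops, shape (n, n)
        return F.relu(a_hat @ self.lin(h))

class MultiViewGCNFusion(nn.Module):
    """Encode each view of a conversation graph with its own GCN,
    then fuse the per-view graph embeddings with a small CNN."""
    def __init__(self, in_dim, hid_dim, num_views, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(
            [GCNLayer(in_dim, hid_dim) for _ in range(num_views)]
        )
        # A 1-D convolution over the view axis picks up consistent /
        # complementary signals across views.
        self.fuse = nn.Conv1d(num_views, num_views, kernel_size=3, padding=1)
        self.cls = nn.Linear(num_views * hid_dim, num_classes)

    def forward(self, x, adjs):
        # x: node features (n, in_dim); adjs: one normalized adjacency per view
        views = [enc(x, a).mean(dim=0) for enc, a in zip(self.encoders, adjs)]
        stacked = torch.stack(views).unsqueeze(0)   # (1, num_views, hid_dim)
        fused = F.relu(self.fuse(stacked)).flatten(1)
        return self.cls(fused)                      # (1, num_classes)

# Toy usage: 5 posts, 16-dim features, two views (e.g. a propagation
# graph and its reverse), binary rumor / non-rumor output.
n, d = 5, 16
x = torch.randn(n, d)
a = torch.eye(n)  # stand-in for a normalized adjacency
model = MultiViewGCNFusion(d, 32, num_views=2, num_classes=2)
print(model(x, [a, a.T]).shape)  # torch.Size([1, 2])
```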
Related papers
- Adaptive Confidence Multi-View Hashing for Multimedia Retrieval [23.018331993442285]
The multi-view hash method converts heterogeneous data from multiple views into binary hash codes.
To conduct confidence learning and eliminate unnecessary noise, we propose a novel Adaptive Confidence Multi-View Hashing (ACMVH) method.
arXiv Detail & Related papers (2023-12-12T14:43:09Z)
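As a rough illustration of what confidence-weighted multi-view hashing can look like, the sketch below learns a scalar confidence per view, normalizes the confidences into fusion weights, and relaxes the binary codes with tanh. The encoder shapes, the sigmoid gate, and the 16-bit code length are assumptions; the actual ACMVH architecture is more involved.

```python
import torch
import torch.nn as nn

class ConfidenceMultiViewHash(nn.Module):
    """Fuse several views with learned per-view confidences, then map
    the fused feature to (relaxed) binary hash codes via tanh."""
    def __init__(self, view_dims, code_bits):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(d, code_bits) for d in view_dims]
        )
        # One scalar confidence per view, predicted from its embedding.
        self.confidence = nn.ModuleList(
            [nn.Linear(code_bits, 1) for _ in view_dims]
        )

    def forward(self, views):
        embs = [enc(v) for enc, v in zip(self.encoders, views)]  # (b, bits) each
        scores = torch.cat(
            [torch.sigmoid(c(e)) for c, e in zip(self.confidence, embs)], dim=1
        )                                                         # (b, n_views)
        weights = scores / scores.sum(dim=1, keepdim=True)        # normalize
        fused = sum(w.unsqueeze(1) * e for w, e in zip(weights.T, embs))
        return torch.tanh(fused)    # sign() of this gives the binary code

# Toy usage: image view (128-d) and text view (64-d) -> 16-bit codes.
model = ConfidenceMultiViewHash([128, 64], code_bits=16)
img, txt = torch.randn(8, 128), torch.randn(8, 64)
codes = model([img, txt])
print(torch.sign(codes).shape)  # torch.Size([8, 16])
```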
- Inconsistent Matters: A Knowledge-guided Dual-consistency Network for Multi-modal Rumor Detection [53.48346699224921]
A novel Knowledge-guided Dual-consistency Network is proposed to detect rumors with multimedia contents.
It uses two consistency detection networks to capture inconsistency at the cross-modal level and the content-knowledge level simultaneously.
It also enables robust multi-modal representation learning under different missing visual modality conditions.
arXiv Detail & Related papers (2023-06-03T15:32:20Z)
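The cross-modal half of the dual-consistency idea can be sketched as a cosine score between text and image embeddings projected into a shared space; a low score signals the kind of inconsistency the detector exploits. The content-knowledge network is omitted here, and every dimension is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalConsistency(nn.Module):
    """Score how well a text embedding agrees with an image embedding;
    low consistency serves as an inconsistency cue for rumor detection."""
    def __init__(self, text_dim, img_dim, shared_dim):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.img_proj = nn.Linear(img_dim, shared_dim)

    def forward(self, text, image):
        t = F.normalize(self.text_proj(text), dim=-1)
        v = F.normalize(self.img_proj(image), dim=-1)
        return (t * v).sum(dim=-1)  # cosine similarity in [-1, 1]

# Toy usage: 300-d text and 512-d image features, batch of 4.
scorer = CrossModalConsistency(text_dim=300, img_dim=512, shared_dim=128)
sim = scorer(torch.randn(4, 300), torch.randn(4, 512))
print(sim.shape)  # torch.Size([4])
```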
- Multimodal Short Video Rumor Detection System Based on Contrastive Learning [3.4192832062683842]
Short video platforms in China have gradually evolved into fertile grounds for the proliferation of fake news.
Distinguishing short video rumors poses a significant challenge due to the substantial amount of information and shared features.
Our research group proposes a methodology encompassing multimodal feature fusion and the integration of external knowledge.
arXiv Detail & Related papers (2023-04-17T16:07:00Z)
- Semantic Prompt for Few-Shot Image Recognition [76.68959583129335]
We propose a novel Semantic Prompt (SP) approach for few-shot learning.
The proposed approach achieves promising results, improving the 1-shot learning accuracy by 3.67% on average.
arXiv Detail & Related papers (2023-03-24T16:32:19Z)
- Learnable Graph Convolutional Network and Feature Fusion for Multi-view Learning [30.74535386745822]
This paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF).
It consists of two stages: a feature fusion network and a learnable graph convolutional network.
The proposed LGCN-FF is validated to be superior to various state-of-the-art methods in multi-view semi-supervised classification.
arXiv Detail & Related papers (2022-11-16T19:07:12Z)
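A minimal sketch of the two-stage idea, under simplifying assumptions: stage one fuses the concatenated view features with a small MLP, and stage two applies a graph convolution whose adjacency matrix is itself a learnable parameter. The softmax normalization and all sizes are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LearnableGCNWithFusion(nn.Module):
    """Two stages in the spirit of the LGCN-FF summary: an MLP fuses the
    concatenated view features, then a GCN with a learnable adjacency
    refines node representations for classification."""
    def __init__(self, view_dims, hid_dim, num_nodes, num_classes):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(sum(view_dims), hid_dim), nn.ReLU()
        )
        # Adjacency entries are free parameters, learned end to end.
        self.adj = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.gcn = nn.Linear(hid_dim, num_classes)

    def forward(self, views):
        h = self.fusion(torch.cat(views, dim=-1))   # stage 1: fuse views
        a = torch.softmax(self.adj, dim=-1)         # row-normalized graph
        return self.gcn(a @ h)                      # stage 2: graph conv

# Toy usage: 10 nodes with two views (20-d and 30-d), 3 classes.
model = LearnableGCNWithFusion([20, 30], hid_dim=16, num_nodes=10, num_classes=3)
out = model([torch.randn(10, 20), torch.randn(10, 30)])
print(out.shape)  # torch.Size([10, 3])
```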
- Multimodal Fake News Detection with Adaptive Unimodal Representation Aggregation [28.564442206829625]
AURA is a multimodal fake news detection network with adaptive unimodal representation aggregation.
We perform coarse-level fake news detection and cross-modal consistency learning according to the unimodal and multimodal representations.
Experiments on Weibo and Gossipcop show that AURA outperforms several state-of-the-art FND schemes.
arXiv Detail & Related papers (2022-06-12T14:06:55Z)
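Adaptive unimodal aggregation can be sketched as a softmax gate over the modality vectors, so the fused representation leans on whichever modality looks most reliable for a given sample. The single shared dimension and the linear gate are assumptions; AURA's actual aggregation and consistency losses are richer.

```python
import torch
import torch.nn as nn

class AdaptiveAggregation(nn.Module):
    """Adaptively weight unimodal representations before aggregating
    them into a single multimodal vector for fake-news classification."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.gate = nn.Linear(dim, 1)          # scores each modality vector
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, modalities):
        stacked = torch.stack(modalities, dim=1)             # (b, m, dim)
        weights = torch.softmax(self.gate(stacked), dim=1)   # (b, m, 1)
        fused = (weights * stacked).sum(dim=1)               # weighted aggregation
        return self.cls(fused)

# Toy usage: 64-d text and image representations, 2 classes.
agg = AdaptiveAggregation(dim=64, num_classes=2)
print(agg([torch.randn(8, 64), torch.randn(8, 64)]).shape)  # torch.Size([8, 2])
```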
- Multi-Modal Few-Shot Object Detection with Meta-Learning-Based Cross-Modal Prompting [77.69172089359606]
We study multi-modal few-shot object detection (FSOD) in this paper, using both few-shot visual examples and class semantic information for detection.
Our approach is motivated by the high-level conceptual similarity of (metric-based) meta-learning and prompt-based learning.
We comprehensively evaluate the proposed multi-modal FSOD models on multiple few-shot object detection benchmarks, achieving promising results.
arXiv Detail & Related papers (2022-04-16T16:45:06Z)
- Audio-Visual Fusion Layers for Event Type Aware Video Recognition [86.22811405685681]
We propose a new model to address the multisensory integration problem with individual event-specific layers in a multi-task learning scheme.
We show that, although our network is formulated with single labels, it can output additional true multi-labels to represent the given videos.
arXiv Detail & Related papers (2022-02-12T02:56:22Z)
- Deep Contrastive Learning for Multi-View Network Embedding [20.035449838566503]
Multi-view network embedding aims at projecting nodes in the network to low-dimensional vectors.
Most contrastive learning-based methods rely on high-quality graph embeddings.
We design a novel node-to-node Contrastive learning framework for Multi-view network Embedding (CREME).
arXiv Detail & Related papers (2021-08-16T06:29:18Z)
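Node-to-node contrastive learning across views typically reduces to an InfoNCE objective in which the same node's embeddings from two views form the positive pair and other nodes act as negatives; the short sketch below shows that pattern. The temperature and normalization are conventional choices, not necessarily CREME's.

```python
import torch
import torch.nn.functional as F

def node_contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss: the same node seen from two views forms a
    positive pair; all other nodes in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature       # (n, n) similarity matrix
    targets = torch.arange(z1.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of 32 nodes from two views of the same network.
loss = node_contrastive_loss(torch.randn(32, 64), torch.randn(32, 64))
print(loss.item())
```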
- Encoder Fusion Network with Co-Attention Embedding for Referring Image Segmentation [87.01669173673288]
We propose an encoder fusion network (EFN), which transforms the visual encoder into a multi-modal feature learning network.
A co-attention mechanism is embedded in the EFN to realize the parallel update of multi-modal features.
The experiment results on four benchmark datasets demonstrate that the proposed approach achieves the state-of-the-art performance without any post-processing.
arXiv Detail & Related papers (2021-05-05T02:27:25Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
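A hedged sketch of the dual-objective recipe: optimize the input pixels directly (no generator network) to raise a chosen layer's activation while an L2 distance term keeps the image near a reference. The tiny stand-in CNN, the loss weights, and the step count are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Tiny stand-in CNN; any pretrained model's intermediate layer would do.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())

reference = torch.rand(1, 3, 32, 32)        # image whose features we visualize
x = reference.clone().requires_grad_(True)  # optimized directly, no generator
opt = torch.optim.Adam([x], lr=0.05)

for step in range(100):
    opt.zero_grad()
    activation = net(x).mean()                    # activation objective
    distance = (x - reference).pow(2).mean()      # stay near the reference
    loss = -activation + 0.1 * distance           # dual objective
    loss.backward()
    opt.step()

print(float(net(x).mean()), float(net(reference).mean()))
```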
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.