Counterfactual Intervention Feature Transfer for Visible-Infrared Person
Re-identification
- URL: http://arxiv.org/abs/2208.00967v1
- Date: Mon, 1 Aug 2022 16:15:31 GMT
- Title: Counterfactual Intervention Feature Transfer for Visible-Infrared Person
Re-identification
- Authors: Xulin Li, Yan Lu, Bin Liu, Yating Liu, Guojun Yin, Qi Chu, Jinyang
Huang, Feng Zhu, Rui Zhao, Nenghai Yu
- Abstract summary: We find that graph-based methods in the visible-infrared person re-identification (VI-ReID) task suffer from poor generalization because of two issues.
Well-trained input features weaken the learning of the graph topology, so the topology does not generalize well during inference.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
- Score: 69.45543438974963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-based models have recently achieved great success in person re-identification tasks: they first compute the graph topology structure (affinities) among different people and then pass information across them to obtain stronger features. However, we find that existing graph-based methods in the visible-infrared person re-identification (VI-ReID) task suffer from poor generalization because of two issues: 1) the train-test modality balance gap, a property of the VI-ReID task: the amounts of data from the two modalities are balanced during training but extremely unbalanced at inference, lowering the generalization of graph-based VI-ReID methods; 2) a sub-optimal topology structure caused by training the graph module end to end. Our analysis shows that well-trained input features weaken the learning of the graph topology, so the topology does not generalize well during inference. In this paper, we propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems. Specifically, Homogeneous and Heterogeneous Feature Transfer (H2FT) is designed to reduce the train-test modality balance gap through two independent types of well-designed graph modules and an unbalanced-scenario simulation. In addition, Counterfactual Relation Intervention (CRI) is proposed to use counterfactual intervention and causal-effect tools to highlight the role of the topology structure throughout training, which makes the graph topology structure more reliable. Extensive experiments on standard VI-ReID benchmarks demonstrate that CIFT outperforms state-of-the-art methods under various settings.
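The abstract only sketches the mechanism in words, so the toy code below illustrates the two ideas it relies on: (a) the generic graph-based re-ID step of computing affinities among person features and passing messages across them, and (b) a counterfactual-style comparison in which the learned topology is replaced by an intervened uniform topology to expose the topology's own contribution. Everything here (function names, tensor shapes, the softmax affinity, the uniform intervention) is an assumption made for illustration; it is not the authors' H2FT or CRI implementation.

```python
# Minimal, illustrative sketch (not the authors' CIFT/H2FT/CRI code): compute
# affinities among person features, propagate messages across them, and probe
# the topology's contribution via an intervened uniform topology.
import torch
import torch.nn.functional as F


def affinity_propagation(feats: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Compute pairwise affinities among N person features and propagate information."""
    feats = F.normalize(feats, dim=1)               # (N, D) unit-norm features
    affinity = feats @ feats.t() / temperature      # (N, N) scaled cosine affinities
    affinity.fill_diagonal_(float("-inf"))          # drop self-loops before normalising
    topology = affinity.softmax(dim=1)              # row-stochastic graph topology
    return topology @ feats                         # message passing -> refined features


def topology_effect(feats: torch.Tensor) -> torch.Tensor:
    """Counterfactual-style probe: refined features under the learned topology minus
    features obtained under an intervened uniform topology, isolating what the
    topology itself contributes beyond the input features."""
    n = feats.size(0)
    refined = affinity_propagation(feats)
    uniform = torch.full((n, n), 1.0 / (n - 1))     # do(topology = uniform)
    uniform.fill_diagonal_(0.0)
    counterfactual = uniform @ F.normalize(feats, dim=1)
    return refined - counterfactual


if __name__ == "__main__":
    person_feats = torch.randn(8, 256)              # 8 toy gallery/query descriptors
    print(topology_effect(person_feats).shape)      # torch.Size([8, 256])
```

In the paper, CRI uses the comparison between the factual and intervened topologies as a training signal so that the topology, not just the backbone features, carries discriminative information; the sketch above only illustrates the comparison itself.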
Related papers
- Self-Supervised Conditional Distribution Learning on Graphs [15.730933577970687]
We present an end-to-end graph representation learning model to align the conditional distributions of weakly and strongly augmented features over the original features.
This alignment effectively reduces the risk of disrupting intrinsic semantic information through graph-structured data augmentation.
arXiv Detail & Related papers (2024-11-20T07:26:36Z)
- Understanding the Effect of GCN Convolutions in Regression Tasks [8.299692647308323]
Graph Convolutional Networks (GCNs) have become a pivotal method in machine learning for modeling functions over graphs.
This paper provides a formal analysis of the impact of convolution operators on regression tasks over homophilic networks.
arXiv Detail & Related papers (2024-10-26T04:19:52Z)
- Adapting to Change: Robust Counterfactual Explanations in Dynamic Data Landscapes [9.943459106509687]
We introduce a novel semi-supervised Graph Counterfactual Explainer (GCE) methodology, Dynamic GRAph Counterfactual Explainer (DyGRACE)
It leverages initial knowledge about the data distribution to search for valid counterfactuals while avoiding using information from potentially outdated decision functions in subsequent time steps.
DyGRACE is quite effective and can act as a drift detector, identifying distributional drift based on differences in reconstruction errors between iterations.
arXiv Detail & Related papers (2023-08-04T14:41:03Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI)
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data (a minimal sketch of the underlying influence-function estimate appears after this list).
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL)
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop, for the first time, a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views guided respectively by node attributes and graph topologies.
In this new approach, we adopt distinct but most suitable attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Dynamic Graph Representation Learning via Graph Transformer Networks [41.570839291138114]
We propose a Transformer-based dynamic graph learning method named Dynamic Graph Transformer (DGT)
DGT has spatial-temporal encoding to effectively learn graph topology and capture implicit links.
We show that DGT presents superior performance compared with several state-of-the-art baselines.
arXiv Detail & Related papers (2021-11-19T21:44:23Z)
- Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z)
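As referenced in the GIF entry above, influence-function-based unlearning estimates how model parameters would shift if some training data were removed, instead of retraining from scratch. The snippet below is a minimal sketch of that classical estimate under stated assumptions (a small dense Hessian, a single removed point's gradient); it is not the GIF paper's efficient graph-aware estimator.

```python
# Minimal sketch of the classical influence-function estimate that graph-unlearning
# methods such as GIF build on: the first-order parameter change from removing a
# training point is approximately H^{-1} times the gradient of the loss on that point.
# The dense Hessian below is a toy stand-in, not GIF's graph-aware implementation.
import numpy as np


def influence_update(hessian: np.ndarray, grad_removed: np.ndarray) -> np.ndarray:
    """Approximate parameter shift from removing a point: solve H x = grad."""
    return np.linalg.solve(hessian, grad_removed)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    a = rng.standard_normal((d, d))
    hessian = a @ a.T + d * np.eye(d)       # well-conditioned positive-definite stand-in
    grad = rng.standard_normal(d)
    print(influence_update(hessian, grad))  # approximate change in the d parameters
```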