Self-Learning with Rectification Strategy for Human Parsing
- URL: http://arxiv.org/abs/2004.08055v1
- Date: Fri, 17 Apr 2020 03:51:30 GMT
- Title: Self-Learning with Rectification Strategy for Human Parsing
- Authors: Tao Li, Zhiyuan Liang, Sanyuan Zhao, Jiahao Gong, Jianbing Shen
- Abstract summary: We propose a trainable graph reasoning method to correct two typical errors in the pseudo-labels.
The reconstructed features have a stronger ability to represent the topology structure of the human body.
Our method outperforms other state-of-the-art methods in supervised human parsing tasks.
- Score: 73.06197841003048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we solve the sample shortage problem in the human parsing
task. We begin with the self-learning strategy, which generates pseudo-labels
for unlabeled data to retrain the model. However, directly using noisy
pseudo-labels will cause error amplification and accumulation. Considering the
topology structure of human body, we propose a trainable graph reasoning method
that establishes internal structural connections between graph nodes to correct
two typical errors in the pseudo-labels, i.e., the global structural error and
the local consistency error. For the global error, we first transform
category-wise features into a high-level graph model with coarse-grained
structural information, and then decouple the high-level graph to reconstruct
the category features. The reconstructed features have a stronger ability to
represent the topology structure of the human body. Enlarging the receptive
field of features can effectively reduce the local error. We first project
feature pixels into a local graph model to capture pixel-wise relations in a
hierarchical graph manner, then reverse the relation information back to the
pixels. With the global structural and local consistency modules, these errors
are rectified and confident pseudo-labels are generated for retraining.
Extensive experiments on the LIP and the ATR datasets demonstrate the
effectiveness of our global and local rectification modules. Our method
outperforms other state-of-the-art methods in supervised human parsing tasks.
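The local rectification step described above (projecting pixel features into a small graph, reasoning over node relations, then mapping the result back to pixels) can be sketched as follows. This is an illustrative sketch, not the authors' code: the soft-assignment scheme, the function names, and the toy dimensions are all assumptions.

```python
import numpy as np

def pixels_to_graph(feats, assign_logits):
    """Soft-assign N pixel features (N, C) to K graph nodes (K, C)."""
    # softmax over the K nodes for each pixel
    a = np.exp(assign_logits - assign_logits.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)        # (N, K) assignment weights
    nodes = a.T @ feats                      # (K, C) aggregated node features
    return nodes, a

def graph_to_pixels(nodes, assign):
    """Reverse projection: distribute reasoned node features back to pixels."""
    return assign @ nodes                    # (N, C)

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))    # 6 "pixels", 4-dim features
logits = rng.normal(size=(6, 3))   # assignment to 3 graph nodes
nodes, assign = pixels_to_graph(feats, logits)
recon = graph_to_pixels(nodes, assign)
print(nodes.shape, recon.shape)    # (3, 4) (6, 4)
```

In the paper, graph reasoning (e.g., relation propagation between nodes) would sit between the two projections; here the nodes are passed straight back to show only the projection round-trip.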
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311]
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$^2$FGC)
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$^2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z) - TopoImb: Toward Topology-level Imbalance in Learning from Graphs [34.25952902469481]
We argue that for graphs, the imbalance is likely to exist at the sub-class topology group level.
To address this problem, we propose a new framework that includes (1) a topology extractor, which automatically identifies the topology group for each instance with explicit memory cells.
We empirically verify its effectiveness with both node-level and graph-level classification as the target tasks.
arXiv Detail & Related papers (2022-12-16T19:37:22Z) - Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z) - Structure-Preserving Graph Representation Learning [43.43429108503634]
We propose a novel Structure-Preserving Graph Representation Learning (SPGRL) method to fully capture the structure information of graphs.
Specifically, to reduce the uncertainty and misinformation of the original graph, we construct a feature graph as a complementary view via k-Nearest Neighbor method.
Our method has quite superior performance on semi-supervised node classification task and excellent robustness under noise perturbation on graph structure or node features.
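The complementary feature graph mentioned above can be built by linking each sample to its k nearest neighbours in feature space. The sketch below is illustrative (not the SPGRL code); the Euclidean metric, symmetrization, and toy data are assumptions.

```python
import numpy as np

def knn_graph(x, k):
    """Return a boolean adjacency (n, n) linking each row of x to its k NN."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-loops
    nn = np.argsort(d, axis=1)[:, :k]        # indices of the k nearest rows
    adj = np.zeros(d.shape, dtype=bool)
    rows = np.repeat(np.arange(len(x)), k)
    adj[rows, nn.ravel()] = True
    return adj | adj.T                       # symmetrize into an undirected graph

# two well-separated clusters: kNN edges stay within each cluster
x = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
adj = knn_graph(x, 1)
print(adj.astype(int))
```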
arXiv Detail & Related papers (2022-09-02T02:49:19Z) - Counterfactual Intervention Feature Transfer for Visible-Infrared Person
Re-identification [69.45543438974963]
We find graph-based methods in the visible-infrared person re-identification task (VI-ReID) suffer from bad generalization because of two issues.
The well-trained input features weaken the learning of graph topology, leaving the model insufficiently generalized during inference.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
arXiv Detail & Related papers (2022-08-01T16:15:31Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Self-supervised Graph-level Representation Learning with Local and
Global Structure [71.45196938842608]
We propose a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning.
Besides preserving the local similarities, GraphLoG introduces the hierarchical prototypes to capture the global semantic clusters.
An efficient online expectation-maximization (EM) algorithm is further developed for learning the model.
arXiv Detail & Related papers (2021-06-08T05:25:38Z) - Sub-graph Contrast for Scalable Self-Supervised Graph Representation
Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory costs.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
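The regional sampling idea behind Subg-Con can be sketched as growing a small subgraph outward from each central node, so training never needs the full graph in memory. This is a hedged illustration under assumed names and a BFS sampling policy; the actual paper samples differently.

```python
from collections import deque

def sample_subgraph(adj, center, max_size):
    """BFS outward from `center`, keeping at most `max_size` nodes."""
    seen, order = {center}, [center]
    q = deque([center])
    while q and len(order) < max_size:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                order.append(v)
                q.append(v)
                if len(order) == max_size:
                    break
    return order

# toy adjacency list: a small chain-like graph
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2, 5], 5: [4]}
sub = sample_subgraph(adj, 0, 4)
print(sub)  # → [0, 1, 2, 3]
```

Each central node and its sampled subgraph then form a positive pair for the contrastive objective.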
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.