An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning
- URL: http://arxiv.org/abs/2111.02139v1
- Date: Wed, 3 Nov 2021 11:13:13 GMT
- Title: An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning
- Authors: Yun Li, Zhe Liu, Lina Yao, Xianzhi Wang, Julian McAuley, Xiaojun Chang
- Abstract summary: We propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet).
ERPCNet extracts and aggregates localities based on semantic relevance and visual correlations without human-annotated regions.
It not only discovers global-cooperative localities dynamically but also converges faster for policy gradient optimization.
- Score: 77.72330187258498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-Shot Learning (ZSL) aims to transfer learned knowledge from observed
classes to unseen classes via semantic correlations. A promising strategy is to
learn a global-local representation that incorporates global information with
extra localities (i.e., small parts/regions of inputs). However, existing
methods discover localities based on explicit features without digging into the
inherent properties and relationships among regions. In this work, we propose a
novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet), which
extracts and aggregates localities progressively based on semantic relevance
and visual correlations without human-annotated regions. ERPCNet uses
reinforced partial convolution and entropy guidance; it not only discovers
global-cooperative localities dynamically but also converges faster for policy
gradient optimization. We conduct extensive experiments to demonstrate
ERPCNet's performance through comparisons with state-of-the-art methods under
ZSL and Generalized Zero-Shot Learning (GZSL) settings on four benchmark
datasets. We also show ERPCNet is time efficient and explainable through
visualization analysis.
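The abstract couples policy-gradient optimization with entropy guidance for discovering localities. As a rough illustration of that general idea only, the sketch below shows a generic REINFORCE-style region selector with an entropy bonus; it is not the ERPCNet architecture, and the class name (RegionPolicy), the reward signal, and all hyperparameters are hypothetical stand-ins.

```python
# Minimal sketch of entropy-regularized policy-gradient region selection
# (REINFORCE with an entropy bonus). Illustrative only; not the paper's
# ERPCNet. Names and the reward definition are assumptions.
import torch
import torch.nn as nn

class RegionPolicy(nn.Module):
    """Scores candidate image regions and samples one to attend to."""
    def __init__(self, feat_dim: int, num_regions: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, num_regions)

    def forward(self, global_feat: torch.Tensor):
        logits = self.scorer(global_feat)                # (B, num_regions)
        dist = torch.distributions.Categorical(logits=logits)
        region = dist.sample()                           # sampled region index per example
        return region, dist

def policy_loss(dist, region, reward, entropy_weight=0.01):
    # REINFORCE term: push up the log-probability of regions that earned
    # high reward. The entropy bonus keeps the region distribution from
    # collapsing too early, which is the intuition behind entropy guidance.
    log_prob = dist.log_prob(region)                     # (B,)
    pg_term = -(reward.detach() * log_prob).mean()
    entropy_term = -entropy_weight * dist.entropy().mean()
    return pg_term + entropy_term

# Toy usage: the reward could be, e.g., the drop in classification loss after
# aggregating the selected region's features (a placeholder for semantic relevance).
policy = RegionPolicy(feat_dim=512, num_regions=9)
feats = torch.randn(4, 512)
region, dist = policy(feats)
reward = torch.rand(4)                                   # placeholder reward signal
loss = policy_loss(dist, region, reward)
loss.backward()
```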
Related papers
- Improved Generalization Bounds for Communication Efficient Federated Learning [4.3707341422218215]
This paper focuses on reducing the communication cost of federated learning by exploring generalization bounds and representation learning.
We design a novel Federated Learning with Adaptive Local Steps (FedALS) algorithm based on our generalization bound and representation learning analysis.
arXiv Detail & Related papers (2024-04-17T21:17:48Z)
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Locally Supervised Learning with Periodic Global Guidance [19.41730292017383]
We propose Periodically Guided local Learning (PGL), which periodically reinstates the global objective into the local-loss based training of neural networks.
We show that a simple periodic guidance scheme begets significant performance gains while having a low memory footprint.
arXiv Detail & Related papers (2022-08-01T13:06:26Z)
- PRA-Net: Point Relation-Aware Network for 3D Point Cloud Analysis [56.91758845045371]
We propose a novel framework named Point Relation-Aware Network (PRA-Net).
It is composed of an Intra-region Structure Learning (ISL) module and an Inter-region Relation Learning (IRL) module.
Experiments on several 3D benchmarks covering shape classification, keypoint estimation, and part segmentation have verified the effectiveness and generalization ability of PRA-Net.
arXiv Detail & Related papers (2021-12-09T13:24:43Z)
- Rethink, Revisit, Revise: A Spiral Reinforced Self-Revised Network for Zero-Shot Learning [35.75113836637253]
We propose a form of spiral learning which revisits visual representations based on a sequence of attribute groups.
Spiral learning aims to learn generalized local correlations, enabling models to gradually enhance global learning.
Our framework outperforms state-of-the-art algorithms on four benchmark datasets in both zero-shot and generalized zero-shot settings.
arXiv Detail & Related papers (2021-12-01T10:51:57Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- An Integrated Attribute Guided Dense Attention Model for Fine-Grained Generalized Zero-Shot Learning [7.22073260315824]
Embedding learning (EL) and feature synthesizing (FS) are two popular categories of fine-grained GZSL methods.
We propose to explore global and direct attribute-supervised local visual features for both EL and FS categories.
We demonstrate that our proposed method outperforms contemporary methods on benchmark datasets.
arXiv Detail & Related papers (2020-12-31T21:38:46Z)
- Global Context-Aware Progressive Aggregation Network for Salient Object Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z)