SalFBNet: Learning Pseudo-Saliency Distribution via Feedback
Convolutional Networks
- URL: http://arxiv.org/abs/2112.03731v1
- Date: Tue, 7 Dec 2021 14:39:45 GMT
- Title: SalFBNet: Learning Pseudo-Saliency Distribution via Feedback
Convolutional Networks
- Authors: Guanqun Ding, Nevrez Imamoglu, Ali Caglayan, Masahiro Murakawa,
Ryosuke Nakamura
- Abstract summary: We propose a feedback-recursive convolutional framework (SalFBNet) for saliency detection.
We create a large-scale Pseudo-Saliency dataset to alleviate the problem of data deficiency in saliency detection.
- Score: 8.195696498474579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feed-forward only convolutional neural networks (CNNs) may ignore intrinsic
relationships and potential benefits of feedback connections in vision tasks
such as saliency detection, despite their significant representation
capabilities. In this work, we propose a feedback-recursive convolutional
framework (SalFBNet) for saliency detection. The proposed feedback model can
learn abundant contextual representations by bridging a recursive pathway from
higher-level feature blocks back to low-level layers. Moreover, we create a
large-scale Pseudo-Saliency dataset to alleviate the problem of data deficiency
in saliency detection. We first use the proposed feedback model to learn
saliency distribution from pseudo-ground-truth. Afterwards, we fine-tune the
feedback model on existing eye-fixation datasets. Furthermore, we present a
novel Selective Fixation and Non-Fixation Error (sFNE) loss to help the proposed
feedback model better learn distinguishable eye-fixation-based features.
Extensive experimental results show that our SalFBNet with fewer parameters
achieves competitive results on public saliency detection benchmarks, which
demonstrates the effectiveness of the proposed feedback model and the
Pseudo-Saliency data. Source code and the Pseudo-Saliency dataset are available at
https://github.com/gqding/SalFBNet
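To make the feedback-recursive idea in the abstract concrete, below is a minimal PyTorch-style sketch of a network that bridges a recursive pathway from higher-level feature blocks back to a low-level layer over a few recursion steps. This is not the authors' SalFBNet implementation (see the repository above for that); the module names, channel sizes, and number of recursion steps are illustrative assumptions.

```python
# Minimal sketch of a feedback-recursive saliency model, loosely following the
# abstract: high-level feature blocks feed a recursive pathway back into a
# low-level layer over several recursion steps. NOT the authors' implementation;
# all names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackSaliencyNet(nn.Module):
    def __init__(self, steps: int = 2):
        super().__init__()
        self.steps = steps
        # Low-level layer that receives both the image and the feedback signal.
        self.low = nn.Conv2d(3 + 64, 32, kernel_size=3, padding=1)
        # Two higher-level feature blocks.
        self.block1 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU())
        # Feedback pathway: fuses high-level features and maps them back to the
        # channel size expected at the low-level layer.
        self.feedback = nn.Conv2d(64 + 64, 64, kernel_size=1)
        # Read-out head producing a single-channel saliency map.
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        fb = x.new_zeros(b, 64, h, w)  # feedback starts empty
        for _ in range(self.steps):
            low = F.relu(self.low(torch.cat([x, fb], dim=1)))
            f1 = self.block1(low)      # 1/2 resolution
            f2 = self.block2(f1)       # 1/4 resolution
            # Bridge the recursive pathway: upsample high-level features and
            # fuse them into the next recursion's low-level input.
            f1_up = F.interpolate(f1, size=(h, w), mode="bilinear", align_corners=False)
            f2_up = F.interpolate(f2, size=(h, w), mode="bilinear", align_corners=False)
            fb = self.feedback(torch.cat([f1_up, f2_up], dim=1))
        return torch.sigmoid(self.head(fb))  # saliency map in [0, 1]
```

A forward pass such as FeedbackSaliencyNet()(torch.rand(2, 3, 96, 128)) returns a 2x1x96x128 saliency map; reusing the same blocks at every recursion step is what keeps the parameter count small relative to a purely feed-forward stack.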
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - FFEINR: Flow Feature-Enhanced Implicit Neural Representation for
Spatio-temporal Super-Resolution [4.577685231084759]
This paper proposes a Feature-Enhanced Neural Implicit Representation (FFEINR) for super-resolution of flow field data.
It can take full advantage of the implicit neural representation in terms of model structure and sampling resolution.
The training process of FFEINR is facilitated by introducing feature enhancements for the input layer.
arXiv Detail & Related papers (2023-08-24T02:28:18Z) - FRGNN: Mitigating the Impact of Distribution Shift on Graph Neural
Networks via Test-Time Feature Reconstruction [13.21683198528012]
A distribution shift can adversely affect the test performance of Graph Neural Networks (GNNs).
We propose FR-GNN, a general framework for GNNs to conduct feature reconstruction.
Notably, the reconstructed node features can be directly utilized for testing the well-trained model.
arXiv Detail & Related papers (2023-08-18T02:34:37Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and could serve as a simple yet strong baseline in this under-developed area (a toy sketch of the underlying energy score appears after this list).
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - How effective are Graph Neural Networks in Fraud Detection for Network
Data? [0.0]
Graph-based Neural Networks (GNNs) are recent models created for learning representations of nodes (and graphs).
Financial fraud stands out for its socioeconomic relevance and for presenting particular challenges, such as the extreme imbalance between the positive (fraud) and negative (legitimate transactions) classes.
We conduct experiments to evaluate existing techniques for detecting network fraud, considering the two previous challenges.
arXiv Detail & Related papers (2021-05-30T15:17:13Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Relation-aware Graph Attention Model With Adaptive Self-adversarial
Training [29.240686573485718]
This paper describes an end-to-end solution for the relationship prediction task in heterogeneous, multi-relational graphs.
We particularly address two building blocks in the pipeline, namely heterogeneous graph representation learning and negative sampling.
We introduce a parameter-free negative sampling technique -- adaptive self-adversarial (ASA) negative sampling.
arXiv Detail & Related papers (2021-02-14T16:11:56Z) - On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves distributional shift robustness.
arXiv Detail & Related papers (2020-07-16T18:39:04Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural
Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
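As a concrete illustration for the energy-based OOD detection entry above: the standard energy score is computed directly from a classifier's logits, and out-of-distribution inputs tend to receive higher energy. The snippet below is a minimal, hypothetical sketch of that score applied per node; it is not GNNSafe's full method (which additionally propagates energy over the graph structure), and the random logits, median threshold, and function name are illustrative assumptions.

```python
# Toy sketch of the energy-score idea behind energy-based OOD detection.
# Given per-node classification logits from any GNN, higher energy suggests a
# more likely out-of-distribution node. This omits GNNSafe's energy propagation
# over the graph and uses a made-up logit tensor for illustration.
import torch

def node_energy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """E(x) = -T * logsumexp(logits / T); higher energy = more likely OOD."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# Hypothetical logits for 5 nodes over 3 classes (would normally come from a GNN).
logits = torch.randn(5, 3)
energies = node_energy(logits)
threshold = energies.median()   # placeholder threshold, not the paper's choice
is_ood = energies > threshold   # flag nodes with unusually high energy
print(energies, is_ood)
```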
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.