Translated Skip Connections -- Expanding the Receptive Fields of Fully
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2211.02111v1
- Date: Thu, 3 Nov 2022 19:30:40 GMT
- Title: Translated Skip Connections -- Expanding the Receptive Fields of Fully
Convolutional Neural Networks
- Authors: Joshua Bruton and Hairong Wang
- Abstract summary: We propose a neural network module, extending traditional skip connections, called the translated skip connection.
Translated skip connections geometrically increase the receptive field of an architecture with negligible impact on both the size of the parameter space and computational complexity.
- Score: 0.5584060970507506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The effective receptive field of a fully convolutional neural network is an
important consideration when designing an architecture, as it defines the
portion of the input visible to each convolutional kernel. We propose a neural
network module, extending traditional skip connections, called the translated
skip connection. Translated skip connections geometrically increase the
receptive field of an architecture with negligible impact on both the size of
the parameter space and computational complexity. By embedding translated skip
connections into a benchmark architecture, we demonstrate that our module
matches or outperforms four other approaches to expanding the effective
receptive fields of fully convolutional neural networks. We confirm this result
across five contemporary image segmentation datasets from disparate domains,
including the detection of COVID-19 infection, segmentation of aerial imagery,
common object segmentation, and segmentation for self-driving cars.
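The abstract does not spell out how the module is constructed, but one plausible reading is that the skipped feature map is spatially translated by a set of offsets before being fused back in, which is what would widen the receptive field at negligible parameter cost. The PyTorch sketch below illustrates that reading only; the class name, the circular shifts via torch.roll, the particular offsets, and the 1x1 fusion convolution are assumptions for illustration, not the paper's actual module.

```python
# Speculative sketch of a "translated skip connection": the skipped feature map
# is shifted (translated) by several offsets before fusion, so downstream
# kernels see context from farther away. Based only on the abstract; the
# paper's real construction may differ.
import torch
import torch.nn as nn


class TranslatedSkip(nn.Module):
    def __init__(self, channels, offsets=((0, 0), (8, 0), (0, 8), (-8, 0), (0, -8))):
        super().__init__()
        self.offsets = offsets
        # A 1x1 fusion keeps the parameter overhead negligible.
        self.fuse = nn.Conv2d(channels * len(offsets), channels, kernel_size=1)

    def forward(self, skip):
        # Translate the skipped feature map by each offset (circular shift is
        # used here purely for simplicity; zero padding would be another choice).
        shifted = [torch.roll(skip, shifts=off, dims=(2, 3)) for off in self.offsets]
        return self.fuse(torch.cat(shifted, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)   # a typical encoder feature map
    print(TranslatedSkip(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```

In a U-Net-style decoder, a module like this would sit where the plain skipped feature map is normally concatenated; again, this is a sketch under the stated assumptions, not the authors' implementation.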
Related papers
- TransGUNet: Transformer Meets Graph-based Skip Connection for Medical Image Segmentation [1.2186950360560143]
We introduce an attentional cross-scale graph neural network (ACS-GNN) to enhance the skip connection framework.
ACS-GNN converts cross-scale feature maps into a graph structure and captures complex anatomical structures through node attention.
Our framework, TransGUNet, comprises ACS-GNN and EFS-based spatial attention to enhance domain generalizability across various modalities (a rough sketch of the node-attention idea follows this entry).
arXiv Detail & Related papers (2025-02-14T05:54:13Z)
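As a reading aid for the entry above, here is a rough, self-contained sketch of treating pooled cross-scale feature maps as graph nodes and running attention over them. The common pooling grid, the channel width, and the use of nn.MultiheadAttention are illustrative assumptions, not the ACS-GNN design.

```python
# Hedged sketch: flatten multi-scale feature maps into a shared set of "nodes"
# and let every node attend to every other node (node attention).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossScaleNodeAttention(nn.Module):
    def __init__(self, channels=64, grid=8, heads=4):
        super().__init__()
        self.grid = grid
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feature_maps):
        # Pool every scale to a common grid, then flatten each cell into a node.
        nodes = []
        for f in feature_maps:                          # f: (B, C, H_i, W_i)
            g = F.adaptive_avg_pool2d(f, self.grid)     # (B, C, grid, grid)
            nodes.append(g.flatten(2).transpose(1, 2))  # (B, grid*grid, C)
        nodes = torch.cat(nodes, dim=1)                 # all scales as one node set
        out, _ = self.attn(nodes, nodes, nodes)         # node attention
        return out


if __name__ == "__main__":
    maps = [torch.randn(2, 64, s, s) for s in (64, 32, 16)]
    print(CrossScaleNodeAttention()(maps).shape)  # torch.Size([2, 192, 64])
```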
- URoadNet: Dual Sparse Attentive U-Net for Multiscale Road Network Extraction [35.39993205110938]
We introduce a computationally efficient and powerful framework for elegant road-aware segmentation.
Our method, called URoadNet, effectively encodes fine-grained local road connectivity and holistic global topological semantics.
Our approach represents a significant advancement in the field of road network extraction.
arXiv Detail & Related papers (2024-12-23T13:45:29Z)
- Residual Graph Convolutional Network for Bird's-Eye-View Semantic Segmentation [3.8073142980733]
We propose to incorporate a novel Residual Graph Convolutional (RGC) module in deep CNNs.
The RGC module efficiently projects the complete Bird's-Eye-View (BEV) information into graph space.
The RGC network outperforms four state-of-the-art networks and its four variants in terms of IoU and mIoU.
arXiv Detail & Related papers (2023-12-07T05:04:41Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- M-FasterSeg: An Efficient Semantic Segmentation Network Based on Neural Architecture Search [0.0]
This paper proposes an improved semantic segmentation network structure based on deep learning.
First, neural architecture search (NAS) is used to find a semantic segmentation network with multiple resolution branches.
During the search, a self-attention module is combined with the candidate structures to adjust them, and the networks found by the different branches are then merged to form a fast semantic segmentation network.
arXiv Detail & Related papers (2021-12-15T06:46:55Z)
- RSI-Net: Two-Stream Deep Neural Network Integrating GCN and Atrous CNN for Semantic Segmentation of High-resolution Remote Sensing Images [3.468780866037609]
A two-stream deep neural network for semantic segmentation of remote sensing images (RSI-Net) is proposed in this paper.
Experiments are implemented on the Vaihingen, Potsdam and Gaofen RSI datasets.
Results demonstrate the superior performance of RSI-Net in terms of overall accuracy, F1 score and kappa coefficient when compared with six state-of-the-art RSI semantic segmentation methods.
arXiv Detail & Related papers (2021-09-19T15:57:20Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting the network architecture to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability (a hedged sketch of this aggregation follows the entry).
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
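Following the DG-Net entry above, this is a hedged sketch of instance-aware aggregation: a small router predicts one weight per incoming edge from the current sample, so different inputs mix predecessor features differently. The router design and the toy two-input node are assumptions for illustration only, not the paper's architecture.

```python
# Hedged sketch: per-sample edge weights gate how predecessor features are
# aggregated at a node, so the effective connectivity depends on the instance.
import torch
import torch.nn as nn


class DynamicAggregationNode(nn.Module):
    def __init__(self, channels, num_inputs):
        super().__init__()
        # Router: global context -> one weight per incoming edge, per sample.
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_inputs), nn.Sigmoid()
        )
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True)
        )

    def forward(self, inputs):
        # inputs: list of same-shape feature maps from predecessor nodes.
        stacked = torch.stack(inputs, dim=1)           # (B, E, C, H, W)
        w = self.router(sum(inputs))                   # (B, E), instance-dependent
        mixed = (w[:, :, None, None, None] * stacked).sum(dim=1)
        return self.block(mixed)


if __name__ == "__main__":
    a, b = torch.randn(2, 32, 28, 28), torch.randn(2, 32, 28, 28)
    print(DynamicAggregationNode(32, num_inputs=2)([a, b]).shape)
```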
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitude of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks (a small sketch of the edge-gating idea follows this entry).
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
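For the topological-connectivity entry above, the sketch below assigns one learnable logit per directed edge of a small complete DAG and gates each connection with a sigmoid, so connectivity is learned by ordinary gradient descent. The plain convolutional blocks and the sigmoid gating are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: a stage is a complete DAG over its blocks, and a learnable
# scalar on every edge (i -> j, i < j) gates how strongly block i feeds block j.
import torch
import torch.nn as nn


class LearnableConnectivityStage(nn.Module):
    def __init__(self, channels, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(num_blocks)
        ])
        # One learnable logit per directed edge, trained end to end.
        self.edge_logits = nn.Parameter(torch.zeros(num_blocks, num_blocks))

    def forward(self, x):
        outputs = [x]
        for j, block in enumerate(self.blocks):
            # Sigmoid gates over all earlier outputs feeding block j.
            gate = torch.sigmoid(self.edge_logits[j, : len(outputs)])
            agg = sum(g * o for g, o in zip(gate, outputs))
            outputs.append(block(agg))
        return outputs[-1]


if __name__ == "__main__":
    print(LearnableConnectivityStage(16)(torch.randn(1, 16, 32, 32)).shape)
```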
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
- Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread [136.2224792151324]
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.