Optimized Vectorizing of Building Structures with Switch:
High-Efficiency Convolutional Channel-Switch Hybridization Strategy
- URL: http://arxiv.org/abs/2306.15035v2
- Date: Sat, 9 Mar 2024 14:41:50 GMT
- Title: Optimized Vectorizing of Building Structures with Switch:
High-Efficiency Convolutional Channel-Switch Hybridization Strategy
- Authors: Moule Lin, Weipeng Jing, Chao Li and András Jung
- Abstract summary: We propose an advanced and adaptive shift architecture for building planar graph reconstruction.
The SwitchNN architecture incorporates a group-based parameter-sharing mechanism inspired by the convolutional neural network process.
Our results demonstrate the effectiveness of this innovative architecture in building planar graph reconstruction from 2D building images.
- Score: 5.563205385450147
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Building planar graph reconstruction, a.k.a. footprint reconstruction,
which lies in the domain of computer vision and geoinformatics, has long been
hampered by redundant parameters in conventional convolutional models.
Therefore, in this letter, we propose an advanced and adaptive shift
architecture, namely the Switch operator, which avoids exponential parameter
growth while retaining analogous functionality for integrating local spatial
feature information, resembling a high-dimensional convolution operation. The
Switch operator, a cross-channel operation, applies an XOR operation to
alternately exchange adjacent or diagonal features, and then blends the
alternated channels through a 1x1 convolution to consolidate information from
different channels. The SwitchNN architecture, in turn, incorporates a
group-based parameter-sharing mechanism inspired by the convolutional neural
network process, thereby significantly reducing the number of parameters. We
validated our proposed approach through experiments on the SpaceNet corpus, a
publicly available dataset annotated with 2,001 buildings across the cities of
Los Angeles, Las Vegas, and Paris. Our results demonstrate the effectiveness of
this innovative architecture in building planar graph reconstruction from 2D
building images.
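The Switch operator as described above combines an XOR-based channel exchange with a 1x1 convolution for channel mixing. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the exchange rule (channel index XOR 1, with an optional one-pixel diagonal roll) and the function name `switch_operator` are assumptions for illustration, and an even channel count is assumed so the XOR pairing is a clean swap.

```python
import numpy as np

def switch_operator(x, mix, pattern="adjacent"):
    """Hypothetical sketch of the Switch operator.

    x:    feature map of shape (C, H, W), C assumed even
    mix:  (C, C) weight matrix standing in for the 1x1 convolution
    The exchange step pairs channels via XOR on the channel index
    (c ^ 1 swaps neighbours 0<->1, 2<->3, ...); the "diagonal" variant
    additionally rolls the spatial grid by one pixel on both axes.
    """
    c = np.arange(x.shape[0])
    swapped = x[c ^ 1]  # adjacent channel exchange via XOR on the index
    if pattern == "diagonal":
        swapped = np.roll(swapped, shift=(1, 1), axis=(1, 2))
    # A 1x1 convolution is a per-pixel linear mix across channels.
    return np.einsum("oc,chw->ohw", mix, swapped)

# Example: with an identity mixing matrix, the output is just the
# channel-swapped input (channel 0 holds what was channel 1, etc.).
x = np.random.rand(4, 8, 8)
y = switch_operator(x, np.eye(4))
```

A learned `mix` matrix would play the role of the 1x1 convolution weights that consolidate information across the alternated channels.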
Related papers
- GFLAN: Generative Functional Layouts [1.1458853556386797]
GFLAN is a generative framework that restructures floor plan synthesis through explicit factorization into topological planning and geometric realization.
Our approach departs from direct pixel-to-pixel or wall-tracing generation in favor of a principled two-stage decomposition.
arXiv Detail & Related papers (2025-12-18T07:52:47Z)
- BHViT: Binarized Hybrid Vision Transformer [53.38894971164072]
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNN)
We propose BHViT, a binarization-friendly hybrid ViT architecture and its full binarization model with the guidance of three important observations.
Our proposed algorithm achieves SOTA performance among binary ViT methods.
arXiv Detail & Related papers (2025-03-04T08:35:01Z)
- Unifying Dimensions: A Linear Adaptive Approach to Lightweight Image Super-Resolution [6.857919231112562]
Window-based transformers have demonstrated outstanding performance in super-resolution tasks.
However, they exhibit higher computational complexity and inference latency than convolutional neural networks.
We construct a convolution-based Transformer framework named the linear adaptive mixer network (LAMNet)
arXiv Detail & Related papers (2024-09-26T07:24:09Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Vision Transformer with Convolutions Architecture Search [72.70461709267497]
We propose an architecture search method-Vision Transformer with Convolutions Architecture Search (VTCAS)
The high-performance backbone network searched by VTCAS introduces the desirable features of convolutional neural networks into the Transformer architecture.
It enhances the robustness of the neural network for object recognition, especially in the low illumination indoor scene.
arXiv Detail & Related papers (2022-03-20T02:59:51Z)
- Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- X-volution: On the unification of convolution and self-attention [52.80459687846842]
We propose a multi-branch elementary module composed of both convolution and self-attention operation.
The proposed X-volution achieves highly competitive visual understanding improvements.
arXiv Detail & Related papers (2021-06-04T04:32:02Z)
- Operation Embeddings for Neural Architecture Search [15.033712726016255]
We propose the replacement of fixed operator encoding with learnable representations in the optimization process.
Our method produces top-performing architectures that share similar operation and graph patterns.
arXiv Detail & Related papers (2021-05-11T09:17:10Z)
- Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
arXiv Detail & Related papers (2020-08-06T04:38:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.