FreConv: Frequency Branch-and-Integration Convolutional Networks
- URL: http://arxiv.org/abs/2304.04540v1
- Date: Mon, 10 Apr 2023 12:24:14 GMT
- Title: FreConv: Frequency Branch-and-Integration Convolutional Networks
- Authors: Zhaowen Li, Xu Zhao, Peigeng Ding, Zongxin Gao, Yuting Yang, Ming Tang, Jinqiao Wang
- Abstract summary: We propose FreConv (frequency branch-and-integration convolution) to replace the vanilla convolution.
FreConv adopts a dual-branch architecture to extract and integrate high- and low-frequency information.
We show that FreConv-equipped networks consistently outperform state-of-the-art baselines.
- Score: 37.51672240863451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research indicates that utilizing the frequency information of input data can enhance network performance. However, the popular existing convolutional structure is not designed to exploit the frequency information contained in datasets. In this paper, we propose a novel and effective module, named FreConv (frequency branch-and-integration convolution), to replace the vanilla convolution. FreConv adopts a dual-branch architecture to extract and integrate high- and low-frequency information. In the high-frequency branch, a derivative-filter-like architecture extracts the high-frequency information, while a light extractor is employed in the low-frequency branch because low-frequency information is usually redundant. FreConv exploits the frequency information of the input in a more principled way, enhancing feature representation while significantly reducing memory and computational cost. Without any bells and whistles, experimental results on various tasks demonstrate that FreConv-equipped networks consistently outperform state-of-the-art baselines.
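The abstract describes the architecture only in words. As a rough illustration, here is a minimal PyTorch sketch of a dual-branch, branch-and-integration convolution. The fixed Laplacian high-pass, the depthwise low-frequency extractor, the half-resolution pooling, and the concatenation fusion are all assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of a frequency branch-and-integration convolution.
# The real FreConv design (filter choices, channel split, fusion rule)
# is defined in the paper; everything below is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FreConvSketch(nn.Module):
    def __init__(self, in_ch, out_ch, low_ratio=0.5):
        super().__init__()
        low_ch = int(out_ch * low_ratio)
        high_ch = out_ch - low_ch
        # High-frequency branch: a fixed Laplacian (derivative-like) filter
        # isolates high-frequency content before a learnable convolution.
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        self.register_buffer("lap", lap.view(1, 1, 3, 3).repeat(in_ch, 1, 1, 1))
        self.high_conv = nn.Conv2d(in_ch, high_ch, 3, padding=1)
        # Low-frequency branch: a light (depthwise-separable) extractor at
        # half resolution, since low-frequency information is redundant.
        self.low_conv = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),
            nn.Conv2d(in_ch, low_ch, 1),
        )

    def forward(self, x):
        # Derivative-filter-like high-pass, applied channel-wise.
        high = F.conv2d(x, self.lap, padding=1, groups=x.shape[1])
        high = self.high_conv(high)
        # Cheap low-frequency path: downsample -> light conv -> upsample.
        low = F.avg_pool2d(x, 2)
        low = self.low_conv(low)
        low = F.interpolate(low, size=high.shape[-2:], mode="bilinear",
                            align_corners=False)
        # Integration: concatenate the two frequency streams.
        return torch.cat([high, low], dim=1)

x = torch.randn(1, 16, 32, 32)
y = FreConvSketch(16, 32)(x)   # -> torch.Size([1, 32, 32, 32])
```

Keeping the combined output channel count equal to that of a vanilla convolution is what makes a drop-in replacement of nn.Conv2d possible.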
Related papers
- Frequency-Integrated Transformer for Arbitrary-Scale Super-Resolution [8.303267303436613]
Methods based on implicit neural representation have demonstrated remarkable capabilities in arbitrary-scale super-resolution (ASSR) tasks.
We propose a novel network called Frequency-Integrated Transformer (FIT) to incorporate frequency information to enhance ASSR performance.
arXiv Detail & Related papers (2025-04-26T06:12:49Z)
- Inversion-DeepONet: A Novel DeepONet-Based Network with Encoder-Decoder for Full Waveform Inversion [28.406887976413845]
We propose Inversion-DeepONet, a novel deep operator network (DeepONet) architecture for full waveform inversion (FWI).
We utilize a convolutional neural network (CNN) to extract features from the seismic data in the branch net.
We confirm the superior accuracy and generalization ability of our network compared with existing data-driven FWI methods. A structural sketch of the branch/trunk layout appears after this entry.
arXiv Detail & Related papers (2024-08-15T08:15:06Z)
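The summary above names the standard DeepONet branch/trunk split but gives no detail. The sketch below is a generic DeepONet skeleton with a CNN branch net; the layer sizes, the (x, z) query coordinates, and the name DeepONetSketch are illustrative assumptions, and Inversion-DeepONet's actual encoder-decoder design differs.

```python
# Generic DeepONet skeleton with a CNN branch net. Inversion-DeepONet's
# actual encoder-decoder layout differs; this only shows the structure.
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    def __init__(self, p=64):
        super().__init__()
        # Branch net: a small CNN encodes the input function
        # (e.g., seismic shot gathers) into p coefficients.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, p),
        )
        # Trunk net: an MLP encodes query coordinates (x, z) into p basis values.
        self.trunk = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, p),
        )

    def forward(self, u, y):
        # u: (B, 1, H, W) input function samples; y: (B, N, 2) query points.
        b = self.branch(u)            # (B, p)
        t = self.trunk(y)             # (B, N, p)
        # DeepONet output: inner product of branch and trunk features.
        return torch.einsum("bp,bnp->bn", b, t)

u = torch.randn(4, 1, 64, 64)
y = torch.rand(4, 100, 2)
out = DeepONetSketch()(u, y)          # -> torch.Size([4, 100])
```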
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- A High-Frequency Focused Network for Lightweight Single Image Super-Resolution [16.264904771818507]
High-frequency detail is much more difficult to reconstruct than low-frequency information.
Most SISR models allocate equal computational resources to low-frequency and high-frequency information.
We propose a novel High-Frequency Focused Network (HFFN) that selectively enhances high-frequency information.
arXiv Detail & Related papers (2023-03-21T09:41:13Z)
- FreGAN: Exploiting Frequency Components for Training GANs under Limited Data [3.5459430566117893]
Training GANs under limited data often leads to discriminator overfitting and memorization issues.
This paper proposes FreGAN, which raises the model's frequency awareness and draws more attention to producing high-frequency signals.
In addition to exploiting the frequency information of both real and generated images, we also use the frequency signals of real images as a self-supervised constraint; one plausible form of such a constraint is sketched after this entry.
arXiv Detail & Related papers (2022-10-11T14:02:52Z)
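FreGAN's concrete losses are defined in its paper; the snippet below shows one plausible form of a frequency-domain self-supervised constraint of the kind the summary mentions, matching log-magnitude FFT spectra of generated and real images. The function name and loss form are assumptions.

```python
# One plausible frequency-domain constraint: penalize the gap between the
# FFT log-magnitude spectra of generated and real images. FreGAN's actual
# losses differ in detail; this is an illustrative assumption.
import torch

def spectrum_loss(fake, real):
    # 2-D FFT over spatial dims; log-magnitude stabilizes the scale.
    fake_mag = torch.fft.fft2(fake, norm="ortho").abs()
    real_mag = torch.fft.fft2(real, norm="ortho").abs()
    return torch.mean(torch.abs(torch.log1p(fake_mag) - torch.log1p(real_mag)))

fake = torch.randn(8, 3, 64, 64)
real = torch.randn(8, 3, 64, 64)
loss = spectrum_loss(fake, real)   # scalar tensor, added to the GAN objective
```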
- Inception Transformer [151.939077819196]
Inception Transformer, or iFormer, learns comprehensive features with both high- and low-frequency information in visual data.
We benchmark the iFormer on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection and ADE20K segmentation.
arXiv Detail & Related papers (2022-05-25T17:59:54Z)
- Adaptive Frequency Learning in Two-branch Face Forgery Detection [66.91715092251258]
We propose to adaptively learn frequency information in a two-branch detection framework, dubbed AFD.
We liberate our network from fixed frequency transforms and achieve better performance with our data- and task-dependent transform layers.
arXiv Detail & Related papers (2022-03-27T14:25:52Z)
- Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical flow based and end-to-end deep learning based solutions, are error-prone either in detail restoration or ghosting artifacts removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) to conduct HDR fusion in the frequency domain, using the discrete wavelet transform (DWT) to decompose inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details; a minimal DWT band-split example follows this entry.
arXiv Detail & Related papers (2021-08-03T12:26:33Z)
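To make the decomposition concrete, here is a minimal single-level 2-D Haar DWT band split using PyWavelets. The HDR fusion network itself is not reproduced, and the choice of the Haar wavelet is an assumption.

```python
# Minimal band split with a single-level 2-D Haar DWT (via PyWavelets),
# illustrating the decomposition described in the wavelet-HDR summary.
import numpy as np
import pywt

img = np.random.rand(64, 64).astype(np.float32)
# cA: low-frequency approximation (used to suppress ghosting);
# cH/cV/cD: high-frequency detail bands (used to preserve detail).
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
print(cA.shape, cH.shape)   # (32, 32) (32, 32)

# The inverse transform recombines the bands losslessly.
rec = pywt.idwt2((cA, (cH, cV, cD)), "haar")
assert np.allclose(rec, img, atol=1e-6)
```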
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)