Deep Frequency Filtering for Domain Generalization
- URL: http://arxiv.org/abs/2203.12198v2
- Date: Sat, 25 Mar 2023 06:21:44 GMT
- Title: Deep Frequency Filtering for Domain Generalization
- Authors: Shiqi Lin, Zhizheng Zhang, Zhipeng Huang, Yan Lu, Cuiling Lan, Peng
Chu, Quanzeng You, Jiang Wang, Zicheng Liu, Amey Parulkar, Viraj Navkal,
Zhibo Chen
- Abstract summary: Deep Neural Networks (DNNs) have preferences for some frequency components in the learning process.
We propose Deep Frequency Filtering (DFF) for learning domain-generalizable features.
We show that applying our proposed DFF on a plain baseline outperforms the state-of-the-art methods on different domain generalization tasks.
- Score: 55.66498461438285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Improving the generalization ability of Deep Neural Networks (DNNs) is
critical for their practical uses, which has been a longstanding challenge.
Some theoretical studies have uncovered that DNNs have preferences for some
frequency components in the learning process and indicated that this may affect
the robustness of learned features. In this paper, we propose Deep Frequency
Filtering (DFF) for learning domain-generalizable features, which is the first
endeavour to explicitly modulate the frequency components of different transfer
difficulties across domains in the latent space during training. To achieve
this, we perform Fast Fourier Transform (FFT) for the feature maps at different
layers, then adopt a light-weight module to learn attention masks from the
frequency representations after FFT to enhance transferable components while
suppressing the components not conducive to generalization. Further, we
empirically compare the effectiveness of adopting different types of attention
designs for implementing DFF. Extensive experiments demonstrate the
effectiveness of our proposed DFF and show that applying our DFF on a plain
baseline outperforms the state-of-the-art methods on different domain
generalization tasks, including closed-set classification and open-set
retrieval.
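
As a rough illustration of the mechanism described in the abstract (FFT on intermediate feature maps, a light-weight module that learns an attention mask over the frequency representation, then an inverse transform), below is a minimal PyTorch-style sketch. The module name, the 1x1-conv mask predictor, and the use of the magnitude spectrum as the mask input are assumptions made for illustration; the paper compares several attention designs, and its actual DFF module may differ.

```python
import torch
import torch.nn as nn


class FrequencyFilterBlock(nn.Module):
    """Hypothetical sketch of frequency-space filtering of a feature map.

    Following the abstract: FFT the feature map, predict an attention mask
    over the frequency representation with a light-weight module, multiply,
    then inverse-FFT back to the spatial domain.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Light-weight mask predictor on the magnitude spectrum
        # (the 1x1-conv design and the magnitude input are assumptions).
        self.mask_net = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-frequency attention weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) real-valued feature map
        freq = torch.fft.fft2(x, norm="ortho")    # complex spectrum
        mask = self.mask_net(freq.abs())          # attention over frequencies
        filtered = freq * mask                    # enhance / suppress components
        return torch.fft.ifft2(filtered, norm="ortho").real


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    block = FrequencyFilterBlock(channels=64)
    print(block(feat).shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch the sigmoid mask plays the role of the learned attention over frequency components: values near 1 keep a component (transferable), values near 0 suppress it (not conducive to generalization).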
Related papers
- Frequency-Aware Deepfake Detection: Improving Generalizability through
Frequency Space Learning [81.98675881423131] (arXiv, 2024-03-12)
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
- Adaptive Frequency Filters As Efficient Global Token Mixers [100.27957692579892] (arXiv, 2023-07-26)
We show that adaptive frequency filters can serve as efficient global token mixers.
We take AFF token mixers as primary neural operators to build a lightweight neural network, dubbed AFFNet (a toy sketch of frequency-domain token mixing follows this list).
- FAN-Net: Fourier-Based Adaptive Normalization For Cross-Domain Stroke
Lesion Segmentation [17.150527504559594] (arXiv, 2023-04-23)
We propose FAN-Net, a novel U-Net-based segmentation network that incorporates Fourier-based adaptive normalization (FAN).
The experimental results on the ATLAS dataset, which consists of MR images from 9 sites, show the superior performance of the proposed FAN-Net compared with baseline methods.
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397] (arXiv, 2022-11-26)
We study deep neural networks designed to harness structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: Transform Once (T1).
- Adaptive Frequency Learning in Two-branch Face Forgery Detection [66.91715092251258] (arXiv, 2022-03-27)
We propose to adaptively learn frequency information within a two-branch detection framework, dubbed AFD.
We free the network from fixed frequency transforms and achieve better performance with data- and task-dependent transform layers.
- Learnable Multi-level Frequency Decomposition and Hierarchical Attention
Mechanism for Generalized Face Presentation Attack Detection [7.324459578044212] (arXiv, 2021-09-16)
Face presentation attack detection (PAD) is attracting a lot of attention and playing a key role in securing face recognition systems.
We propose a dual-stream convolutional neural network (CNN) framework to deal with unseen scenarios.
We validate the design of our proposed PAD solution through a step-wise ablation study.
- iffDetector: Inference-aware Feature Filtering for Object Detection [70.8678270164057] (arXiv, 2020-06-23)
We introduce a generic Inference-aware Feature Filtering (IFF) module that can easily be combined with modern detectors.
IFF performs closed-loop optimization by leveraging high-level semantics to enhance the convolutional features.
IFF can be fused with CNN-based object detectors in a plug-and-play manner with negligible computational cost overhead.
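
As a follow-up to the "Adaptive Frequency Filters As Efficient Global Token Mixers" entry above, here is a minimal toy check (not taken from AFFNet; the sequence length and values are arbitrary) of why a learned point-wise filter in the frequency domain behaves as a global token mixer: by the DFT convolution theorem, multiplying a sequence's spectrum element-wise and inverting equals a circular convolution that mixes every token with every other token, at O(N log N) cost.

```python
import torch

N = 8
tokens = torch.randn(N)   # a toy 1-D token sequence
kernel = torch.randn(N)   # a global (circular) mixing kernel

# Frequency-domain path: point-wise multiply the spectra, then invert.
mixed_freq = torch.fft.ifft(torch.fft.fft(tokens) * torch.fft.fft(kernel)).real

# Spatial-domain path: explicit circular convolution over all tokens.
mixed_spatial = torch.stack([
    sum(tokens[m] * kernel[(n - m) % N] for m in range(N)) for n in range(N)
])

print(torch.allclose(mixed_freq, mixed_spatial, atol=1e-4))  # expected: True
```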
This list is automatically generated from the titles and abstracts of the papers in this site.