Improving Variational Autoencoder using Random Fourier Transformation: An Aviation Safety Anomaly Detection Case-Study
- URL: http://arxiv.org/abs/2601.01016v1
- Date: Sat, 03 Jan 2026 00:56:14 GMT
- Title: Improving Variational Autoencoder using Random Fourier Transformation: An Aviation Safety Anomaly Detection Case-Study
- Authors: Ata Akbari Asanjan, Milad Memarzadeh, Bryan Matthews, Nikunj Oza
- Abstract summary: We focus on the training process and inference improvements of deep neural networks (DNNs) using Random Fourier Transformation (RFT). We show that models with RFT tend to learn low-frequency and high-frequency features at the same time, whereas conventional DNNs start from low frequencies and gradually learn (if successful) high-frequency features. We showcase our findings with two low-dimensional synthetic datasets for data representation, and an aviation safety dataset, called Dashlink, for reconstruction-based anomaly detection.
- Score: 0.11666234644810891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we focus on the training process and inference improvements of deep neural networks (DNNs), specifically Autoencoders (AEs) and Variational Autoencoders (VAEs), using Random Fourier Transformation (RFT). We further explore the role of RFT in model training behavior using Frequency Principle (F-Principle) analysis and show that models with RFT tend to learn low-frequency and high-frequency features at the same time, whereas conventional DNNs start from low frequencies and gradually learn (if successful) high-frequency features. We focus on reconstruction-based anomaly detection using autoencoders and variational autoencoders and investigate RFT's role. We also introduce a trainable variant of RFT that uses the existing computation graph to train the expansion of RFT instead of leaving it random. We showcase our findings with two low-dimensional synthetic datasets for data representation, and an aviation safety dataset, called Dashlink, for high-dimensional reconstruction-based anomaly detection. The results indicate the superiority of models with Fourier transformation compared to their conventional counterparts and remain inconclusive regarding the benefits of using trainable Fourier transformation in contrast to the random variant.
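The abstract does not reproduce the paper's exact RFT parameterization; as a hedged illustration, a standard Gaussian random Fourier feature embedding (the kind commonly prepended to an autoencoder's input) can be sketched as below, with `num_features` and `sigma` as hypothetical hyperparameters. The trainable variant described in the abstract would correspond to making the projection matrix `B` a learned parameter instead of a fixed random draw.

```python
import numpy as np

def random_fourier_features(x, num_features=64, sigma=1.0, seed=0):
    """Embed inputs x of shape (n, d) into shape (n, 2 * num_features).

    B is drawn once from N(0, sigma^2) and kept fixed; the embedding
    [cos(2*pi*x @ B), sin(2*pi*x @ B)] presents high-frequency structure
    to the encoder from the very start of training.
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, sigma, size=(x.shape[1], num_features))
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

x = np.random.default_rng(1).normal(size=(8, 3))
z = random_fourier_features(x, num_features=16)
print(z.shape)  # (8, 32)
```

The encoder then consumes `z` instead of `x`; because the sin/cos features already oscillate at many frequencies, the network no longer has to synthesize high frequencies from scratch.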
Related papers
- Self-Supervised Learning via Flow-Guided Neural Operator on Time-Series Data [57.85958428020496]
Flow-Guided Neural Operator (FGNO) is a novel framework combining operator learning with flow matching for SSL training. FGNO learns mappings in functional spaces by using the Short-Time Fourier Transform to unify different time resolutions. Unlike prior generative SSL methods that use noisy inputs during inference, we propose using clean inputs for representation extraction while learning representations with noise.
arXiv Detail & Related papers (2026-02-12T18:54:57Z)
- Iterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features [7.1865646765394215]
Spectral bias, the tendency of neural networks to learn low-frequency features first, is a well-known issue. We propose IFeF-PINN, an algorithm for iterative training of PINNs with Fourier-enhanced features.
arXiv Detail & Related papers (2025-10-22T09:17:37Z)
- Quantum Meets SAR: A Novel Range-Doppler Algorithm for Next-Gen Earth Observation [0.0]
This paper presents a Quantum Range Doppler Algorithm (QRDA) to accelerate processing compared to the classical FFT. It introduces a quantum implementation of the Range Cell Migration Correction (RCMC) in the Fourier domain, a critical step in the RDA pipeline. The performance of the quantum RCMC is evaluated and compared against its classical counterpart, demonstrating the potential of quantum computing in advanced SAR imaging.
arXiv Detail & Related papers (2025-04-02T15:40:12Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Fourier-DeepONet: Fourier-enhanced deep operator networks for full waveform inversion with improved accuracy, generalizability, and robustness [4.186792090302649]
Full waveform inversion (FWI) infers structural information from waveform data by solving a non-convex optimization problem.
Here, we develop a neural network (Fourier-DeepONet) for FWI with the generalization of sources, including the frequencies and locations of sources.
Our experiments demonstrate that Fourier-DeepONet obtains more accurate predictions of subsurface structures in a wide range of source parameters.
arXiv Detail & Related papers (2023-05-26T22:17:28Z)
- Fourier Sensitivity and Regularization of Computer Vision Models [11.79852671537969]
We study the frequency sensitivity characteristics of deep neural networks using a principled approach.
We find that computer vision models are consistently sensitive to particular frequencies dependent on the dataset, training method and architecture.
arXiv Detail & Related papers (2023-01-31T10:05:35Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Fourier Disentangled Space-Time Attention for Aerial Video Recognition [54.80846279175762]
We present an algorithm, Fourier Activity Recognition (FAR), for UAV video activity recognition.
Our formulation uses a novel Fourier object disentanglement method to innately separate out the human agent from the background.
We have evaluated our approach on multiple UAV datasets including UAV Human RGB, UAV Human Night, Drone Action, and NEC Drone.
arXiv Detail & Related papers (2022-03-21T01:24:53Z)
- Simpler is better: spectral regularization and up-sampling techniques for variational autoencoders [1.2234742322758418]
The characterization of the spectral behavior of generative models based on neural networks remains an open issue.
Recent research has focused heavily on generative adversarial networks and the high-frequency discrepancies between real and generated images.
We propose a simple 2D Fourier transform-based spectral regularization loss for Variational Autoencoders (VAEs).
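The summary does not spell out the loss; a minimal sketch of one plausible form, assuming the regularizer compares log-magnitude 2D Fourier spectra of real and reconstructed image batches (not necessarily the paper's exact formulation):

```python
import numpy as np

def spectral_reg_loss(real, recon):
    """Illustrative spectral regularization term: mean absolute difference
    between log-magnitude 2D Fourier spectra of image batches (n, h, w)."""
    real_mag = np.log1p(np.abs(np.fft.fft2(real, axes=(-2, -1))))
    recon_mag = np.log1p(np.abs(np.fft.fft2(recon, axes=(-2, -1))))
    return float(np.mean(np.abs(real_mag - recon_mag)))

imgs = np.random.default_rng(0).random((4, 16, 16))
print(spectral_reg_loss(imgs, imgs))  # identical inputs -> 0.0
```

Such a term would be added to the usual VAE reconstruction-plus-KL objective with a small weight, penalizing reconstructions whose frequency content drifts from the data's.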
arXiv Detail & Related papers (2022-01-19T11:49:57Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
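The CP-decomposed parameterization above can be sketched as follows; this is a hypothetical minimal reconstruction of a weight tensor from rank-R factor matrices (the Rank-R FNN learns the small factors rather than the full tensor):

```python
import numpy as np

def cp_weight(factors):
    """Rebuild a tensor from rank-R CP factors.

    factors: list of (dim_k, R) matrices, one per tensor mode; the full
    tensor is the sum of R outer products of the factor columns, so only
    R * sum(dim_k) parameters are stored instead of prod(dim_k).
    """
    R = factors[0].shape[1]
    W = np.zeros([f.shape[0] for f in factors])
    for r in range(R):
        outer = factors[0][:, r]
        for f in factors[1:]:
            outer = np.multiply.outer(outer, f[:, r])
        W += outer
    return W

a = np.array([[1.0], [2.0]])       # mode-1 factor, rank 1
b = np.array([[3.0], [4.0], [5.0]])  # mode-2 factor, rank 1
W = cp_weight([a, b])  # for two modes and R=1 this is just an outer product
```

With two modes and R=1 the reconstruction reduces to `np.outer(a, b)`, which makes the parameter savings easy to see: 5 stored numbers instead of 6, with the gap growing rapidly for higher-order tensors.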
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Random Feature Attention [69.4671822971207]
We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function.
RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism.
Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines.
arXiv Detail & Related papers (2021-03-03T02:48:56Z)
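The kernel trick behind random-feature attention can be sketched as below. Note this demo uses Performer-style positive random features for the exponential kernel rather than RFA's own sin/cos features, purely to keep the normalizer positive in a short example; the linear-time structure, summing key/value statistics once and reusing them for every query, is the same idea.

```python
import numpy as np

def linear_attention(q, k, v, num_features=256, seed=0):
    """Approximate softmax attention in time linear in sequence length.

    exp(q . k) is estimated as E[phi(q) * phi(k)] using positive random
    features phi(x) = exp(x @ w - |x|^2 / 2) / sqrt(m), w ~ N(0, I), so
    the (n_q x n_k) attention matrix is never materialized.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(q.shape[-1], num_features))

    def phi(x):
        return np.exp(x @ w - np.sum(x**2, axis=-1, keepdims=True) / 2.0) \
            / np.sqrt(num_features)

    qf, kf = phi(q), phi(k)
    numerator = qf @ (kf.T @ v)        # (n_q, d_v): keys/values summed once
    denominator = qf @ kf.sum(axis=0)  # (n_q,): positive normalizer
    return numerator / denominator[:, None]

rng = np.random.default_rng(2)
q = rng.normal(scale=0.5, size=(5, 4))
k = rng.normal(scale=0.5, size=(7, 4))
v = rng.normal(size=(7, 3))
out = linear_attention(q, k, v)  # shape (5, 3)
```

Because the random features are strictly positive, each output row is an exact convex combination of the value rows, mirroring softmax attention's behavior while costing O(n) instead of O(n^2) in sequence length.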
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.