Fast Fourier Intrinsic Network
- URL: http://arxiv.org/abs/2011.04612v1
- Date: Mon, 9 Nov 2020 18:14:39 GMT
- Title: Fast Fourier Intrinsic Network
- Authors: Yanlin Qian, Miaojing Shi, Joni-Kristian Kämäräinen, and Jiri Matas
- Abstract summary: We propose the Fast Fourier Intrinsic Network, FFI-Net, that operates in the spectral domain.
Weights in FFI-Net are optimized in the spectral domain, allowing faster convergence to a lower error.
It achieves state-of-the-art performance on MPI-Sintel, MIT Intrinsic, and IIW datasets.
- Score: 41.95712986029093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of decomposing an image into albedo and shading. We
propose the Fast Fourier Intrinsic Network, FFI-Net in short, that operates in
the spectral domain, splitting the input into several spectral bands. Weights
in FFI-Net are optimized in the spectral domain, allowing faster convergence to
a lower error. FFI-Net is lightweight and does not need auxiliary networks for
training. The network is trained end-to-end with a novel spectral loss which
measures the global distance between the network prediction and corresponding
ground truth. FFI-Net achieves state-of-the-art performance on MPI-Sintel, MIT
Intrinsic, and IIW datasets.
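The paper does not spell out its spectral loss beyond measuring a global distance between prediction and ground truth in the frequency domain. A minimal sketch of such a loss (our illustration, with the name `spectral_loss` hypothetical, not the authors' formulation):

```python
import numpy as np

def spectral_loss(pred, target):
    # 2-D FFT over the image axes; each frequency bin mixes every pixel,
    # so a per-bin error reflects an image-wide (global) discrepancy
    pred_f = np.fft.fft2(pred)
    target_f = np.fft.fft2(target)
    return np.mean(np.abs(pred_f - target_f) ** 2)

# By Parseval's theorem (unnormalized FFT), the mean spectral error equals
# the number of pixels times the mean spatial error.
rng = np.random.default_rng(0)
x, y = rng.random((8, 8)), rng.random((8, 8))
spatial = np.mean((x - y) ** 2)
spectral = spectral_loss(x, y) / x.size  # divide by pixel count to compare
```

Parseval's identity is why a frequency-domain L2 distance is a legitimate training objective: it is an exact rescaling of the spatial L2 error, while exposing per-band structure that a spatial loss hides.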
Related papers
- Spectral Informed Neural Network: An Efficient and Low-Memory PINN [3.8534287291074354]
We propose a spectral-based neural network that substitutes the differential operator with a multiplication.
Compared to PINNs, our approach requires less memory and a shorter training time.
We provide two strategies to train networks by their spectral information.
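The core trick of substituting a differential operator with a multiplication can be seen on a periodic grid, where d/dx in Fourier space is multiplication by i*k. A minimal numerical sketch (ours, not the paper's network):

```python
import numpy as np

# Differentiate u = sin(x) spectrally on a periodic grid of N points.
N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(x)

# Angular wavenumbers: fftfreq gives cycles per unit length, times 2*pi.
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)

# The derivative is a single element-wise multiply in the spectral domain.
du = np.fft.ifft(1j * k * np.fft.fft(u)).real
```

For smooth periodic functions this is spectrally accurate: here `du` matches the exact derivative cos(x) to floating-point precision, with no finite-difference stencil.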
arXiv Detail & Related papers (2024-08-29T10:21:00Z)
- Multiscale Low-Frequency Memory Network for Improved Feature Extraction in Convolutional Neural Networks [13.815116154370834]
We introduce a novel framework, the Multiscale Low-Frequency Memory (MLFM) Network.
The MLFM efficiently preserves low-frequency information, enhancing performance in targeted computer vision tasks.
Our work builds upon the existing CNN foundations and paves the way for future advancements in computer vision.
arXiv Detail & Related papers (2024-03-13T00:48:41Z)
- TFDMNet: A Novel Network Structure Combines the Time Domain and Frequency Domain Features [34.91485245048524]
This paper proposes a novel Element-wise Multiplication Layer (EML) to replace convolution layers.
We also introduce a Weight Fixation mechanism to alleviate the problem of over-fitting.
Experimental results show that TFDMNet achieves good performance on the MNIST, CIFAR-10, and ImageNet databases.
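Replacing a convolution layer with an element-wise multiplication layer rests on the convolution theorem: circular convolution in the spatial domain equals element-wise multiplication in the frequency domain. A generic numerical check of that identity (our sketch, not the TFDMNet implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(32)   # input signal
w = rng.random(32)   # filter, same length as the signal

# Circular convolution computed directly in the spatial domain.
direct = np.array([sum(x[m] * w[(n - m) % 32] for m in range(32))
                   for n in range(32)])

# The same result from a single element-wise multiply in the FFT domain.
fft_based = np.fft.ifft(np.fft.fft(x) * np.fft.fft(w)).real
```

The frequency-domain form costs O(N log N) instead of O(N^2) and, as the TFDMNet summary notes, makes the filter a plain per-frequency weight that can be trained directly.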
arXiv Detail & Related papers (2024-01-29T08:18:21Z)
- Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes [10.150768420975155]
Due to mutual interference between users, power allocation problems in wireless networks are often non-trivial.
Graph neural networks (GNNs) have recently emerged as a promising approach to these problems, one that exploits the underlying topology of wireless networks.
arXiv Detail & Related papers (2023-03-27T10:59:09Z)
- Network Calculus with Flow Prolongation -- A Feedforward FIFO Analysis enabled by ML [73.11023209243326]
Flow Prolongation (FP) has been shown to improve delay bound accuracy significantly.
We introduce DeepFP, an approach to make FP scale by predicting prolongations using machine learning.
DeepFP reduces delay bounds by 12.1% on average at negligible additional computational cost.
arXiv Detail & Related papers (2022-02-07T08:46:47Z)
- Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning [50.232218214751455]
Optimal network pruning is a non-trivial task; mathematically, it is an NP-hard problem.
In this paper, we investigate the Magnitude-Based Pruning (MBP) scheme and analyze it from a novel perspective.
We also propose a novel two-stage pruning approach, where one stage is to obtain the topological structure of the pruned network and the other stage is to retrain the pruned network to recover the capacity.
arXiv Detail & Related papers (2022-01-30T03:42:36Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
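Embedding inputs into a Fourier basis means projecting the state through a matrix and taking sin/cos of each projection. A minimal sketch with a random projection matrix (in the paper the basis is learned; names and sizes here are illustrative assumptions):

```python
import numpy as np

def fourier_features(state, B):
    # Project the state through B, then take sin and cos of each projection;
    # B's rows act as frequencies of the resulting basis functions.
    proj = 2 * np.pi * state @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 4))        # maps a 4-D state to a 128-D embedding
state = rng.normal(size=4)
phi = fourier_features(state, B)    # bounded features in [-1, 1]
```

The downstream policy or value network then consumes `phi` instead of the raw state; the scale of `B` controls how high-frequency the embedding is.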
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
- Towards Theoretical Understanding of Flexible Transmitter Networks via Approximation and Local Minima [74.30120779041428]
We study the theoretical properties of one-hidden-layer FTNet from the perspectives of approximation and local minima.
Our results indicate that FTNet can efficiently express target functions and has no concern about local minima.
arXiv Detail & Related papers (2021-11-11T02:41:23Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Acceleration of Convolutional Neural Network Using FFT-Based Split Convolutions [11.031841470875571]
Convolutional neural networks (CNNs) have a large number of variables and hence suffer from high implementation complexity.
Recent studies on Fast Fourier Transform (FFT)-based CNNs aim at simplifying the computations required for the FFT.
In this paper, a new method for CNN processing in the FFT domain is proposed, which is based on input splitting.
arXiv Detail & Related papers (2020-03-27T20:16:57Z)
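A standard way to realize FFT convolution with input splitting is the overlap-add scheme: the input is cut into short blocks, each block is convolved with the filter via a small FFT, and the overlapping tails are summed. The sketch below shows the generic splitting idea (our illustration, not the paper's specific method):

```python
import numpy as np

def overlap_add_conv(x, h, block=64):
    # Convolve a long signal x with a short filter h by FFT-processing
    # fixed-size blocks and adding each block's tail into its neighbor.
    n_fft = block + len(h) - 1          # enough room to avoid wraparound
    H = np.fft.rfft(h, n_fft)           # filter spectrum, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        out = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
        out = out[:len(seg) + len(h) - 1]   # valid part of this block
        y[start:start + len(out)] += out    # overlap-add the tail
    return y

rng = np.random.default_rng(2)
x, h = rng.random(300), rng.random(9)
```

Each block costs an FFT of size `block + len(h) - 1` instead of one large FFT over the whole input, which is the kind of computational simplification input splitting buys.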
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.