Local Attention Transformers for High-Detail Optical Flow Upsampling
- URL: http://arxiv.org/abs/2412.06439v1
- Date: Mon, 09 Dec 2024 12:30:59 GMT
- Title: Local Attention Transformers for High-Detail Optical Flow Upsampling
- Authors: Alexander Gielisse, Nergis Tömen, Jan van Gemert
- Abstract summary: We show and discuss several issues and limitations of the currently widely adopted convex upsampling approach.
We propose to decouple the weights for the final convex upsampler, making it easier to find the correct convex combination.
We increase the convex mask size by using an attention-based alternative convex upsampler.
- Score: 52.68929881957646
- Abstract: Most recent works on optical flow use convex upsampling as the last step to obtain high-resolution flow. In this work, we show and discuss several issues and limitations of this currently widely adopted convex upsampling approach, and propose a series of changes to resolve them. First, we propose to decouple the weights for the final convex upsampler, making it easier to find the correct convex combination. For the same reason, we also provide extra contextual features to the convex upsampler. Then, we increase the convex mask size by using an attention-based alternative convex upsampler: Transformers for Convex Upsampling. This upsampler is based on the observation that convex upsampling can be reformulated as attention, and we propose to use local attention masks as a drop-in replacement for convex masks to increase the mask size. We provide empirical evidence that a larger mask size increases the likelihood that the correct convex combination exists. Lastly, we propose an alternative training scheme to remove bilinear interpolation artifacts from the model output. Our proposed ideas could theoretically be applied to almost every current state-of-the-art optical flow architecture. On the FlyingChairs + FlyingThings3D training setting, we reduce the Sintel Clean training end-point error of RAFT from 1.42 to 1.26, of GMA from 1.31 to 1.18, and of FlowFormer from 0.94 to 0.90, solely by adapting the convex upsampler.
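For context, the convex upsampler discussed in the abstract is the RAFT-style one: a network predicts, for every fine-resolution pixel, softmax (hence convex) weights over a 3x3 window of the coarse flow, which is exactly local attention with the convex mask as attention weights. A minimal PyTorch sketch of that baseline follows; the mask-predicting network is omitted, and shapes follow RAFT's factor-8 grid:

```python
import torch
import torch.nn.functional as F

def convex_upsample(flow, mask, factor=8):
    """flow: (N, 2, H, W) coarse flow; mask: (N, factor*factor*9, H, W) logits."""
    N, _, H, W = flow.shape
    mask = mask.view(N, 1, 9, factor, factor, H, W)
    mask = torch.softmax(mask, dim=2)              # convex weights over the 3x3 window

    # "Values": each coarse pixel's 3x3 neighborhood of (rescaled) flow vectors.
    up_flow = F.unfold(factor * flow, kernel_size=3, padding=1)   # (N, 2*9, H*W)
    up_flow = up_flow.view(N, 2, 9, 1, 1, H, W)

    # Attention-style weighted sum over the window dimension.
    up_flow = torch.sum(mask * up_flow, dim=2)     # (N, 2, factor, factor, H, W)
    up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)    # interleave sub-pixel offsets
    return up_flow.reshape(N, 2, factor * H, factor * W)

coarse = torch.randn(1, 2, 46, 62)                 # toy coarse grid at 1/8 resolution
mask = torch.randn(1, 8 * 8 * 9, 46, 62)           # normally predicted by the network
print(convex_upsample(coarse, mask).shape)         # torch.Size([1, 2, 368, 496])
```

Enlarging the fixed 3x3 window in this formulation is precisely where the paper's local attention masks come in as a drop-in replacement for the convex masks.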
Related papers
- Adaptive Selection of Sampling-Reconstruction in Fourier Compressed Sensing [13.775902519100075]
Compressed sensing (CS) has emerged to overcome the inefficiency of Nyquist sampling.
Deep learning-based reconstruction has been a promising alternative to optimization-based reconstruction.
arXiv Detail & Related papers (2024-09-18T06:51:29Z)
- Mip-Splatting: Alias-free 3D Gaussian Splatting [52.366815964832426]
3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency.
Strong artifacts can be observed when changing the sampling rate, e.g., by changing the focal length or camera distance.
We find that the source for this phenomenon can be attributed to the lack of 3D frequency constraints and the usage of a 2D dilation filter.
arXiv Detail & Related papers (2023-11-27T13:03:09Z)
- Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent with Learned Distance Functions [77.32043242988738]
We propose a new framework for accurate point cloud upsampling that supports arbitrary upsampling rates.
Our method first interpolates the low-res point cloud according to a given upsampling rate, then refines the point positions via gradient descent on a learned distance function.
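A toy sketch of that two-stage recipe, with `dist_net` as a hypothetical stand-in for the learned point-to-surface distance function (an analytic sphere in the usage line), and jittered replication standing in for the paper's interpolation step:

```python
import torch
import torch.nn.functional as F

def upsample_points(points, rate, dist_net, steps=10, lr=0.01):
    """points: (M, 3) sparse cloud -> (rate*M, 3) refined dense cloud."""
    # Stage 1: initialize a dense cloud (jittered copies stand in for interpolation).
    dense = points.repeat(rate, 1) + 0.05 * torch.randn(rate * points.shape[0], 3)

    # Stage 2: gradient descent pushes every point toward the zero set of dist_net.
    dense = dense.detach().requires_grad_(True)
    opt = torch.optim.SGD([dense], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dist_net(dense).pow(2).sum().backward()    # squared distance to the surface
        opt.step()
    return dense.detach()

# Toy usage: the unit sphere's signed distance plays the "learned" field.
sphere_dist = lambda p: p.norm(dim=-1) - 1.0
sparse = F.normalize(torch.randn(256, 3), dim=-1)  # 256 points on the sphere
print(upsample_points(sparse, rate=4, dist_net=sphere_dist).shape)  # (1024, 3)
```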
arXiv Detail & Related papers (2023-04-24T06:36:35Z)
- FInC Flow: Fast and Invertible $k \times k$ Convolutions for Normalizing Flows [2.156373334386171]
Invertible convolutions have been an essential element for building expressive normalizing flow-based generative models.
We propose a $k \times k$ convolutional layer and a Deep Normalizing Flow architecture.
arXiv Detail & Related papers (2023-01-23T04:31:03Z)
- Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via an approximation of posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
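A minimal sketch of that posterior-sampling approximation in DDPM form: estimate the clean sample from the current iterate via Tweedie's formula, measure the data misfit through the forward operator, and nudge the reverse step along its gradient. The `denoiser`, noise schedule, and measurement operator `A` below are toy stand-ins, not the paper's exact solver:

```python
import torch

def guided_step(x_t, t, y, denoiser, A, alphas, alpha_bars, zeta=0.5):
    """One reverse step x_t -> x_{t-1} with data-fidelity guidance."""
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                                     # predicted noise
    ab = alpha_bars[t]
    x0_hat = (x_t - (1 - ab).sqrt() * eps) / ab.sqrt()         # Tweedie estimate of x_0
    misfit = (y - A(x0_hat)).pow(2).sum()                      # ||y - A(x0_hat)||^2
    grad = torch.autograd.grad(misfit, x_t)[0]                 # likelihood-gradient proxy
    a = alphas[t]
    mean = (x_t - (1 - a) / (1 - ab).sqrt() * eps) / a.sqrt()  # DDPM reverse mean (noise omitted)
    return (mean - zeta * grad).detach()

# Toy usage: a no-op noise predictor and a subsampling measurement operator.
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)
denoiser = lambda x, t: torch.zeros_like(x)       # stand-in for a trained network
A = lambda x: x[:, ::2]                           # keep every other coordinate
x_prev = guided_step(torch.randn(1, 16), 50, torch.zeros(1, 8), denoiser, A, alphas, alpha_bars)
print(x_prev.shape)                               # torch.Size([1, 16])
```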
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
- BIMS-PU: Bi-Directional and Multi-Scale Point Cloud Upsampling [60.257912103351394]
We develop a new point cloud upsampling pipeline called BIMS-PU.
We decompose the up/downsampling procedure into several up/downsampling sub-steps by breaking the target sampling factor into smaller factors.
We show that our method achieves superior results to state-of-the-art approaches.
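A toy sketch of the factor-decomposition idea, with `sub_upsample` as a hypothetical stand-in for one learned sub-step: break the target rate into small factors and apply one sub-step per factor:

```python
import torch

def factorize(rate):
    """Break an upsampling rate into small prime factors, e.g. 4 -> [2, 2]."""
    factors, f = [], 2
    while rate > 1:
        while rate % f == 0:
            factors.append(f)
            rate //= f
        f += 1
    return factors

def sub_upsample(points, f):
    # Hypothetical stand-in for one learned up-sampling sub-module.
    return points.repeat_interleave(f, dim=0) + 0.01 * torch.randn(f * points.shape[0], 3)

def multiscale_upsample(points, rate):
    for f in factorize(rate):              # one sub-step per small factor
        points = sub_upsample(points, f)
    return points

print(multiscale_upsample(torch.randn(100, 3), 4).shape)  # torch.Size([400, 3])
```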
arXiv Detail & Related papers (2022-06-25T13:13:37Z)
- Anti-aliasing Deep Image Classifiers using Novel Depth Adaptive Blurring and Activation Function [7.888131635057012]
Deep convolutional networks are vulnerable to image translation or shift.
The textbook solution is low-pass filtering before down-sampling.
We show that Depth Adaptive Blurring is more effective than monotonic blurring.
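For reference, a minimal sketch of the textbook baseline mentioned above, in the spirit of blur-pooling: a fixed binomial low-pass filter applied before striding. This is the monotonic-blurring baseline, not the paper's depth-adaptive variant:

```python
import torch
import torch.nn.functional as F

def blur_downsample(x, stride=2):
    """x: (N, C, H, W). Blur with a 3x3 binomial kernel, then subsample."""
    k = torch.tensor([1.0, 2.0, 1.0])
    kernel = (k[:, None] * k[None, :]) / 16.0            # separable low-pass filter
    C = x.shape[1]
    kernel = kernel.view(1, 1, 3, 3).repeat(C, 1, 1, 1)  # one filter per channel
    x = F.conv2d(x, kernel, padding=1, groups=C)         # anti-alias first ...
    return x[:, :, ::stride, ::stride]                   # ... then down-sample

print(blur_downsample(torch.randn(1, 8, 32, 32)).shape)  # torch.Size([1, 8, 16, 16])
```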
arXiv Detail & Related papers (2021-10-03T01:00:52Z)
- PC2-PU: Patch Correlation and Position Correction for Effective Point Cloud Upsampling [12.070762117164092]
Point cloud upsampling aims to densify a sparse point set acquired from 3D sensors.
Existing methods perform upsampling on a single patch, ignoring the coherence and relations across the entire surface.
We present a novel method for more effective point cloud upsampling, achieving more robust and improved performance.
arXiv Detail & Related papers (2021-09-20T07:40:20Z)
- Normalized Convolution Upsampling for Refined Optical Flow Estimation [23.652615797842085]
The Normalized Convolution UPsampler (NCUP) is an efficient joint upsampling approach that produces full-resolution flow during the training of optical flow CNNs.
Our approach formulates the upsampling task as a sparse problem and employs normalized convolutional neural networks to solve it.
We achieve state-of-the-art results on the Sintel benchmark with 6% error reduction, and on-par results on the KITTI dataset, while having 7.5% fewer parameters.
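A hedged sketch of plain normalized convolution for sparse-to-dense flow upsampling, the operation NCUP builds on: scatter the coarse flow onto the fine grid with a validity mask, convolve data and mask separately, and divide so that only valid samples contribute. The learned applicability weights of the actual method are replaced by a uniform box filter here:

```python
import torch
import torch.nn.functional as F

def normalized_conv_upsample(flow, factor=4, ksize=5, eps=1e-8):
    """flow: (N, 2, H, W) coarse flow -> (N, 2, factor*H, factor*W) dense flow."""
    N, C, H, W = flow.shape
    dense = torch.zeros(N, C, factor * H, factor * W)
    mask = torch.zeros(N, 1, factor * H, factor * W)
    dense[:, :, ::factor, ::factor] = factor * flow    # scatter the known samples
    mask[:, :, ::factor, ::factor] = 1.0               # ... and mark them valid

    k = torch.ones(1, 1, ksize, ksize)                 # uniform applicability window
    num = F.conv2d(dense.view(N * C, 1, factor * H, factor * W), k,
                   padding=ksize // 2).view(N, C, factor * H, factor * W)
    den = F.conv2d(mask, k, padding=ksize // 2)
    return num / (den + eps)                           # average over valid samples only

print(normalized_conv_upsample(torch.randn(1, 2, 16, 16)).shape)  # (1, 2, 64, 64)
```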
arXiv Detail & Related papers (2021-02-13T18:34:03Z)
- Learning Affinity-Aware Upsampling for Deep Image Matting [83.02806488958399]
We show that learning affinity in upsampling provides an effective and efficient approach to exploit pairwise interactions in deep networks.
In particular, results on the Composition-1k matting dataset show that A2U achieves a 14% relative improvement in the SAD metric against a strong baseline.
Compared with the state-of-the-art matting network, we achieve 8% higher performance with only 40% model complexity.
arXiv Detail & Related papers (2020-11-29T05:09:43Z)