FDNet: A Deep Learning Approach with Two Parallel Cross Encoding
Pathways for Precipitation Nowcasting
- URL: http://arxiv.org/abs/2105.02585v1
- Date: Thu, 6 May 2021 11:18:24 GMT
- Authors: Bi-Ying Yan and Chao Yang and Feng Chen and Kohei Takeda and Changjun Wang
- Abstract summary: We introduce Flow-Deformation Network (FDNet), a neural network that models flow and deformation in two parallel cross pathways.
We evaluate the proposed network architecture on two real-world radar echo datasets.
- Score: 7.0521806281607615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the goal of predicting future rainfall intensity in a local
region over a relatively short period of time, precipitation nowcasting has
long been a scientific challenge with great social and economic impact. Radar
echo extrapolation approaches for precipitation nowcasting take radar echo
images as input and aim to generate future radar echo images by learning from
the historical ones. To effectively handle the complex and highly
non-stationary evolution of radar echoes, we propose to decompose the movement
into optical flow field motion and morphologic deformation. Following this
idea, we introduce the Flow-Deformation Network (FDNet), a neural network that
models flow and deformation in two parallel cross pathways. The flow encoder
captures the optical flow field motion between consecutive images, while the
deformation encoder distinguishes the change of shape from the translational
motion of radar echoes. We evaluate the proposed network architecture on two
real-world radar echo datasets, where our model achieves state-of-the-art
prediction results compared with recent approaches. To the best of our
knowledge, this is the first network architecture to separate flow and
deformation when modeling the evolution of radar echoes for precipitation
nowcasting. We believe the general idea of this work could not only inspire
more effective approaches but also be applied to other similar spatiotemporal
prediction tasks.
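The paper's core idea, splitting frame-to-frame change into translational flow plus a residual morphologic deformation, can be illustrated with a toy decomposition. The sketch below is not FDNet itself (whose two encoders are learned networks): it uses global phase correlation as a crude stand-in for the optical flow pathway and treats the flow-compensated residual as the deformation pathway. All function names are illustrative.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate a global translation between two echo frames via phase
    correlation: a crude, global stand-in for the dense optical flow
    field that FDNet's flow encoder would learn."""
    cross = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    cross /= np.abs(cross) + 1e-12            # normalize to pure phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape                          # wrap into signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def decompose(prev, curr):
    """Split the change between consecutive frames into translational
    motion plus a residual deformation, mirroring the two-pathway idea."""
    dy, dx = estimate_shift(prev, curr)
    warped = np.roll(prev, (dy, dx), axis=(0, 1))  # flow-compensated frame
    deformation = curr - warped        # what translation cannot explain
    return (dy, dx), deformation
```

For a frame that is a pure translation of its predecessor, the deformation residual is zero; any change of shape (growth, decay, splitting of echo cells) ends up in the residual, which is the quantity the deformation encoder is meant to model.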
Related papers
- TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion [54.46664104437454]
We propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion.
Specifically, the graph-based Radar structure extractor and the pyramid-based Radar fusion module are designed.
Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%.
arXiv Detail & Related papers (2025-04-16T05:25:04Z)
- SpINR: Neural Volumetric Reconstruction for FMCW Radars [0.15193212081459279]
We introduce SpINR, a novel framework for volumetric reconstruction using Frequency-Modulated Continuous-Wave (FMCW) radar data.
We demonstrate that SpINR significantly outperforms classical backprojection methods and existing learning-based approaches.
arXiv Detail & Related papers (2025-03-30T04:44:57Z)
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance within comparable computational time and state-of-the-art view synthesis performance with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- A Multi-Graph Convolutional Neural Network Model for Short-Term Prediction of Turning Movements at Signalized Intersections [0.6215404942415159]
This study introduces a novel deep learning architecture, referred to as the multigraph convolution neural network (MGCNN) for turning movement prediction at intersections.
The proposed architecture combines a multigraph structure, built to model temporal variations in traffic data, with a spectral convolution operation to support modeling the spatial variations in traffic data over the graphs.
The model's ability to perform short-term predictions over 1, 2, 3, 4, and 5 minutes into the future was evaluated against four baseline state-of-the-art models.
arXiv Detail & Related papers (2024-06-02T05:41:25Z)
- Forward Flow for Novel View Synthesis of Dynamic Scenes [97.97012116793964]
We propose a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping.
Our method outperforms existing methods in both novel view rendering and motion modeling.
arXiv Detail & Related papers (2023-09-29T16:51:06Z)
- A Novel Deep Neural Network for Trajectory Prediction in Automated Vehicles Using Velocity Vector Field [12.067838086415833]
This paper proposes a novel technique for trajectory prediction that combines a data-driven learning-based method with a velocity vector field (VVF) generated from a nature-inspired concept.
The accuracy remains consistent with decreasing observation windows, which alleviates the requirement of a long history of past observations for accurate trajectory prediction.
arXiv Detail & Related papers (2023-09-19T22:14:52Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data [37.69303106863453]
We propose a novel approach for phase-resolved wave surface reconstruction using neural networks.
Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids.
arXiv Detail & Related papers (2023-05-18T12:30:26Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose velocity-aware streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Waveform Selection for Radar Tracking in Target Channels With Memory via Universal Learning [14.796960833031724]
Adapting the radar's waveform using partial information about the state of the scene has been shown to provide performance benefits in many practical scenarios.
This work examines a radar system which builds a compressed model of the radar-environment interface in the form of a context-tree.
The proposed approach is tested in a simulation study, and is shown to provide tracking performance improvements over two state-of-the-art waveform selection schemes.
arXiv Detail & Related papers (2021-08-02T21:27:56Z)
- Incorporating Kinematic Wave Theory into a Deep Learning Method for High-Resolution Traffic Speed Estimation [3.0969191504482243]
We propose a kinematic wave based Deep Convolutional Neural Network (Deep CNN) to estimate high resolution traffic speed dynamics from sparse probe vehicle trajectories.
We introduce two key approaches that allow us to incorporate kinematic wave theory principles to improve the robustness of existing learning-based estimation methods.
arXiv Detail & Related papers (2021-02-04T21:51:25Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.