From Swath to Full-Disc: Advancing Precipitation Retrieval with Multimodal Knowledge Expansion
- URL: http://arxiv.org/abs/2506.07050v1
- Date: Sun, 08 Jun 2025 09:15:46 GMT
- Title: From Swath to Full-Disc: Advancing Precipitation Retrieval with Multimodal Knowledge Expansion
- Authors: Zheng Wang, Kai Ying, Bin Xu, Chunjiao Wang, Cong Bai
- Abstract summary: PRE-Net aims to enable accurate, infrared-based full-disc precipitation retrievals beyond the scanning swath. PRE-Net transfers knowledge from a multimodal data integration model to an infrared-based model within the scanning swath. In the Full-Disc Adaptation stage, Self-MaskTune refines predictions across the full disc by balancing multimodal and full-disc infrared knowledge.
- Score: 11.443489040028645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate near-real-time precipitation retrieval has been enhanced by satellite-based technologies. However, infrared-based algorithms have low accuracy due to weak relations with surface precipitation, whereas passive microwave and radar-based methods are more accurate but limited in range. This challenge motivates the Precipitation Retrieval Expansion (PRE) task, which aims to enable accurate, infrared-based full-disc precipitation retrievals beyond the scanning swath. We introduce Multimodal Knowledge Expansion, a two-stage pipeline with the proposed PRE-Net model. In the Swath-Distilling stage, PRE-Net transfers knowledge from a multimodal data integration model to an infrared-based model within the scanning swath via Coordinated Masking and Wavelet Enhancement (CoMWE). In the Full-Disc Adaptation stage, Self-MaskTune refines predictions across the full disc by balancing multimodal and full-disc infrared knowledge. Experiments on the introduced PRE benchmark demonstrate that PRE-Net significantly advances precipitation retrieval performance, outperforming leading products such as PERSIANN-CCS, PDIR, and IMERG. The code will be available at https://github.com/Zjut-MultimediaPlus/PRE-Net.
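The two-stage pipeline described in the abstract can be caricatured in a few lines: stage one matches an infrared-only student to a multimodal teacher only where swath observations exist, and stage two blends distilled and full-disc infrared predictions per pixel. This is a minimal conceptual sketch under assumed array shapes; the function names, the per-pixel confidence gate, and the simple thresholded blend are illustrative assumptions, not the paper's actual CoMWE or Self-MaskTune formulations.

```python
import numpy as np

def swath_distill_loss(student_pred, teacher_pred, swath_mask):
    """Stage 1 sketch (Swath-Distilling): penalize the IR-only student
    against the multimodal teacher only inside the scanning swath.
    swath_mask is 1 where PMW/radar coverage exists, 0 elsewhere."""
    sq_err = (student_pred - teacher_pred) ** 2
    return float((sq_err * swath_mask).sum() / max(swath_mask.sum(), 1))

def self_masktune(distilled_pred, ir_pred, confidence, tau=0.5):
    """Stage 2 sketch (Full-Disc Adaptation): per pixel, keep the
    distilled prediction where confidence is high and fall back to
    the full-disc infrared prediction elsewhere."""
    keep = (confidence >= tau).astype(float)
    return keep * distilled_pred + (1.0 - keep) * ir_pred
```

In practice the masking, wavelet enhancement, and tuning schedule would replace the hard threshold used here; the sketch only conveys the swath-restricted distillation and full-disc blending structure.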
Related papers
- RaCalNet: Radar Calibration Network for Sparse-Supervised Metric Depth Estimation [14.466573808593887]
RaCalNet is a novel framework that eliminates the need for dense supervision by using sparse LiDAR to supervise the learning of refined radar measurements. RaCalNet produces depth maps with clear object contours and fine-grained textures, demonstrating superior visual quality compared to state-of-the-art dense-supervised methods.
arXiv Detail & Related papers (2025-06-18T15:35:16Z) - DISTA-Net: Dynamic Closely-Spaced Infrared Small Target Unmixing [55.366556355538954]
We propose the Dynamic Iterative Shrinkage Thresholding Network (DISTA-Net), which reconceptualizes traditional sparse reconstruction within a dynamic framework. DISTA-Net is the first deep learning model designed specifically for the unmixing of closely-spaced infrared small targets. We have established the first open-source ecosystem to foster further research in this field.
arXiv Detail & Related papers (2025-05-25T13:52:00Z) - TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion [54.46664104437454]
We propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion. Specifically, the graph-based Radar structure extractor and the pyramid-based Radar fusion module are designed. Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%, respectively.
arXiv Detail & Related papers (2025-04-16T05:25:04Z) - Resource-Efficient Beam Prediction in mmWave Communications with Multimodal Realistic Simulation Framework [57.994965436344195]
Beamforming is a key technology in millimeter-wave (mmWave) communications that improves signal transmission by optimizing directionality and intensity. Multimodal sensing-aided beam prediction has gained significant attention, using various sensing data to predict user locations or network conditions. Despite its promising potential, the adoption of multimodal sensing-aided beam prediction is hindered by high computational complexity, high costs, and limited datasets.
arXiv Detail & Related papers (2025-04-07T15:38:25Z) - 10K is Enough: An Ultra-Lightweight Binarized Network for Infrared Small-Target Detection [48.074211420276605]
Binarized neural networks (BNNs) are distinguished by their exceptional efficiency in model compression. We propose the Binarized Infrared Small-Target Detection Network (BiisNet). BiisNet preserves the core operations of binarized convolutions while integrating full-precision features into the network's information flow.
arXiv Detail & Related papers (2025-03-04T14:25:51Z) - Towards Dense and Accurate Radar Perception Via Efficient Cross-Modal Diffusion Model [4.269423698485249]
This paper proposes a novel approach to dense and accurate mmWave radar point cloud construction via cross-modal learning.
Specifically, we introduce diffusion models, which possess state-of-the-art performance in generative modeling, to predict LiDAR-like point clouds from paired raw radar data.
We validate the proposed method through extensive benchmark comparisons and real-world experiments, demonstrating its superior performance and generalization ability.
arXiv Detail & Related papers (2024-03-13T12:20:20Z) - Transforming Observations of Ocean Temperature with a Deep Convolutional Residual Regressive Neural Network [0.0]
Sea surface temperature (SST) is an essential climate variable that can be measured via ground truth, remote sensing, or hybrid model methodologies.
Here, we celebrate SST surveillance progress via the application of a few relevant technological advances from the late 20th and early 21st century.
We develop our existing water cycle observation framework, Flux to Flow (F2F), to fuse AMSR-E and MODIS into a higher resolution product.
Our neural network architecture is constrained to a deep convolutional residual regressive neural network.
arXiv Detail & Related papers (2023-06-16T17:35:11Z) - Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data [37.69303106863453]
We propose a novel approach for phase-resolved wave surface reconstruction using neural networks.
Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids.
arXiv Detail & Related papers (2023-05-18T12:30:26Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - Depth Completion via Inductive Fusion of Planar LIDAR and Monocular Camera [27.978780155504467]
We introduce an inductive late-fusion block which better fuses different sensor modalities inspired by a probability model.
This block uses the dense context features to guide the depth prediction based on demonstrations by sparse depth features.
Our method shows promising results compared to previous approaches on both the benchmark datasets and simulated dataset.
arXiv Detail & Related papers (2020-09-03T18:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.