Transforming Weather Data from Pixel to Latent Space
- URL: http://arxiv.org/abs/2503.06623v1
- Date: Sun, 09 Mar 2025 13:55:33 GMT
- Title: Transforming Weather Data from Pixel to Latent Space
- Authors: Sijie Zhao, Feng Liu, Xueliang Zhang, Hao Chen, Tao Han, Junchao Gong, Ran Tao, Pengfeng Xiao, Lei Bai, Wanli Ouyang
- Abstract summary: We propose a novel Weather Latent Autoencoder that transforms weather data from pixel space to latent space. We demonstrate its superior compression and reconstruction performance, enabling the creation of the ERA5-latent dataset.
- Score: 57.80389860291812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing impact of climate change and extreme weather events has spurred growing interest in deep learning for weather research. However, existing studies often rely on weather data in pixel space, which presents several challenges, such as overly smooth model outputs, limited applicability to a single pressure-variable subset (PVS), and high data storage and computational costs. To address these challenges, we propose a novel Weather Latent Autoencoder (WLA) that transforms weather data from pixel space to latent space, enabling efficient weather task modeling. By decoupling weather reconstruction from downstream tasks, WLA improves the accuracy and sharpness of weather task model results. The incorporated Pressure-Variable Unified Module transforms multiple PVS into a unified representation, enhancing the model's adaptability across weather scenarios. Furthermore, weather tasks can be performed in WLA's low-storage latent space rather than the high-storage pixel space, significantly reducing data storage and computational costs. Through extensive experimentation, we demonstrate its superior compression and reconstruction performance, enabling the creation of the ERA5-latent dataset with unified representations of multiple PVS from ERA5 data. The compressed full PVS in the ERA5-latent dataset reduces the original 244.34 TB of data to 0.43 TB. Downstream experiments further demonstrate that task models can be applied to multiple PVS with low data costs in latent space and achieve performance superior to that of pixel-space models. Code, ERA5-latent data, and pre-trained models are available at https://anonymous.4open.science/r/Weather-Latent-Autoencoder-8467.
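The pixel-to-latent pipeline is easy to picture in code. Below is a minimal, hypothetical sketch (not the authors' released implementation) of an autoencoder that first projects an arbitrary pressure-variable subset into a unified channel space, then compresses it to a small latent tensor; the module names, the 69-channel maximum, and all sizes are illustrative assumptions.

```python
# Hypothetical sketch of a WLA-style pixel-to-latent autoencoder.
# Not the authors' implementation; shapes and module names are assumptions.
import torch
import torch.nn as nn

class PressureVariableUnifier(nn.Module):
    """Projects any pressure-variable subset (PVS) to a fixed channel count."""
    def __init__(self, max_channels: int, unified_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(max_channels, unified_channels, kernel_size=1)

    def forward(self, x, channel_mask):
        # x: (B, C_max, H, W); channel_mask zeroes out absent variables
        return self.proj(x * channel_mask[:, :, None, None])

class WeatherLatentAutoencoder(nn.Module):
    def __init__(self, max_channels=69, unified=64, latent=8, down=4):
        super().__init__()
        self.unify = PressureVariableUnifier(max_channels, unified)
        enc, ch = [], unified
        for _ in range(down):  # each stage halves H and W
            enc += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.GELU()]
            ch *= 2
        self.encoder = nn.Sequential(*enc, nn.Conv2d(ch, latent, 1))
        dec = [nn.Conv2d(latent, ch, 1)]
        for _ in range(down):  # each stage doubles H and W
            dec += [nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1), nn.GELU()]
            ch //= 2
        self.decoder = nn.Sequential(*dec, nn.Conv2d(ch, max_channels, 3, padding=1))

    def encode(self, x, channel_mask):
        return self.encoder(self.unify(x, channel_mask))

    def decode(self, z):
        return self.decoder(z)

x = torch.randn(2, 69, 128, 256)      # toy ERA5-like field
mask = torch.ones(2, 69)              # all variables present
model = WeatherLatentAutoencoder()
z = model.encode(x, mask)             # (2, 8, 8, 16): compact latent
x_hat = model.decode(z)               # back to pixel space
```

At the reported 244.34 TB to 0.43 TB reduction (roughly 570x), downstream task models read and write only the small latent tensors, which is where the storage and compute savings come from.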
Related papers
- CirT: Global Subseasonal-to-Seasonal Forecasting with Geometry-inspired Transformer [47.65152457550307]
We propose the geometry-inspired Circular Transformer (CirT) to model the cyclic characteristic of the graticule.
Experiments on the ERA5 reanalysis dataset demonstrate that our model yields a significant improvement over advanced data-driven models.
arXiv Detail & Related papers (2025-02-27T04:26:23Z)
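For intuition, one simple way to respect the periodic longitude axis that geometry-aware models like CirT exploit is circular padding. CirT itself is a Transformer over the graticule, so the small convolution below is only a toy stand-in for the wrap-around idea, with all sizes assumed.

```python
# Illustrative only: circular padding makes the longitude axis wrap around.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CircularLonConv(nn.Module):
    """3x3 convolution that wraps in longitude (width axis)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=0)

    def forward(self, x):                             # x: (B, C, lat, lon)
        x = F.pad(x, (1, 1, 0, 0), mode="circular")   # wrap longitude
        x = F.pad(x, (0, 0, 1, 1), mode="replicate")  # clamp at the poles
        return self.conv(x)

field = torch.randn(1, 4, 32, 64)        # toy (lat, lon) grid
out = CircularLonConv(4, 8)(field)       # same spatial size: (1, 8, 32, 64)
```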
- Compressing high-resolution data through latent representation encoding for downscaling large-scale AI weather forecast model [10.634513279883913]
We propose a variational autoencoder framework tailored for compressing high-resolution datasets.
Our framework successfully reduced the storage size of 3 years of HRCLDAS data from 8.61 TB to just 204 GB, while preserving essential information.
arXiv Detail & Related papers (2024-10-10T05:38:03Z)
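A compact sketch of the variational-autoencoder recipe this entry describes, with the reparameterization trick and a KL term; the architecture and the KL weight below are assumptions, not the paper's configuration.

```python
# Minimal convolutional VAE sketch; sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, in_ch=1, latent_ch=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.GELU(),
        )
        self.to_mu = nn.Conv2d(64, latent_ch, 1)
        self.to_logvar = nn.Conv2d(64, latent_ch, 1)
        self.dec = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 1), nn.GELU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        x_hat = self.dec(z)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return F.mse_loss(x_hat, x) + 1e-4 * kl   # beta-VAE-style weighting

loss = ConvVAE()(torch.randn(2, 1, 64, 64))
loss.backward()
```

Storing only the (optionally quantized) latent means is what yields reductions like the reported 8.61 TB to 204 GB (roughly 42x).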
- How far are today's time-series models from real-world weather forecasting applications? [22.68937280154092]
WEATHER-5K is a comprehensive collection of observational weather data that better reflects real-world scenarios.
It enables better training of models and a more accurate assessment of the real-world forecasting capabilities of time-series forecasting (TSF) models.
We provide researchers with a clear assessment of the gap between academic TSF models and real-world weather forecasting applications.
arXiv Detail & Related papers (2024-06-20T15:18:52Z)
- CRA5: Extreme Compression of ERA5 for Portable Global Climate and Weather Research via an Efficient Variational Transformer [22.68937280154092]
We introduce an efficient neural codec, the Variational Autoencoder Transformer (VAEformer), for extreme compression of climate data.
VAEformer outperforms existing state-of-the-art compression methods in the context of climate data.
Experiments show that global weather forecasting models trained on the compact CRA5 dataset achieve forecasting accuracy comparable to the model trained on the original dataset.
arXiv Detail & Related papers (2024-05-06T11:30:55Z)
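In the same vein, a transformer-based autoencoder for gridded fields might look like the sketch below. It is loosely in the spirit of a variational transformer codec but deterministic and with invented sizes, so treat every detail as an assumption.

```python
# Sketch of a Transformer-based autoencoder for gridded-field compression.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, in_ch=69, patch=8, dim=256, latent=32, depth=4):
        super().__init__()
        self.patch = patch
        self.patchify = nn.Conv2d(in_ch, dim, patch, stride=patch)
        make_layer = lambda: nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(make_layer(), depth)
        self.decoder = nn.TransformerEncoder(make_layer(), depth)
        self.to_latent = nn.Linear(dim, latent)      # per-token compression
        self.from_latent = nn.Linear(latent, dim)
        self.unpatchify = nn.ConvTranspose2d(dim, in_ch, patch, stride=patch)

    def forward(self, x):                            # x: (B, C, H, W)
        b, _, h, w = x.shape
        tokens = self.patchify(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        z = self.to_latent(self.encoder(tokens))     # compact per-token code
        y = self.decoder(self.from_latent(z)).transpose(1, 2)
        y = y.reshape(b, -1, h // self.patch, w // self.patch)
        return self.unpatchify(y), z

x = torch.randn(1, 69, 64, 128)
x_hat, z = PatchAutoencoder()(x)     # x_hat matches x's shape; z is the code
```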
- DIRESA, a distance-preserving nonlinear dimension reduction technique based on regularized autoencoders [0.0]
In meteorology, finding similar weather patterns or analogs in historical datasets can be useful for data assimilation, forecasting, and postprocessing.
In climate science, analogs in historical and climate projection data are used for attribution and impact studies.
We propose a dimension reduction technique based on autoencoder (AE) neural networks to compress those datasets and perform the search in an interpretable, compressed latent space.
arXiv Detail & Related papers (2024-04-28T20:54:57Z)
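The distance-preserving idea can be illustrated with a toy objective: reconstruct the input while forcing pairwise distances in latent space to track those in data space. The networks and the regularization weight below are assumptions, not DIRESA's actual configuration.

```python
# Toy distance-preserving autoencoder objective, DIRESA-style.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 64), nn.GELU(), nn.Linear(64, 8))
decoder = nn.Sequential(nn.Linear(8, 64), nn.GELU(), nn.Linear(64, 128))

def distance_preserving_loss(x, lam=0.1):
    z = encoder(x)
    recon = F.mse_loss(decoder(z), x)
    # Regularizer: normalized pairwise distances in latent space should
    # track normalized pairwise distances in the original space.
    dx = F.normalize(torch.pdist(x), dim=0)
    dz = F.normalize(torch.pdist(z), dim=0)
    return recon + lam * F.mse_loss(dz, dx)

batch = torch.randn(32, 128)      # 32 flattened weather states
loss = distance_preserving_loss(batch)
loss.backward()
```

With such a latent space, analog search reduces to nearest-neighbor lookup on the compressed codes.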
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample-specific weather prior extracted by the CLIP image encoder with the distribution-specific information learned by a set of parameters.
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
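One plausible reading of such a module is FiLM-style conditioning on a CLIP image embedding. The sketch below is hypothetical: `clip_embed` stands in for the output of any CLIP image encoder, and the modulation scheme is an assumption rather than the paper's design.

```python
# Hypothetical: modulate restoration features with a CLIP weather prior.
import torch
import torch.nn as nn

class WeatherPriorFiLM(nn.Module):
    """Scales and shifts feature maps conditioned on a CLIP embedding."""
    def __init__(self, clip_dim=512, feat_ch=64):
        super().__init__()
        self.to_scale = nn.Linear(clip_dim, feat_ch)
        self.to_shift = nn.Linear(clip_dim, feat_ch)

    def forward(self, feats, clip_embed):        # feats: (B, C, H, W)
        scale = self.to_scale(clip_embed)[:, :, None, None]
        shift = self.to_shift(clip_embed)[:, :, None, None]
        return feats * (1 + scale) + shift

feats = torch.randn(2, 64, 32, 32)     # restoration-network features
clip_embed = torch.randn(2, 512)       # stand-in for a CLIP image embedding
cond = WeatherPriorFiLM()(feats, clip_embed)
```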
- ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z)
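The pretrain-then-fine-tune workflow the ClimaX entry describes follows a generic pattern, sketched below with a stand-in backbone and masked-reconstruction objective rather than the actual ClimaX architecture or training recipe.

```python
# Generic self-supervised pretrain, then fine-tune; all pieces are stand-ins.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(256, 512), nn.GELU(), nn.Linear(512, 512))

# 1) Self-supervised pretraining: reconstruct randomly masked inputs.
recon_head = nn.Linear(512, 256)
x = torch.randn(8, 256)
mask = torch.rand_like(x) < 0.5
pretrain_loss = nn.functional.mse_loss(recon_head(backbone(x * mask)), x)
pretrain_loss.backward()

# 2) Fine-tuning: freeze the backbone, swap in a task-specific head.
for p in backbone.parameters():
    p.requires_grad = False
task_head = nn.Linear(512, 10)          # e.g., 10 forecast targets
task_loss = nn.functional.mse_loss(
    task_head(backbone(torch.randn(8, 256))), torch.randn(8, 10))
task_loss.backward()
```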
- A Novel Transformer Network with Shifted Window Cross-Attention for Spatiotemporal Weather Forecasting [5.414308305392762]
We tackle the challenge of weather forecasting using a video transformer network.
Vision transformer architectures have been explored in various applications.
We propose the use of Video Swin-Transformer, coupled with a dedicated augmentation scheme.
arXiv Detail & Related papers (2022-08-02T05:04:53Z)
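A sketch of how a video transformer can be wrapped for next-frame weather prediction. torchvision's Swin3D-T (available in torchvision 0.15 and later) is used here as a convenient stand-in; the regression head, frame sizes, and clip length are assumptions, not the paper's setup.

```python
# Wrap a video Swin transformer as a next-frame regressor (illustrative).
import torch
import torch.nn as nn
from torchvision.models.video import swin3d_t

class VideoForecaster(nn.Module):
    def __init__(self, out_pixels=64 * 64):
        super().__init__()
        self.backbone = swin3d_t(weights=None)
        # Replace the classification head with a flat-field regression head.
        self.backbone.head = nn.Linear(self.backbone.head.in_features, out_pixels)

    def forward(self, clip):                  # clip: (B, 3, T, H, W)
        return self.backbone(clip).view(-1, 1, 64, 64)

frames = torch.randn(1, 3, 8, 64, 64)        # 8 past frames
next_frame = VideoForecaster()(frames)       # predicted (1, 1, 64, 64) field
```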
- TransWeather: Transformer-based Restoration of Images Degraded by Adverse Weather Conditions [77.20136060506906]
We propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder.
TransWeather achieves significant improvements across multiple test datasets over the All-in-One network as well as methods fine-tuned for specific restoration tasks.
It is validated on real-world test images and found to be more effective than previous methods.
arXiv Detail & Related papers (2021-11-29T18:57:09Z)
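The single-encoder/single-decoder design can be miniaturized as below. The real TransWeather uses transformer blocks and learned weather-type queries; this toy residual restorer omits both, so take it as a structural sketch only.

```python
# Toy single-encoder/single-decoder restorer for all weather types.
import torch
import torch.nn as nn

class TinyRestorer(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.GELU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1))

    def forward(self, degraded):
        # Predict a residual correction: clean = degraded + correction.
        return degraded + self.decoder(self.encoder(degraded))

rainy = torch.rand(1, 3, 128, 128)
restored = TinyRestorer()(rainy)    # one model, any degradation type
```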
- OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression [49.10945855716001]
We propose a novel setting that starts from a pretrained deep generative model and compresses the data batches while adapting the model with a dynamical system for only one epoch.
Experimental results show that vanilla OSOA can save significant time versus training bespoke models and space versus using one model for all targets.
arXiv Detail & Related papers (2021-11-02T15:18:25Z)
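The one-shot adaptation idea is easy to sketch: copy a pretrained density model, fine-tune it on the incoming batch for exactly one epoch, and drive the entropy coder with the adapted model. Everything below, including the diagonal-Gaussian stand-in for a deep generative model, is an illustrative assumption.

```python
# One-shot online adaptation sketch with a toy density model.
import copy
import math
import torch
import torch.nn as nn

class ToyDensityModel(nn.Module):
    """Diagonal-Gaussian stand-in for a deep generative model."""
    def __init__(self, dim=16):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(dim))
        self.log_sigma = nn.Parameter(torch.zeros(dim))

    def bits_per_dim(self, x):
        # Negative log-likelihood of a diagonal Gaussian, in bits.
        nll = (0.5 * ((x - self.mu) / self.log_sigma.exp()) ** 2
               + self.log_sigma + 0.5 * math.log(2 * math.pi))
        return nll.mean() / math.log(2)

pretrained = ToyDensityModel()
batch = torch.randn(256, 16) * 2 + 1    # data the model was not fit to

adapted = copy.deepcopy(pretrained)      # keep the base model intact
opt = torch.optim.SGD(adapted.parameters(), lr=0.1)
for x in batch.split(32):                # exactly one pass = one epoch
    opt.zero_grad()
    loss = adapted.bits_per_dim(x)
    loss.backward()
    opt.step()

# The adapted model assigns shorter codelengths to this batch; an entropy
# coder driven by it would realize the savings on disk.
print(pretrained.bits_per_dim(batch), adapted.bits_per_dim(batch))
```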
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.