Earthformer: Exploring Space-Time Transformers for Earth System
Forecasting
- URL: http://arxiv.org/abs/2207.05833v1
- Date: Tue, 12 Jul 2022 20:52:26 GMT
- Title: Earthformer: Exploring Space-Time Transformers for Earth System
Forecasting
- Authors: Zhihan Gao, Xingjian Shi, Hao Wang, Yi Zhu, Yuyang Wang, Mu Li,
Dit-Yan Yeung
- Abstract summary: We propose Earthformer, a space-time Transformer for Earth system forecasting.
The Transformer is based on a generic, flexible and efficient space-time attention block, named Cuboid Attention.
Experiments on two real-world benchmarks about precipitation nowcasting and El Niño/Southern Oscillation (ENSO) forecasting show Earthformer achieves state-of-the-art performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventionally, Earth system (e.g., weather and climate) forecasting relies
on numerical simulation with complex physical models and is hence both
expensive in computation and demanding on domain expertise. With the explosive
growth of the spatiotemporal Earth observation data in the past decade,
data-driven models that apply Deep Learning (DL) are demonstrating impressive
potential for various Earth system forecasting tasks. The Transformer as an
emerging DL architecture, despite its broad success in other domains, has
limited adoption in this area. In this paper, we propose Earthformer, a
space-time Transformer for Earth system forecasting. Earthformer is based on a
generic, flexible and efficient space-time attention block, named Cuboid
Attention. The idea is to decompose the data into cuboids and apply
cuboid-level self-attention in parallel. These cuboids are further connected
with a collection of global vectors. We conduct experiments on the MovingMNIST
dataset and a newly proposed chaotic N-body MNIST dataset to verify the
effectiveness of cuboid attention and figure out the best design of
Earthformer. Experiments on two real-world benchmarks about precipitation
nowcasting and El Niño/Southern Oscillation (ENSO) forecasting show Earthformer
achieves state-of-the-art performance.
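The core idea of Cuboid Attention, as described in the abstract, can be sketched in a few lines: partition the space-time tensor into non-overlapping cuboids and run self-attention inside each cuboid independently, so all cuboids are processed in parallel. The sketch below is a minimal illustration of that partitioning, not the authors' implementation; the cuboid size, the choice of `nn.MultiheadAttention`, and the omission of the global vectors and alternative decomposition strategies are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class CuboidSelfAttention(nn.Module):
    """Minimal sketch of cuboid-level self-attention (hypothetical).

    The input (B, T, H, W, C) is split into non-overlapping cuboids of
    shape (t, h, w); standard multi-head self-attention runs inside each
    cuboid independently. The paper's global vectors and multiple
    decomposition strategies are omitted here for brevity.
    """
    def __init__(self, dim, cuboid=(2, 4, 4), heads=4):
        super().__init__()
        self.cuboid = cuboid
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        B, T, H, W, C = x.shape
        t, h, w = self.cuboid
        # Partition into cuboids: (B * nT * nH * nW, t*h*w, C)
        x = x.view(B, T // t, t, H // h, h, W // w, w, C)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, t * h * w, C)
        out, _ = self.attn(x, x, x)  # attention within each cuboid only
        # Reverse the partition back to (B, T, H, W, C)
        out = out.view(B, T // t, H // h, W // w, t, h, w, C)
        out = out.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, T, H, W, C)
        return out

# Example: a 4-frame 8x8 feature map with 16 channels
x = torch.randn(1, 4, 8, 8, 16)
y = CuboidSelfAttention(16)(x)
print(y.shape)  # torch.Size([1, 4, 8, 8, 16])
```

Because attention is restricted to each (t, h, w) cuboid, the cost is linear in the number of cuboids rather than quadratic in the full sequence length T*H*W, which is what makes the block efficient for dense space-time data.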
Related papers
- MambaDS: Near-Surface Meteorological Field Downscaling with Topography Constrained Selective State Space Modeling [68.69647625472464]
Downscaling, a crucial task in meteorological forecasting, enables the reconstruction of high-resolution meteorological states for target regions.
Previous downscaling methods lacked tailored designs for meteorology and encountered structural limitations.
We propose a novel model called MambaDS, which enhances the utilization of multivariable correlations and topography information.
arXiv Detail & Related papers (2024-08-20T13:45:49Z) - A Scalable Real-Time Data Assimilation Framework for Predicting Turbulent Atmosphere Dynamics [8.012940782999975]
We introduce a generic real-time data assimilation framework and demonstrate its end-to-end performance on the Frontier supercomputer.
This framework comprises two primary modules: an ensemble score filter (EnSF) and a vision transformer-based surrogate.
We demonstrate both the strong and weak scaling of our framework up to 1024 GPUs on the Exascale supercomputer, Frontier.
arXiv Detail & Related papers (2024-07-16T20:44:09Z) - Aurora: A Foundation Model of the Atmosphere [56.97266186291677]
We introduce Aurora, a large-scale foundation model of the atmosphere trained on over a million hours of diverse weather and climate data.
In under a minute, Aurora produces 5-day global air pollution predictions and 10-day high-resolution weather forecasts.
arXiv Detail & Related papers (2024-05-20T14:45:18Z) - Neural Plasticity-Inspired Multimodal Foundation Model for Earth Observation [48.66623377464203]
Our novel approach introduces the Dynamic One-For-All (DOFA) model, leveraging the concept of neural plasticity in brain science.
This dynamic hypernetwork, adjusting to different wavelengths, enables a single versatile Transformer jointly trained on data from five sensors to excel across 12 distinct Earth observation tasks.
arXiv Detail & Related papers (2024-03-22T17:11:47Z) - LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free
Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a huge human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z) - Observation-Guided Meteorological Field Downscaling at Station Scale: A
Benchmark and a New Method [66.80344502790231]
We extend meteorological downscaling to arbitrary scattered station scales and establish a new benchmark and dataset.
Inspired by data assimilation techniques, we integrate observational data into the downscaling process, providing multi-scale observational priors.
Our proposed method outperforms other specially designed baseline models on multiple surface variables.
arXiv Detail & Related papers (2024-01-22T14:02:56Z) - SSIN: Self-Supervised Learning for Rainfall Spatial Interpolation [37.212272184144]
We propose a data-driven self-supervised learning framework for rainfall spatial analysis.
By mining latent spatial patterns from historical data, SpaFormer can learn informative embeddings for raw data and then adaptively model spatial correlations.
Our method outperforms the state-of-the-art solutions in experiments on two real-world rain gauge datasets.
arXiv Detail & Related papers (2023-11-27T04:23:47Z) - Foundation Models for Generalist Geospatial Artificial Intelligence [3.7002058945990415]
This paper introduces a first-of-a-kind framework for the efficient pre-training and fine-tuning of foundational models on extensive data.
We have utilized this framework to create Prithvi, a transformer-based foundational model pre-trained on more than 1TB of multispectral satellite imagery.
arXiv Detail & Related papers (2023-10-28T10:19:55Z) - EarthPT: a time series foundation model for Earth Observation [0.0]
We introduce EarthPT -- an Earth Observation (EO) pretrained transformer.
We demonstrate that EarthPT is an effective forecaster that can accurately predict future pixel-level surface reflectances.
We also demonstrate that embeddings learnt by EarthPT hold semantically meaningful information.
arXiv Detail & Related papers (2023-09-13T18:00:00Z) - A machine learning and feature engineering approach for the prediction
of the uncontrolled re-entry of space objects [1.0205541448656992]
We present the development of a deep learning model for the re-entry prediction of uncontrolled objects in Low Earth Orbit (LEO).
The model is based on a modified version of the Sequence-to-Sequence architecture and is trained on the average altitude profile as derived from a set of Two-Line Element (TLE) data of over 400 bodies.
The novelty of the work consists in introducing in the deep learning model, alongside the average altitude, three new input features: a drag-like coefficient (B*), the average solar index, and the area-to-mass ratio of the object.
arXiv Detail & Related papers (2023-03-17T13:53:59Z) - Predictive World Models from Real-World Partial Observations [66.80340484148931]
We present a framework for learning a probabilistic predictive world model for real-world road environments.
While prior methods require complete states as ground truth for learning, we present a novel sequential training method to allow HVAEs to learn to predict complete states from partially observed states only.
arXiv Detail & Related papers (2023-01-12T02:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.