Crop Classification under Varying Cloud Cover with Neural Ordinary
Differential Equations
- URL: http://arxiv.org/abs/2012.02542v1
- Date: Fri, 4 Dec 2020 11:56:50 GMT
- Title: Crop Classification under Varying Cloud Cover with Neural Ordinary
Differential Equations
- Authors: Nando Metzger, Mehmet Ozgur Turkoglu, Stefano D'Aronco, Jan Dirk
Wegner, Konrad Schindler
- Abstract summary: State-of-the-art methods for crop classification rely on techniques that implicitly assume regular temporal spacing between observations.
We propose to use neural ordinary differential equations (NODEs) in combination with recurrent neural networks (RNNs) to classify crop types in irregularly spaced image sequences.
- Score: 23.93148719731374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical satellite sensors cannot see the Earth's surface through clouds.
Despite the periodic revisit cycle, image sequences acquired by Earth
observation satellites are therefore irregularly sampled in time.
State-of-the-art methods for crop classification (and other time series
analysis tasks) rely on techniques that implicitly assume regular temporal
spacing between observations, such as recurrent neural networks (RNNs). We
propose to use neural ordinary differential equations (NODEs) in combination
with RNNs to classify crop types in irregularly spaced image sequences. The
resulting ODE-RNN models consist of two steps: an update step, where a
recurrent unit assimilates new input data into the model's hidden state; and a
prediction step, in which NODE propagates the hidden state until the next
observation arrives. The prediction step is based on a continuous
representation of the latent dynamics, which has several advantages. At the
conceptual level, it is a more natural way to describe the mechanisms that
govern the phenological cycle. From a practical point of view, it makes it
possible to sample the system state at arbitrary points in time, such that one
can integrate observations whenever they are available, and extrapolate beyond
the last observation. Our experiments show that ODE-RNN indeed improves
classification accuracy over common baselines such as LSTM, GRU, and temporal
convolution. The gains are most prominent in the challenging scenario where
only a few observations are available (i.e., frequent cloud cover). Moreover, we
show that the ability to extrapolate translates to better classification
performance early in the season, which is important for forecasting.
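As a rough illustration of the two-step cycle the abstract describes, here is a minimal PyTorch sketch, assuming a GRU cell for the update step and a small MLP as the latent dynamics, integrated between observations with a fixed-step Euler solver; the class names, network sizes, and choice of solver are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical ODE-RNN sketch (not the authors' code): the update step
# assimilates each observation with a GRU cell; the prediction step evolves
# the hidden state through a learned latent ODE dh/dt = f_theta(h) between
# irregular observation times.
import torch
import torch.nn as nn

class ODERNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes, euler_steps=10):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)  # update step
        self.dynamics = nn.Sequential(                 # latent ODE f_theta
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.euler_steps = euler_steps

    def propagate(self, h, dt):
        # Prediction step: advance the hidden state by dt with explicit Euler.
        # The continuous representation lets us sample the state at any time.
        step = (dt / self.euler_steps).unsqueeze(-1)
        for _ in range(self.euler_steps):
            h = h + step * self.dynamics(h)
        return h

    def forward(self, xs, ts):
        # xs: (batch, seq_len, input_dim) cloud-free observations
        # ts: (batch, seq_len) irregular acquisition times
        batch, seq_len, _ = xs.shape
        h = xs.new_zeros(batch, self.cell.hidden_size)
        t_prev = ts[:, 0]
        for i in range(seq_len):
            h = self.propagate(h, ts[:, i] - t_prev)  # evolve to observation time
            h = self.cell(xs[:, i], h)                # assimilate new observation
            t_prev = ts[:, i]
        return self.classifier(h)
```

Extrapolation beyond the last observation, which the abstract links to early-season classification, would amount to one more call to `propagate` on the final hidden state with the remaining time gap before applying the classifier.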
Related papers
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly
Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Uncovering the Missing Pattern: Unified Framework Towards Trajectory
Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z) - TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z) - Continuous Depth Recurrent Neural Differential Equations [0.0]
We propose continuous depth recurrent neural differential equations (CDR-NDE) to generalize RNN models.
CDR-NDE considers two separate differential equations, one over the temporal dimension and one over the depth dimension, and models the evolution in both directions.
We also propose the CDR-NDE-heat model based on partial differential equations which treats the computation of hidden states as solving a heat equation over time.
arXiv Detail & Related papers (2022-12-28T06:34:32Z) - Learning to Reconstruct Missing Data from Spatiotemporal Graphs with
Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z) - Modeling Irregular Time Series with Continuous Recurrent Units [3.7335080869292483]
We propose continuous recurrent units (CRUs) to handle irregular time intervals between observations.
We show that CRU can better interpolate irregular time series than neural ordinary differential equation (neural ODE)-based models.
We also show that our model can infer dynamics from images and that the Kalman gain efficiently singles out candidates for valuable state updates from noisy observations.
arXiv Detail & Related papers (2021-11-22T16:49:15Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Handling Missing Observations with an RNN-based Prediction-Update Cycle [10.478312054103975]
In tasks such as tracking, time-series data inevitably carry missing observations.
This paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation.
arXiv Detail & Related papers (2021-03-22T11:55:10Z) - Neural Jump Ordinary Differential Equations: Consistent Continuous-Time
Prediction and Filtering [6.445605125467574]
We introduce the Neural Jump ODE (NJ-ODE), which provides a data-driven approach to learning, continuously in time, the conditional expectation of a stochastic process.
We show that our model converges to the $L^2$-optimal online prediction.
We experimentally show that our model outperforms the baselines in more complex learning tasks.
arXiv Detail & Related papers (2020-06-08T16:34:51Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.