Tampered VAE for Improved Satellite Image Time Series Classification
- URL: http://arxiv.org/abs/2203.16149v1
- Date: Wed, 30 Mar 2022 08:48:06 GMT
- Title: Tampered VAE for Improved Satellite Image Time Series Classification
- Authors: Xin Cai, Yaxin Bi, Peter Nicholl
- Abstract summary: Pyramid Time-Series Transformer (PTST) operates solely on the temporal dimension.
We propose a classification-friendly VAE framework that introduces clustering mechanisms into latent space.
We hope the proposed framework can serve as a baseline for crop classification with SITS for its modularity and simplicity.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The unprecedented availability of spatial and temporal high-resolution
satellite image time series (SITS) for crop type mapping is believed to
necessitate deep learning architectures to accommodate challenges arising from
both dimensions. Recent state-of-the-art deep learning models have shown
promising results by stacking spatial and temporal encoders. However, we
show that a Pyramid Time-Series Transformer (PTST) operating solely on the
temporal dimension, i.e., neglecting the spatial dimension, can produce
superior results with a drastic reduction in GPU memory consumption and easy
extensibility. Furthermore, we augment it to perform semi-supervised learning
by proposing a classification-friendly VAE framework that introduces clustering
mechanisms into latent space and can promote linear separability therein.
Consequently, a few principal axes of the latent space can explain the majority
of variance in the raw data. Meanwhile, the VAE framework with the proposed
tweaks maintains classification performance competitive with its purely
discriminative counterpart when only $40\%$ of the labelled data is used. We hope the proposed
framework can serve as a baseline for crop classification with SITS for its
modularity and simplicity.
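The classification-friendly VAE described above can be pictured as a standard ELBO (reconstruction plus KL) augmented with a clustering-style term on the latent code for the labelled subset. The sketch below is a minimal numpy illustration of that kind of semi-supervised objective, assuming linear encoder/decoder maps and a cross-entropy term on the latents; every name, shape, and weighting here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear encoder: map input to latent mean and log-variance (illustrative).
    return x @ W_mu, x @ W_logvar

def elbo_with_cluster_loss(x, y, W_mu, W_logvar, W_dec, W_cls, alpha=1.0):
    """Toy semi-supervised VAE loss: reconstruction + KL + latent classifier term.

    y may be None for unlabelled batches, in which case only the
    unsupervised ELBO terms apply (the semi-supervised setting).
    """
    mu, logvar = encode(x, W_mu, W_logvar)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterisation trick
    x_hat = z @ W_dec                              # linear decoder
    recon = np.mean((x - x_hat) ** 2)              # reconstruction error
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))  # KL to N(0, I)
    loss = recon + kl
    if y is not None:                              # clustering/classification term
        logits = z @ W_cls
        logits -= logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss += alpha * -np.mean(log_probs[np.arange(len(y)), y])
    return loss

# Toy dimensions: 8-dim input, 4-dim latent, 3 classes.
x = rng.standard_normal((16, 8))
y = rng.integers(0, 3, size=16)
W_mu, W_logvar = rng.standard_normal((8, 4)), 0.01 * rng.standard_normal((8, 4))
W_dec, W_cls = rng.standard_normal((4, 8)), rng.standard_normal((4, 3))
print(float(elbo_with_cluster_loss(x, y, W_mu, W_logvar, W_dec, W_cls)))
```

Pushing labelled samples toward class-specific regions of the latent space is one way such a term could promote the linear separability the abstract mentions.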
Related papers
- Paving the way toward foundation models for irregular and unaligned Satellite Image Time Series [0.0]
We propose ALISE (ALIgned SITS), a model that takes into account the spatial, spectral, and temporal dimensions of satellite imagery.
Unlike SSL models currently available for SITS, ALISE incorporates a flexible query mechanism to project the SITS into a common and learned temporal projection space.
The quality of the produced representation is assessed through three downstream tasks: crop segmentation (PASTIS), land cover segmentation (MultiSenGE) and a novel crop change detection dataset.
arXiv Detail & Related papers (2024-07-11T12:42:10Z) - OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning [57.43911113915546]
Few-Shot Class-Incremental Learning (FSCIL) introduces a paradigm in which the problem space expands with limited data.
FSCIL methods inherently face the challenge of catastrophic forgetting as data arrives incrementally.
We propose the OrCo framework built on two core principles: features' orthogonality in the representation space, and contrastive learning.
arXiv Detail & Related papers (2024-03-27T13:30:48Z) - Disentangling Spatial and Temporal Learning for Efficient Image-to-Video
Transfer Learning [59.26623999209235]
We present DiST, which disentangles the learning of spatial and temporal aspects of videos.
The disentangled learning in DiST is highly efficient because it avoids the back-propagation of massive pre-trained parameters.
Extensive experiments on five benchmarks show that DiST delivers better performance than existing state-of-the-art methods by convincing margins.
arXiv Detail & Related papers (2023-09-14T17:58:33Z) - GaitASMS: Gait Recognition by Adaptive Structured Spatial Representation
and Multi-Scale Temporal Aggregation [2.0444600042188448]
Gait recognition is one of the most promising video-based biometric technologies.
We propose a novel gait recognition framework, denoted as GaitASMS.
It can effectively extract the adaptive structured spatial representations and naturally aggregate the multi-scale temporal information.
arXiv Detail & Related papers (2023-07-29T13:03:17Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Revisiting the Encoding of Satellite Image Time Series [2.5874041837241304]
Satellite Image Time Series (SITS) temporal learning is complex due to high temporal resolutions and irregular acquisition times.
We develop a novel perspective of SITS processing as a direct set prediction problem, inspired by the recent trend in adopting query-based transformer decoders.
We attain new state-of-the-art (SOTA) results on the PASTIS benchmark dataset.
arXiv Detail & Related papers (2023-05-03T12:44:20Z) - ViTs for SITS: Vision Transformers for Satellite Image Time Series [52.012084080257544]
We introduce a fully-attentional model for general Satellite Image Time Series (SITS) processing based on the Vision Transformer (ViT).
TSViT splits a SITS record into non-overlapping patches in space and time which are tokenized and subsequently processed by a factorized temporo-spatial encoder.
arXiv Detail & Related papers (2023-01-12T11:33:07Z) - STIP: A SpatioTemporal Information-Preserving and Perception-Augmented
Model for High-Resolution Video Prediction [78.129039340528]
We propose a SpatioTemporal Information-Preserving and Perception-Augmented Model (STIP) to solve the above two problems.
The proposed model aims to preserve the spatiotemporal information for videos during the feature extraction and the state transitions.
Experimental results show that the proposed STIP can predict videos with more satisfactory visual quality compared with a variety of state-of-the-art methods.
arXiv Detail & Related papers (2022-06-09T09:49:04Z) - Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST), which embeds hyperspectral image (HSI) sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selecting. Then the selected patches are fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
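The factorized tokenization described in the TSViT entry above, splitting a SITS record into non-overlapping patches in space and time, can be sketched as follows. The patch sizes, array layout, and function name are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def tokenize_sits(x, t_patch=2, h_patch=4, w_patch=4):
    """Split a SITS record into non-overlapping space-time patches.

    x: array of shape (T, H, W, C), assumed divisible by the patch sizes.
    Returns tokens of shape (num_patches, t_patch * h_patch * w_patch * C),
    ready to be fed to a temporo-spatial transformer encoder.
    """
    T, H, W, C = x.shape
    assert T % t_patch == 0 and H % h_patch == 0 and W % w_patch == 0
    x = x.reshape(T // t_patch, t_patch,
                  H // h_patch, h_patch,
                  W // w_patch, w_patch, C)
    # Bring the patch-index axes together, then flatten each patch to a token.
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    return x.reshape(-1, t_patch * h_patch * w_patch * C)

# Toy record: 6 acquisitions of 8x8 images with 10 spectral bands.
tokens = tokenize_sits(np.zeros((6, 8, 8, 10)))
print(tokens.shape)  # (3 * 2 * 2 patches, 2*4*4*10 features) -> (12, 320)
```

A factorized encoder would then attend over such tokens along the temporal axis first and the spatial axis second (or vice versa), rather than over all tokens jointly.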
This list is automatically generated from the titles and abstracts of the papers in this site.